

Recent posts

ElasticSearch [FORBIDDEN/12/index read-only / allow delete (api)]

While trying to index a document, ElasticSearch may throw the following exception:

elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'cluster_block_exception', 'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')

This means that ElasticSearch has placed a cluster-wide write lock on all indices. As a temporary solution, you can unlock writes to the cluster (all indices) using:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

However, the indices may get locked again because the underlying problem has not been fixed. Most often it is caused by exceeding the disk watermark / quota. In its default configuration, ElasticSearch will not allocate any more disk space when more than 90% of the disk is used overall (i.e. by ElasticSearch or other applications). We can adjust this watermark using curl
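The watermark logic described above is simple enough to sketch. The helper below is illustrative only (not part of any ElasticSearch client library); the 90% figure is the default described in the post, and the JSON body is the one the curl command sends:

```python
import json

# Illustrative default from the post: ES blocks writes above 90% disk usage.
DEFAULT_WATERMARK = 0.90

def watermark_exceeded(used_bytes, total_bytes, watermark=DEFAULT_WATERMARK):
    """Return True if disk usage is past the watermark, i.e. ES would lock writes."""
    return used_bytes / total_bytes > watermark

# The JSON body the curl command above sends to clear the read-only block.
unblock_body = json.dumps({"index.blocks.read_only_allow_delete": None})

# A 100 GB disk with 91 GB used is past the 90% watermark.
print(watermark_exceeded(91, 100))   # True
print(watermark_exceeded(85, 100))   # False
```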

Securing ElasticSearch / Kafka clusters with SSL

By default, there is no encryption, authentication, or ACLs configured in Elasticsearch/Kafka. Any client can communicate with ES nodes / Kafka brokers via the PLAINTEXT port. It is critical that access via this port is restricted to trusted clients only. Network segmentation and/or authorization ACLs can be used to restrict access to trusted IPs in such cases. If neither is used, the cluster is wide open and can be accessed by anyone. While non-secured clusters are supported, as is a mix of authenticated, unauthenticated, encrypted, and non-encrypted clients, it is recommended to secure the components in your cluster. Secure Sockets Layer (SSL) is the predecessor of Transport Layer Security (TLS), and SSL has been deprecated since June 2015. However, people generally still use the term SSL instead of TLS in configuration and code. SSL can be configured for encryption or authentication. You may configure just SSL encryption (by default SSL encryption includes certificate authentication)
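As a sketch of what broker-side SSL configuration can look like on the Kafka side, here is a minimal server.properties fragment; the host name, paths, and passwords are placeholders, and the official Kafka security documentation covers the full set of options:

```properties
# Accept TLS ("SSL") connections; keep PLAINTEXT only on trusted networks, if at all.
listeners=SSL://kafka-broker1.example.com:9093
security.inter.broker.protocol=SSL

# Broker identity: the keystore holds this broker's certificate and private key.
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit

# The truststore holds the CA certificate(s) used to verify peers.
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit

# Set to "required" to also authenticate clients by certificate (mutual TLS).
ssl.client.auth=required
```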

Logstash throws error while installing plugins

While trying to install a Logstash plugin, I was getting the following error:

$ /work/logstash/logstash-5.5.2/bin/logstash-plugin install logstash-input-cloudwatch
WARNING: A maven settings file already exist at ~/.m2/settings.xml, please review the content to make sure it include your proxies configuration.
Validating logstash-input-cloudwatch
Installing logstash-input-cloudwatch
Error Bundler::InstallError, retrying 1/10
An error occurred while installing logstash-core (5.5.2), and Bundler cannot continue.
Make sure that `gem install logstash-core -v '5.5.2'` succeeds before bundling.
Error Bundler::InstallError, retrying 2/10
An error occurred while installing logstash-core (5.5.2), and Bundler cannot continue.
Make sure that `gem install logstash-core -v '5.5.2'` succeeds before bundling.

Here are the things I did to make it work: Created the Maven ~/.m2/settings.xml file:

<?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.ap
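For reference, the proxy configuration the warning refers to lives in ~/.m2/settings.xml. A minimal sketch of such a file is below; the host, port, and id are placeholders to replace with your own proxy details:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <proxies>
    <!-- Placeholder proxy entry; use your corporate proxy's host and port. -->
    <proxy>
      <id>corporate-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>3128</port>
    </proxy>
  </proxies>
</settings>
```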

Read and parse CSV containing Key-value pairs using Akka Streams

Let's say we want to read and parse a CSV file containing key-value pairs. We will be using Alpakka's CsvParsing for this. A snippet of a file (src/main/resources/CountryNicCurrencyKeyValueMap.csv) that shows the mapping from country NIC code to currency code, with pipe (|) as the field delimiter:

AD|EUR
AE|AED
AF|AFN
AG|XCD
AI|XCD
AL|ALL
AM|AMD
AN|ANG
AO|AOA
AQ|AQD
AR|ARS
AS|EUR
AT|EUR
AU|AUD
AW|ANG
AX|EUR
AZ|AZN
BA|BAM
BB|BBD
BD|BDT
BE|EUR
BF|XOF
BG|BGN
BH|BHD
BI|BIF
BJ|XOF
BL|EUR
BM|BMD
BN|BND
BO|BOB
BR|BRL
BS|BSD
BT|INR

Following is the code:

import java.io.File
import java.nio.charset.StandardCharsets

import akka.actor.ActorSystem
import akka.stream._
import akka.stream.alpakka.csv.scaladsl.CsvParsing
import akka.stream.scaladsl.{FileIO, Flow, Sink}
import akka.util.ByteString

import scala.collection.immutable
import scala.concurrent.{ExecutionContext, _}
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("TestApplication")
implicit

Kafka performance tuning

Performance tuning of Kafka is critical when your cluster grows in size. Below are a few points to consider to improve Kafka performance:

Consumer group ID: Never use the same exact consumer group ID for dozens of machines consuming from different topics. All of those commits will end up on the same exact partition of __consumer_offsets, hence the same broker, and this might in turn cause performance problems. Choose the consumer group ID as group_id+topic_name.

Skewed: A broker is skewed if its number of partitions is greater than the average number of partitions per broker on the given topic. Example: 2 brokers share 4 partitions; if one of them has 3 partitions, it is skewed (3 > 2). Try to make sure that none of the brokers is skewed.

Spread: Broker spread is the percentage of brokers in the cluster that have partitions for the given topic. Example: 3 brokers share a topic that has 2 partitions, so 66% of the brokers have partitions for this topic. Try to achieve 100% broker spread
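The skew and spread definitions above are easy to make concrete. The helpers below are an illustrative sketch, not part of any Kafka API; they operate on a plain partition-to-broker assignment map of the kind you could read off a topic-describe listing:

```python
def broker_partition_counts(assignment):
    """Count partitions hosted per broker for one topic.
    assignment maps partition id -> list of broker ids holding a replica."""
    counts = {}
    for brokers in assignment.values():
        for b in brokers:
            counts[b] = counts.get(b, 0) + 1
    return counts

def skewed_brokers(assignment):
    """Brokers carrying more partitions than the per-broker average for this topic."""
    counts = broker_partition_counts(assignment)
    avg = sum(counts.values()) / len(counts)
    return sorted(b for b, n in counts.items() if n > avg)

def spread(assignment, cluster_size):
    """Percentage of brokers in the cluster hosting at least one partition."""
    return 100 * len(broker_partition_counts(assignment)) / cluster_size

# The post's skew example: 2 brokers share 4 partitions, broker 1 holds 3 of them.
skew_example = {0: [1], 1: [1], 2: [1], 3: [2]}
print(skewed_brokers(skew_example))      # [1]  (3 > average of 2)

# The post's spread example: 2 partitions on 2 of 3 brokers -> 66% spread.
spread_example = {0: [1], 1: [2]}
print(int(spread(spread_example, 3)))    # 66
```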

Migrating ElasticSearch 2.x to ElasticSearch 5.x

In my previous blog post, I described how to install and configure an ElasticSearch 5.x cluster. In this blog post, we will look at how to migrate data. Consult this table to verify that rolling upgrades are supported for your version of Elasticsearch.

Full cluster upgrade (2.x to 5.x)

We will have to do a full cluster upgrade and restart. Install the Elasticsearch Migration Helper on the old cluster. This plugin will help you check whether you can upgrade directly to the next major version of Elasticsearch, or whether you need to make changes to your data and cluster before doing so.

cd /work/elk/elasticsearch-2.4.3/
curl -O -L https://github.com/elastic/elasticsearch-migration/releases/download/v2.0.4/elasticsearch-migration-2.0.4.zip
./bin/plugin install file:///work/elk/elasticsearch-2.4.3/elasticsearch-migration-2.0.4.zip

Start the old ElasticSearch:

./bin/elasticsearch &

Browse elasticsearch-migration. Click on "Cluster Checkup" > "Run checks now". C
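After an upgrade it is worth confirming the version the cluster actually reports. A small illustrative helper, parsing the JSON document that a GET on an Elasticsearch node's root endpoint returns (fetching it over HTTP is left out; the sample payload is abbreviated):

```python
import json

def major_version(info_json):
    """Extract the major version from Elasticsearch's root-endpoint response."""
    return int(info_json["version"]["number"].split(".")[0])

# Abbreviated shape of what `curl http://localhost:9200/` returns.
sample = json.loads('{"name": "node-1", "version": {"number": "5.5.2"}}')
print(major_version(sample))  # 5
```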