
ElasticSearch [FORBIDDEN/12/index read-only / allow delete (api)]

While trying to index a document, if ElasticSearch throws the following exception:
elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'cluster_block_exception', 'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')
then ElasticSearch has placed a cluster-wide block on writes to all indices.

As a temporary fix, you can unlock writes to the cluster (all indices) using:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
but the indices may soon be locked again, because the underlying problem has not been fixed.
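Since the traceback above comes from the Python client, the same unlock call can also be issued from Python. Here is a minimal sketch using only the standard library (the unlock_request helper is made up for illustration); it builds the same PUT request as the curl command above:

```python
import json
import urllib.request

def unlock_request(host="http://localhost:9200"):
    """Build the PUT request that clears the read-only block on all indices.

    Setting the value to None (serialised as JSON null) removes the
    setting, just like the curl call above.
    """
    body = json.dumps({"index.blocks.read_only_allow_delete": None}).encode()
    return urllib.request.Request(
        f"{host}/_all/_settings",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# To actually send it against a live cluster:
# urllib.request.urlopen(unlock_request())
```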

Most often this is caused by exceeding the disk watermark / quota.
In its default configuration, ElasticSearch stops allocating shards to a node when more than 90% of its disk is used overall (i.e. by ElasticSearch or any other application), and it applies the read-only block above once usage crosses the 95% flood stage.
We can instead express the watermarks as absolute free-space values, here set extremely low, using:
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "3gb",
    "cluster.routing.allocation.disk.watermark.high": "2gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
    "cluster.info.update.interval": "1m"
  }
}
'

After doing that, you might still need to unlock the cluster for write access (as mentioned above) if the watermark had previously been exceeded.

NOTE: Do not set the values too low, because that might cause issues at the OS level: more important applications may no longer be able to allocate disk space properly.
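When the watermarks are given as absolute values, they are thresholds on remaining free space, so it is worth translating them into used-disk percentages for your disk size before applying them. A quick sanity check (the helper name is made up for illustration), using the 275.4 GB disks from the example output below:

```python
def watermark_percent_used(disk_total_gb, free_gb_threshold):
    """Percentage of the disk that is used when only
    `free_gb_threshold` GB remain free."""
    return 100.0 * (disk_total_gb - free_gb_threshold) / disk_total_gb

# With the transient settings above on a 275.4 GB disk, the read-only
# flood-stage block would only trigger when the disk is ~99.6% full:
for name, free_gb in [("low", 3), ("high", 2), ("flood_stage", 1)]:
    print(f"{name}: trips at {watermark_percent_used(275.4, free_gb):.1f}% used")
```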

To view the current disk usage, use:
curl -XGET "http://localhost:9200/_cat/allocation?v&pretty"

Example output for a 4-node cluster looks like:
shards disk.indices disk.used disk.avail disk.total disk.percent                       host          ip      node
   167          6gb    79.4gb      196gb    275.4gb           28 node1.singhaiuklimited.com 198.168.1.2 node1_mid
   167        4.9gb   163.6gb    111.8gb    275.4gb           59 node2.singhaiuklimited.com 198.168.1.3  node2_id
   167       10.7gb    65.1gb    210.3gb    275.4gb           23 node3.singhaiuklimited.com 198.168.1.4  node3_md
   167        8.4gb    50.7gb    224.6gb    275.4gb           18 node4.singhaiuklimited.com 198.168.1.5  node4_md

  • shards: 167: The number of shards allocated to this node; here each of the four nodes holds 167 shards.
  • disk.indices: 6gb: The node node1_mid currently uses 6 GB of disk space for its indices.
  • disk.used: 79.4gb: The disk ElasticSearch stores its data on has 79.4 GB of used space. This does not mean that ElasticSearch uses all of that space; other applications (including the OS) may account for part of it.
  • disk.avail: 196gb: The disk ElasticSearch stores its data on has 196 GB of free space. Remember that this value can shrink even when ElasticSearch itself writes nothing, since other applications may also consume disk space, depending on how you set up ElasticSearch.
  • disk.total: 275.4gb: The disk ElasticSearch stores its data on has a total size of 275.4 GB (the file-system's capacity, regardless of how much is currently in use).
  • disk.percent: 28: Currently 28% of the total disk space (disk.total) is used. This value is always rounded to whole percentages.
  • host, ip, node: Which node this line is referring to.
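The same allocation data is available as JSON (append format=json to the _cat/allocation URL), which makes it easy to script an early warning before a node trips the flood-stage watermark. A minimal sketch, assuming the fields shown in the table above (the function name and the 90% default threshold are illustrative choices, not ElasticSearch settings):

```python
import json

def nodes_over_threshold(allocation_json, max_percent=90):
    """Return the nodes whose disk usage exceeds `max_percent`.

    `allocation_json` is the body of GET /_cat/allocation?format=json:
    a list of dicts with string-valued fields such as "disk.percent".
    Rows without a disk.percent value (e.g. UNASSIGNED) are skipped.
    """
    return [
        row["node"]
        for row in json.loads(allocation_json)
        if row.get("disk.percent") and int(row["disk.percent"]) > max_percent
    ]

# Sample payload matching the table above:
sample = json.dumps([
    {"node": "node1_mid", "disk.percent": "28"},
    {"node": "node2_id", "disk.percent": "59"},
    {"node": "node3_md", "disk.percent": "23"},
    {"node": "node4_md", "disk.percent": "18"},
])
print(nodes_over_threshold(sample, max_percent=50))  # only node2_id crosses 50%
```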
