While trying to index a document, ElasticSearch may throw the following exception:
elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'cluster_block_exception', 'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')
This means that ElasticSearch has locked writes to all indices cluster-wide.
As a temporary solution, you can unlock writes to the cluster (all indexes) using:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
However, the indices might get locked again soon, because the underlying problem has not been fixed.
Most often this is caused by exceeding the disk watermark / quota.
In its default configuration, ElasticSearch will not allocate any more disk space when more than 90% of the disk is used overall (i.e. by ElasticSearch or other applications).
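To see which watermark values your cluster is actually using (the exact defaults can vary between versions), you can inspect the cluster settings including the built-in defaults, for example:
curl -XGET "http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty"
Look for the cluster.routing.allocation.disk.watermark.* keys in the output.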
You can lower these watermarks to very permissive absolute values (when given as byte values, as below, they specify the minimum free disk space that must remain) using:
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "3gb",
    "cluster.routing.allocation.disk.watermark.high": "2gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
    "cluster.info.update.interval": "1m"
  }
}
'
After doing that, you might need to unlock your cluster for writes again (as mentioned above) if you had previously exceeded the watermark:
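curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'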
NOTE: Do not set the values too low, because that can cause problems at the OS level: more important applications would no longer be able to allocate disk space properly.
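Once enough disk space has been freed up again, you will probably want to revert the temporary overrides. A minimal sketch, assuming you only changed the transient settings shown above (setting a transient setting to null restores its default):
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.info.update.interval": null
  }
}
'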
To view the current disk usage, use:
curl -XGET "http://localhost:9200/_cat/allocation?v&pretty"
Example output for a 4-node cluster looks like this:
shards disk.indices disk.used disk.avail disk.total disk.percent host                       ip          node
   167          6gb    79.4gb      196gb    275.4gb           28 node1.singhaiuklimited.com 198.168.1.2 node1_mid
   167        4.9gb   163.6gb    111.8gb    275.4gb           59 node2.singhaiuklimited.com 198.168.1.3 node2_id
   167       10.7gb    65.1gb    210.3gb    275.4gb           23 node3.singhaiuklimited.com 198.168.1.4 node3_md
   167        8.4gb    50.7gb    224.6gb    275.4gb           18 node4.singhaiuklimited.com 198.168.1.5 node4_md
- shards: 167: The number of primary and replica shards allocated to this node; in this example, each of the 4 nodes holds 167 shards.
- disk.indices: 6gb: The node node1_mid in the cluster currently uses 6 GB of disk space for indexes.
- disk.used: 79.4gb: The disk ElasticSearch will store its data on has 79.4 GB of used space. This does not mean that ElasticSearch uses all of that space; other applications (including the OS) might also use (part of) it.
- disk.avail: 196gb: The disk ElasticSearch will store its data on has 196 GB of free space. Remember that this does not shrink only when ElasticSearch writes data to said disk; other applications might also consume some of the disk space, depending on how you set up ElasticSearch.
- disk.total: 275.4gb: The disk ElasticSearch will store its data on has a total size of 275.4 GB (total size meaning the capacity of the file system, regardless of how much of it is currently used).
- disk.percent: 28: Currently 28% of the total disk space available (disk.total) is used. This value is always rounded to full percentages.
- host, ip, node: Which node this line is referring to.
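If ElasticSearch itself is the main consumer of disk space, it can also help to find out which indices are the biggest. One way to do this (assuming a reasonably recent ElasticSearch version that supports the s sort parameter on the cat APIs) is:
curl -XGET "http://localhost:9200/_cat/indices?v&s=store.size:desc"
This lists all indices sorted by their store size, so you can decide which ones to shrink or delete.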