When running low on disk space on Aiven Elasticsearch, there are two possible actions:
- Upgrade to a larger plan from the Aiven console or with the Aiven CLI client, or
- Clean up unnecessary indexes, when possible.
For example, for logs it is often beneficial to create separate daily indexes, allowing easy and efficient clean-up of the oldest data.
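As a sketch of that clean-up, assuming daily indexes named `logs-YYYY.MM.DD` and a hypothetical 30-day retention window, the index that has aged out can be computed from the current date:

```shell
#!/bin/sh
# Sketch: find the daily log index that has aged out of retention.
# The logs- prefix and the 30-day window are assumptions for illustration.
retention_days=30
old_index="logs-$(date -u -d "$retention_days days ago" +%Y.%m.%d)"

# Deleting a whole daily index is much cheaper than deleting individual
# documents from one large index; printed here as the command to run:
echo "curl -X DELETE https://avnadmin:your-password@your-server-demoprj.aivencloud.com:12345/$old_index"
```

Deleting an entire index frees its disk space right away, whereas deleting individual documents only marks them for removal during later segment merges.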
If the service runs low on disk space, three different Elasticsearch mechanisms take effect:
- `cluster.routing.allocation.disk.watermark.low`: defaults to 85% of disk space used. When this limit is exceeded, Elasticsearch stops allocating new shards to the server. On a single-server Elasticsearch this has no effect. On a multi-server cluster, the amount of data is not always equally distributed, in which case this helps balance disk usage between servers.
- `cluster.routing.allocation.disk.watermark.high`: defaults to 90% of disk space used. Elasticsearch actively tries to move shards to other servers with more available disk space. On a single-server Elasticsearch this has no effect.
- `cluster.routing.allocation.disk.watermark.flood_stage`: defaults to 95% of disk space used. Elasticsearch marks all indexes hosted on the server exceeding this limit as read-only, while still allowing deletes (`index.blocks.read_only_allow_delete`).
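To make the default percentages concrete, this small sketch computes where each watermark falls for a hypothetical 350 GB data disk (the disk size is an assumption for illustration):

```shell
#!/bin/sh
# Sketch: absolute usage at which each default watermark trips,
# for a hypothetical 350 GB data disk.
disk_gb=350
low=$(( disk_gb * 85 / 100 ))    # new shards no longer allocated here
high=$(( disk_gb * 90 / 100 ))   # shards actively moved to other servers
flood=$(( disk_gb * 95 / 100 ))  # indexes on this server become read-only
echo "low: ${low} GB, high: ${high} GB, flood_stage: ${flood} GB"
```

Per-server disk usage can be compared against these limits with the `_cat/allocation?v` API, which reports the used and total disk space for each node.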
If the `low` or `high` threshold is exceeded and data is then cleaned up, no further action is needed, as Elasticsearch will continue allowing writes.
If `flood_stage` is exceeded, you must manually unset `index.blocks.read_only_allow_delete` for each affected index. This is done by updating the index settings:
curl https://avnadmin:your-password@your-server-demoprj.aivencloud.com:12345/indexname/_settings -X PUT -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'
This needs to be done separately for each index that was marked as read-only by Elasticsearch.
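If many indexes were marked read-only, one option is the `_all` wildcard that Elasticsearch index APIs accept, which resets every index in a single request. A minimal sketch, using the same placeholder URL and credentials as above and printing the command rather than executing it:

```shell
#!/bin/sh
# Sketch: reset index.blocks.read_only_allow_delete on all indexes at
# once via the _all wildcard. URL and credentials are placeholders.
ES_URL="https://avnadmin:your-password@your-server-demoprj.aivencloud.com:12345"
body='{"index.blocks.read_only_allow_delete": null}'
cmd="curl $ES_URL/_all/_settings -X PUT -H 'Content-Type: application/json' -d '$body'"
echo "$cmd"   # run the printed command once disk space has been freed
```

Only run this after disk space has actually been freed; otherwise Elasticsearch will simply mark the indexes read-only again once the flood-stage watermark is exceeded.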
Aiven does not unset this option automatically, to avoid indexes flipping back and forth between read-only and read-write states.