Hello,
I'm getting a weird error:
[2023-01-17T18:18:32.398997+00:00] main.ERROR: Child process failed with message: Elasticsearch engine returned an error response. item id: 20. Error type: "cluster_block_exception", reason "index [product_1_v1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];".
This is a local DEV machine that is running multiple services.
ES version:
test:/home/test/public_html$ curl -X GET "localhost:9200"
{
"name" : "test-1",
"cluster_name" : "magento-test",
"cluster_uuid" : "b8-gIJwoQP6UijiKfUY2-g",
"version" : {
"number" : "7.17.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "bee86328705acaa9a6daede7140defd4d9ec56bd",
"build_date" : "2022-01-28T08:36:04.875279988Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
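For context, my understanding is that Elasticsearch applies this read-only-allow-delete block automatically once disk usage crosses the flood-stage watermark, so I first checked free space on the node. This is just what I ran locally (a sketch; output depends on the machine):

```shell
# Overall disk usage on the machine
df -h

# How much disk Elasticsearch itself sees per node
# (columns include disk.used, disk.avail and disk.percent)
curl -s "localhost:9200/_cat/allocation?v&pretty"
```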
I have tried the suggestions in this article:
But I'm not able to apply the settings:
test:/home/test/public_html$ curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.disk.watermark.low": "90%",
"cluster.routing.allocation.disk.watermark.low.max_headroom": "50GB",
"cluster.routing.allocation.disk.watermark.high": "95%",
"cluster.routing.allocation.disk.watermark.high.max_headroom": "20GB",
"cluster.routing.allocation.disk.watermark.flood_stage": "97%",
"cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": "5GB",
"cluster.routing.allocation.disk.watermark.flood_stage.frozen": "97%",
"cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": "5GB"
}
}'
Output is:
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "persistent setting [cluster.routing.allocation.disk.watermark.flood_stage.max_headroom], not recognized"
}
],
"type" : "illegal_argument_exception",
"reason" : "persistent setting [cluster.routing.allocation.disk.watermark.flood_stage.max_headroom], not recognized"
},
"status" : 400
}
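If I'm reading the error right, the `max_headroom` variants of these settings may only exist in newer Elasticsearch releases (they appear in the 8.x docs but I can't find them for 7.17), which would explain the "not recognized" response. Would a trimmed request like this, dropping the `max_headroom` keys and keeping only the percentage watermarks, be the right approach on 7.17?

```shell
# Same watermark values as above, but without the max_headroom
# keys that 7.17 rejects (assumption: those are 8.x-only)
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": "97%"
  }
}'
```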
Deleting the shards did not help.
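One thing I haven't tried yet: after freeing enough disk space, clearing the read-only-allow-delete block on the affected index directly. If I understand the index settings API, it would look something like this (index name taken from the error above):

```shell
# Remove the read_only_allow_delete block from the blocked index
curl -X PUT "localhost:9200/product_1_v1/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.blocks.read_only_allow_delete": null
}'
```

Would that be safe to run, or does Elasticsearch lift the block on its own once disk usage drops back below the watermark?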
Please let me know if there are any other things I should try.
Thanks!