Elasticsearch bulk insert not OK using curl: cluster_block_exception error message

Hello,
I'm trying to use curl to call the Elasticsearch bulk API.
The call looks like this:
curl -XPOST "http://localhost:9200/myindex/mytype/_bulk" --data-binary @req -H "Content-Type: application/x-ndjson" --user elastic:elastic_pwd

req is a file whose content looks like this:
{ "index" : {} }
{"ServerName":"WIN-562V873OTU7","DateAction":"2017-12-21T13:38:09","ID":"Simple ID","Timestamp":"2017-12-21T13:38:06","Value":"47"}

The last line in req ends with \n.

I get an error message:
{"took":15,"errors":true,"items":[{"index":{"_index":"myindex","_type":"mytype","_id":"7kKteWABgkkg9WN7eY86","status":403,"error":{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}}}]}

I found a topic on that issue; the solution was to send a PUT to
http://localhost:9200/myindex/_settings/
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
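
In curl form, assuming the same local node and elastic:elastic_pwd credentials as in the bulk call, that is:

curl -XPUT "http://localhost:9200/myindex/_settings" -H "Content-Type: application/json" --user elastic:elastic_pwd -d '{"index":{"blocks":{"read_only_allow_delete":"false"}}}'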
After this is done, the curl call succeeds.
But on the next curl call, I get the error message again (which is again resolved by sending the PUT request to the URL ___/_settings).

I don't want to send this request each time. That's why I tried to set the following in elasticsearch.yml:
index.blocks.read_only_allow_delete: false

When restarting Elasticsearch, I get a clear message indicating that this setting is not accepted:
Found index level settings on node level configuration.

Since elasticsearch 5.x index level settings can NOT be set on the nodes
configuration like the elasticsearch.yaml, in system properties or command line
arguments. In order to upgrade all indices the settings must be updated via the
/${index}/_settings API. Unless all settings are dynamic all indices must be closed
in order to apply the upgrade. Indices created in the future should use index templates
to set default values.

Please ensure all required values are updated on all indices by executing:

curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
"index.blocks.read_only_allow_delete" : "false"
}'

What should I do in this case?

Regards
Driss

Hi @Amouzigh_Driss,

It is not expected behaviour to have to set read_only_allow_delete: false after every request; your index should either be read-only or not.

Is there any other process that might be setting this index to read-only?

Cheers,
LG

Elasticsearch will only automatically change read_only_allow_delete to true starting with version 6.0, when it hits the flood stage disk watermark (95% of used disk space by default, set by cluster.routing.allocation.disk.watermark.flood_stage).
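
To check how close a node is to that watermark (assuming a node reachable on localhost:9200), you can look at per-node disk usage with the cat allocation API:

curl -XGET 'http://localhost:9200/_cat/allocation?v'

The disk.percent column shows how full each node's data path is.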

Are you sure this is not version 6.x? If it's really 5.x, then it's as Luiz said: the setting read_only_allow_delete defaults to false and Elasticsearch will never change it on its own.

Hello Mr @thiago and @luiz.santos

Thank you for your answers.
You're right, I'm using elasticsearch-6.1.0 and I do get the flood stage message: ([o.e.c.r.a.DiskThresholdMonitor] [cTlqV5c] flood stage disk watermark [95%] exceeded on [cTlqV5c4QIyuT_6-AJNJZQ])

I tried elasticsearch-2.1.0 and the problem was partially resolved.
Partially, because I can set read_only_allow_delete to false and update Elasticsearch using cURL from the command prompt without any problem.

When trying to apply modifications in 'real life' using a C# .NET program (which periodically calls cURL to apply these modifications), things are different. The first call from the program is OK, but the following ones fail (the same behaviour occurs when restarting the program).

I checked that even when the C# program fails, the flag is always false, and I can still call cURL from the command prompt.
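
For reference, I check the flag with a GET on the index settings:

curl -XGET "http://localhost:9200/myindex/_settings?pretty" --user elastic:elastic_pwd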

Regards
Driss

What I can tell you is that the setting read_only_allow_delete does not exist in 2.1, so setting it to either true or false does nothing.

Moreover, if you were getting the flood stage error before, in 6.1, it means your disk usage is at 95%, and you should not be running an Elasticsearch node in that situation.
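
If you can't free disk space right away, a temporary workaround (just a sketch, and only a stopgap; the real fix is freeing space) is to raise the flood stage watermark dynamically and then reset the block, for example:

curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{ "transient": { "cluster.routing.allocation.disk.watermark.flood_stage": "97%" } }'

curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": "false" }'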
