FORBIDDEN/12/index read-only

Hi team! I often encounter the following error when I insert documents into my index.

elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'cluster_block_exception', 'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')

As a temporary workaround I use one of the following; I successfully index some docs, but after a while I see the same error again.

    PUT _settings
    {"index": {"blocks": {"read_only_allow_delete": "false"}}}

    PUT /_settings
    {"index.blocks.read_only_allow_delete": null}

How can I permanently fix it? Searching the web, I didn't find a permanent solution to this issue apart from the workarounds I mention above. I have enough disk space on the Elasticsearch instance, and this error occurs even when my index is empty! What is wrong here?

Cluster info:

    {
      "name" : "rLNxkrB",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "7GjE59rERPeuOyk1PAzDyQ",
      "version" : {
        "number" : "6.6.1",
        "build_flavor" : "default",
        "build_type" : "zip",
        "build_hash" : "1fd8f69",
        "build_date" : "2019-02-13T17:10:04.160291Z",
        "build_snapshot" : false,
        "lucene_version" : "7.6.0",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
      },
      "tagline" : "You Know, for Search"
    }

...and my index settings:

    {
        "settings" : {
            "index" : {
                "number_of_shards" : 3,
                "number_of_replicas" : 2
            }
        }
    }

... and disk usage info (the indices I created are the yellow ones):

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   fertilization                   O_yvMFedS0SA3qsOd12GwA   3   2          0            0       783b           783b
yellow open   monitoring                      mausy_dxTTW1JiB88g7lWA   3   2          0            0       783b           783b
green  open   .reporting-2019.12.01           CMSHgEkDQ_S7kuz4fRI2Dg   1   0          1            0     10.8kb         10.8kb
yellow open   augmenta_snapshots              hqU4hp14SCmR6BtkJCHxyg   3   2          1            0     11.5kb         11.5kb
green  open   .reporting-2019.11.17           T5Q8qUJTTXeECng9bHWczw   1   0          2            0     20.5kb         20.5kb
green  open   .kibana                         viyg013TT0yanW3qmvTaNA   1   0         32            2      151kb          151kb
green  open   .reporting-2019.12.08           HZ0Fsxd5RBqimiJ_5dwjOw   1   0          6            2      388kb          388kb
green  open   .monitoring-kibana-6-2020.01.12 UqZsn-A9QJyfKUZfRkWfEw   1   0       3226            0    884.6kb        884.6kb
green  open   .monitoring-kibana-6-2020.01.11 aSBTioOrR7e6ejZwruZfaw   1   0       3111            0    886.2kb        886.2kb
green  open   .monitoring-kibana-6-2020.01.13 o0ixyC_nQt2TwsdT0v51CQ   1   0       3602            0    956.9kb        956.9kb
green  open   .monitoring-kibana-6-2020.01.10 VkVVfVwmREGXZcQpKBeHFA   1   0       3408            0    978.6kb        978.6kb
green  open   .monitoring-kibana-6-2020.01.16 F9KLSx4UQAuUTIzXybVG-A   1   0       3825            0        1mb            1mb
green  open   .monitoring-kibana-6-2020.01.14 9t1nOBx_QlWZPpJjRqhgyg   1   0       6304            0      1.5mb          1.5mb
green  open   .monitoring-kibana-6-2020.01.15 RHbxXJJhQE2_Z_3f4VyTxg   1   0       8639            0      2.1mb          2.1mb
green  open   .monitoring-es-6-2020.01.12     GToNe2rpT0-p5wdFyWnzIg   1   0     147730          306     55.9mb         55.9mb
green  open   .monitoring-es-6-2020.01.11     ikJa_VhZS9K419yIq4QIjA   1   0     142595          243     58.5mb         58.5mb
green  open   .monitoring-es-6-2020.01.10     jWRUwxBRRgOjrP8ivNJugg   1   0     161332          672     60.6mb         60.6mb
green  open   .monitoring-es-6-2020.01.16     bdLeDXy9SDmsNCIGfmoemw   1   0     139641          424     63.8mb         63.8mb
green  open   .monitoring-es-6-2020.01.13     TqjMNZcET1CwQRIdLzQuww   1   0     166226          651       67mb           67mb
green  open   .monitoring-es-6-2020.01.14     9_uLKbBuQ5-dvEnsRxoxIg   1   0     243080          812    101.7mb        101.7mb
green  open   .monitoring-es-6-2020.01.15     IXCPlrtgRPCoKBic-yt60Q   1   0     293130          536    130.2mb        130.2mb
yellow open   npk                             S1exfiWiThmtc7iCwdD1_Q   3   2      86832            0    182.1mb        182.1mb

Concisely, my disk space info:

"fs": {
        "timestamp": 1579175865660,
        "total": {
          "total_in_bytes": 10222829568,
          "free_in_bytes": 569589760,
          "available_in_bytes": 552812544
        }
      }

Thank you in advance!

No, you don't [have enough disk space]. Here's the disk usage you shared:

          "total_in_bytes": 10222829568,
          "free_in_bytes":    569589760,

Your disk is ~95% full, so Elasticsearch is switching to read-only mode to protect itself from the consequences of a completely full disk. The solution is to free up some disk space.
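A quick sanity check of those numbers (the 95% figure is the default value of the `cluster.routing.allocation.disk.watermark.flood_stage` setting, at which Elasticsearch applies the `read_only_allow_delete` block):

```python
# Disk figures from the `fs` stats shared above
total_bytes = 10222829568   # ~10 GB disk
free_bytes = 569589760      # ~570 MB free

used_pct = 100 * (1 - free_bytes / total_bytes)
print(f"disk used: {used_pct:.1f}%")  # disk used: 94.4%

# The default flood-stage watermark is 95%; actual usage fluctuates as
# Elasticsearch writes, so it can cross the threshold even though a
# point-in-time snapshot reads slightly below it.
FLOOD_STAGE_PCT = 95.0
print("within 1% of flood stage:", used_pct > FLOOD_STAGE_PCT - 1)  # True
```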

@DavidTurner thank you very much for the quick response! I thought that 552 MB of free space was enough, but yes, if the threshold for switching to read-only mode is 95%, then that explains it. I'm going to increase the virtual disk space in the VM where Elasticsearch is installed.

But it's weird: it seems that 9.5 of the 10 GB are full, and I don't know what from. From my post above, all the indices in my cluster together are no bigger than 500 MB. Where is the rest?

Can you share the output from GET _cluster/health and GET /_stats/translog,store?filter_path=_all.**.size_in_bytes?

Cluster health:

{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 30,
  "active_shards": 30,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 24,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 55.55555555555556
}
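As an aside, the yellow status and the 55.56% figure are consistent with a single-node cluster: the four yellow user indices each have 3 primaries with 2 replicas, and a replica can never be allocated on the same node as its primary. Checking the arithmetic:

```python
# Figures from the cluster health output above
active_shards = 30
unassigned_shards = 24

# 4 yellow indices x 3 primary shards x 2 replicas = 24 unassignable replicas
assert 4 * 3 * 2 == unassigned_shards

pct = 100 * active_shards / (active_shards + unassigned_shards)
print(f"active_shards_percent: {pct:.2f}")  # 55.56
```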

... and the output from the second command:

{
  "_all": {
    "primaries": {
      "store": {
        "size_in_bytes": 764579223
      },
      "translog": {
        "size_in_bytes": 630941765
      }
    },
    "total": {
      "store": {
        "size_in_bytes": 764579223
      },
      "translog": {
        "size_in_bytes": 630941765
      }
    }
  }
}

Note: I just increased the VM disk space from 10 GB to 30 GB and then ran the commands you shared with me, so these results are from after the disk space increase.

OK, Elasticsearch thinks it's in charge of 764 MB + 630 MB of data, i.e. just under 1.5 GB. The other 8 GB of disk usage seems to have nothing to do with Elasticsearch.
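Adding up the store and translog sizes from the stats output above:

```python
# size_in_bytes values from the _stats/translog,store output above
store_bytes = 764579223     # on-disk segment data
translog_bytes = 630941765  # transaction log

total_gb = (store_bytes + translog_bytes) / 1e9
print(f"Elasticsearch data: {total_gb:.2f} GB")  # ~1.40 GB of the ~10 GB disk
```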


Great! I finally found that the remaining 8 GB were logs on the VM instance! I cleaned them up and now everything is fine!
Thanks again @DavidTurner for your help!
