Another case of being unable to create or delete index patterns when the underlying index has been deleted

In my home lab, I am running Elasticsearch, Logstash, and Kibana 6.2.2, all on a single machine.

I really only had three log sources going into it (pfSense, Windows, and Bro) and three index patterns. But after I added a fourth index, I found I could not create a new index pattern. When I tried, nothing happened: no errors popped up, nothing in the kibana.stderr log, nothing.

After some looking around, I found that all the indices were yellow and the Logstash log was full of this error:

[2018-11-22T13:27:52,557][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})

After checking my Logstash config and comparing the indices I was sending to ES against what was currently in ES, I realized that half the indices in ES were no longer receiving data, and those were the ones my current index patterns were based on. So I decided to delete the old, unused indices, then remove their index patterns and create new patterns for my current indices.

I then did a GET _cat/indices on the Dev Tools page and confirmed that all my indices were yellow. When I checked the cluster health, I had a lot of unassigned shards. Running GET _cluster/allocation/explain in Dev Tools showed that ES was refusing to allocate replica shards to the same node as their primaries (which is verboten). Since this is a single-node home lab, I set replicas to 0 for the indices I wanted to keep.
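For anyone following along, setting replicas to 0 is done per index from Dev Tools; a sketch for one of the indices I kept (repeat for each, or use a wildcard pattern that matches them):

```
PUT /broids-2018/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
```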

I then deleted the other indices I no longer needed.
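For reference, deleting an unused index from Dev Tools looks like this (pfsense2-2018 being one of the ones I removed):

```
DELETE /pfsense2-2018
```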

At this point I continued to run into problems.

After deleting the indices, I had three index patterns I needed to delete:

pfsense2-2018
windows-2018
broids2-2018

On the Management → Index Patterns page, when I clicked the refresh icon at the top right, I got an error screen with the following text:

blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];

Error: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
    at http://192.168.1.101:5601/bundles/commons.bundle.js?v=16588:1:293164
    at processQueue (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:132456)
    at http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:133349
    at Scope.$digest (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:144239)
    at Scope.$apply (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:147018)
    at done (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:100026)
    at completeRequest (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:104697)
    at XMLHttpRequest.xhr.onload (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:105435)

I then tried clicking the trash-can icon to delete the index pattern. This time I got an error screen with the following text:

OOPS! Looks like something went wrong, Refreshing may do the trick.

 Fatal Error
blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
Version: 6.2.2
Build: 16588
Error: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
    at http://192.168.1.101:5601/bundles/commons.bundle.js?v=16588:1:293164
    at processQueue (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:132456)
    at http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:133349
    at Scope.$digest (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:144239)
    at Scope.$apply (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:147018)
    at done (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:100026)
    at completeRequest (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:104697)
    at XMLHttpRequest.xhr.onload (http://192.168.1.101:5601/bundles/vendors.bundle.js?v=16588:58:105435)

Based on some of the suggestions from the references below, I went to check whether the index pattern was still in the .kibana index. From Dev Tools, I ran GET .kibana/index-pattern/windows-2018 and got:

{
  "_index": ".kibana",
  "_type": "index-pattern",
  "_id": "windows-2018",
  "found": false
}

Same result from the command line:

 curl -XGET "http://localhost:9200/.kibana/index-pattern/windows-2018"
{"_index":".kibana","_type":"index-pattern","_id":"windows-2018","found":false}

This meant Kibana thought the index pattern was gone, but in the dropdown on the Discover tab and on the Management → Index Patterns page, the patterns were still listed.

I restarted Kibana; the patterns were still there, but the indices were not. For reference, here are the results of GET _cat/indices:

green open .kibana       72xu3JySRXOCUvjRAxsHMg 1 0      5 2  89.1kb  89.1kb
green open nxlog         Dmj4Mk8zQJmv02ueoGsT0Q 5 0    368 0 630.2kb 630.2kb
green open broids-2018   P0fI96T8Q-GxTjX4uQq4aA 5 0 926248 0   1.1gb   1.1gb
green open pfsense3-2018 b4Cn-rCFScSxYjL3gNaCwg 5 0 221315 0  83.3mb  83.3mb

I tried to delete the other index patterns via the GUI and got the same errors.

I then tried to delete via the command line:

 curl -XDELETE "http://localhost:5601/api/saved_objects/index-pattern/pfsense2-2018" -H 'kbn-xsrf: true'
{"message":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];","statusCode":403,"error":"Forbidden"}

I tried this also:

curl -XDELETE "http://localhost:9200/.kibana/index-pattern/pattern_name"

Same read-only error.
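In hindsight, one diagnostic that would have exposed the problem is checking the settings on the .kibana index itself, since Kibana cannot save or delete any saved objects if its own index is blocked:

```
GET .kibana/_settings
```

A blocked index shows "read_only_allow_delete": "true" under "blocks" in the response.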

I also still cannot add an index pattern.

Now what? Is this a bug or am I missing something? Thanks.

Some of the research I've done on this issue:

@Bill_McConaghy @cjcenizal @jen-huang, can you please take a look at this? I think it's a bug, but I cannot reproduce it consistently.

Cheers,
Bhavya

Hello reswob,

Can you look at what's happening in the Elasticsearch logs and post them here?

Thanks,
Bhavya

NOTE: This is the entire log for the day.

Part 1

[2018-11-22T13:27:35,542][DEBUG][o.e.a.a.c.a.TransportClusterAllocationExplainAction] [HOME] explaining the allocation for [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false], found shard [[nxlog][4], node[null], [R], recovery_source[peer recovery], s[UNASSIGNED], unassigned_info[[reason=INDEX_CREATED], at[2018-11-20T03:09:28.950Z], delayed=false, allocation_status[no_attempt]]]
[2018-11-22T13:31:01,889][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [HOME] updating number_of_replicas to [0] for indices [pfsense3-2018]
[2018-11-22T13:31:25,583][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [HOME] updating number_of_replicas to [0] for indices [broids-2018]
[2018-11-22T13:31:41,660][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [HOME] updating number_of_replicas to [0] for indices [nxlog]
[2018-11-22T13:31:51,933][DEBUG][o.e.a.a.c.a.TransportClusterAllocationExplainAction] [HOME] explaining the allocation for [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false], found shard [[windows-2018][4], node[null], [R], recovery_source[peer recovery], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2018-11-17T18:08:04.807Z], delayed=false, allocation_status[no_attempt]]]
[2018-11-22T13:34:10,739][INFO ][o.e.c.m.MetaDataDeleteIndexService] [HOME] [broids2-2018/frHJ9KIYTgWZPxQ2cxrx7w] deleting index
[2018-11-22T13:34:34,175][INFO ][o.e.c.m.MetaDataDeleteIndexService] [HOME] [windows-2018/E2LeSXJ4SCC1y_FK28SyTw] deleting index
[2018-11-22T13:34:55,464][INFO ][o.e.c.m.MetaDataDeleteIndexService] [HOME] [pfsense2-2018/SJ0lJ7XERzacT_Rhp2teNA] deleting index
[2018-11-22T13:35:40,222][INFO ][o.e.c.m.MetaDataCreateIndexService] [HOME] [windows-2018] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-11-22T13:35:41,249][INFO ][o.e.c.m.MetaDataMappingService] [HOME] [windows-2018/f9oxeTVfTgWpjN7_MGCKmg] create_mapping [doc]
[2018-11-22T14:02:19,700][INFO ][o.e.c.m.MetaDataDeleteIndexService] [HOME] [windows-2018/f9oxeTVfTgWpjN7_MGCKmg] deleting index
[2018-11-22T14:02:49,350][INFO ][o.e.n.Node               ] [HOME] stopping ...
[2018-11-22T14:02:49,402][INFO ][o.e.n.Node               ] [HOME] stopped
[2018-11-22T14:02:49,402][INFO ][o.e.n.Node               ] [HOME] closing ...
[2018-11-22T14:02:49,420][INFO ][o.e.n.Node               ] [HOME] closed
[2018-11-22T14:03:37,024][INFO ][o.e.n.Node               ] [HOME] initializing ...
[2018-11-22T14:03:37,217][INFO ][o.e.e.NodeEnvironment    ] [HOME] using [1] data paths, mounts [[/home (/dev/mapper/cl-home)]], net usable_space [62.9gb], net total_space [220.6gb], types [xfs]
[2018-11-22T14:03:37,217][INFO ][o.e.e.NodeEnvironment    ] [HOME] heap size [990.7mb], compressed ordinary object pointers [true]
[2018-11-22T14:03:37,673][INFO ][o.e.n.Node               ] [HOME] node name [HOME], node ID [0PMVDyBZSemDBMNSxx-WqA]
[2018-11-22T14:03:37,674][INFO ][o.e.n.Node               ] [HOME] version[6.2.2], pid[1010], build[10b1edd/2018-02-16T19:01:30.685723Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_161/25.161-b14]

Part 2:

[2018-11-22T14:03:37,674][INFO ][o.e.n.Node               ] [HOME] JVM arguments [-Xms4g, -Xmx4g, -Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.X0xLmL7o, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]
[2018-11-22T14:03:38,461][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [aggs-matrix-stats]
[2018-11-22T14:03:38,461][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [analysis-common]
[2018-11-22T14:03:38,461][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [ingest-common]
[2018-11-22T14:03:38,461][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [lang-expression]
[2018-11-22T14:03:38,461][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [lang-mustache]
[2018-11-22T14:03:38,461][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [lang-painless]
[2018-11-22T14:03:38,461][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [mapper-extras]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [parent-join]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [percolator]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [rank-eval]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [reindex]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [repository-url]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [transport-netty4]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] loaded module [tribe]
[2018-11-22T14:03:38,462][INFO ][o.e.p.PluginsService     ] [HOME] no plugins loaded
[2018-11-22T14:03:41,497][INFO ][o.e.d.DiscoveryModule    ] [HOME] using discovery type [zen]
[2018-11-22T14:03:42,178][INFO ][o.e.n.Node               ] [HOME] initialized
[2018-11-22T14:03:42,179][INFO ][o.e.n.Node               ] [HOME] starting ...
[2018-11-22T14:03:42,413][INFO ][o.e.t.TransportService   ] [HOME] publish_address {192.168.1.101:9300}, bound_addresses {[::]:9300}
[2018-11-22T14:03:42,447][INFO ][o.e.b.BootstrapChecks    ] [HOME] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-11-22T14:03:45,537][INFO ][o.e.c.s.MasterService    ] [HOME] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {HOME}{0PMVDyBZSemDBMNSxx-WqA}{06m-BuVsSp-Kb-9z2eGx2Q}{192.168.1.101}{192.168.1.101:9300}
[2018-11-22T14:03:45,542][INFO ][o.e.c.s.ClusterApplierService] [HOME] new_master {HOME}{0PMVDyBZSemDBMNSxx-WqA}{06m-BuVsSp-Kb-9z2eGx2Q}{192.168.1.101}{192.168.1.101:9300}, reason: apply cluster state (from master [master {HOME}{0PMVDyBZSemDBMNSxx-WqA}{06m-BuVsSp-Kb-9z2eGx2Q}{192.168.1.101}{192.168.1.101:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-11-22T14:03:45,566][INFO ][o.e.h.n.Netty4HttpServerTransport] [HOME] publish_address {192.168.1.101:9200}, bound_addresses {[::]:9200}
[2018-11-22T14:03:45,566][INFO ][o.e.n.Node               ] [HOME] started
[2018-11-22T14:03:46,871][INFO ][o.e.g.GatewayService     ] [HOME] recovered [4] indices into cluster_state
[2018-11-22T14:04:05,152][INFO ][o.e.c.r.a.AllocationService] [HOME] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[broids-2018][2], [broids-2018][3]] ...]).
[2018-11-22T14:04:06,506][INFO ][o.e.c.m.MetaDataCreateIndexService] [HOME] [windows-2018] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-11-22T14:04:07,602][INFO ][o.e.c.m.MetaDataMappingService] [HOME] [windows-2018/EBftKQ4NT_OfkjL26NJgtA] create_mapping [doc]
[2018-11-22T14:07:13,566][INFO ][o.e.c.m.MetaDataDeleteIndexService] [HOME] [windows-2018/EBftKQ4NT_OfkjL26NJgtA] deleting index

@reswob You hit the high water mark for disk, and ES made all your indices read-only. See details here: https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html . A bit into the explanation you will see this example for removing the read-only setting:
PUT /twitter/_settings
{
  "index.blocks.read_only_allow_delete": null
}
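Note that on 6.x this block is not removed automatically even after you free up disk space, and on a single-node setup it will have been applied to every index, including .kibana (which is why Kibana itself could not save or delete index patterns). A sketch that clears it on all indices at once, using the _all wildcard:

```
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```

If disk usage is still above the flood-stage watermark, ES will simply re-apply the block, so free up space (or raise the watermark settings) first.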


That was the problem! Thanks. Was there something (obvious or not) in the log I missed that told you that?

@reswob The cluster_block_exception error from when you tried to refresh the index pattern only shows up when you hit the high water mark on a node and ES makes all indices on that node read-only.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.