ELK stuck in Read Only mode

I am posting this in the Kibana category, but honestly I am not sure where the problem lies. Every piece is version 6.4.2.

The stack was working fine. My stack is configured as Beats > Logstash > Kibana? Honestly, I am new, so I am not sure what role Elasticsearch plays in the stack yet. I attempted to pull some logs from our main ASA just to test the performance of the software. I let several days of indices build up to get some sort of baseline. I would consider these big indices, ranging from 5 GB to 10 GB per day with millions of "documents" in each.

I left the system alone for a few days, then needed to scrape some web logs. Logging into Kibana was very sluggish, and top on the system showed very high CPU load, mainly from the Logstash process. This may have been a mistake, but I deleted every index since I no longer cared about the information.

I tried to create a new index for the new Beat from the web server, and it just sits there at "Creating Index..." and never progresses. I then attempted to delete the old ASA syslog index pattern and received:

blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]

I checked the logs and there were mentions of the disk watermark threshold and 403 errors:

[2018-10-22T08:43:41,407][INFO ][o.e.c.r.a.DiskThresholdMonitor] [inBmC6I] low disk watermark [85%] exceeded on [inBmC6IOSFaFJC7T-TOadA][inBmC6I][/var/lib/elasticsearch/nodes/0] free: 7.4gb[11.4%], replicas will not be assigned to this node

I believe I have deleted those indices and the disk is not full:

Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   66G  8.9G   57G  14% /
devtmpfs               3.9G     0  3.9G   0% /dev
tmpfs                  3.9G     0  3.9G   0% /dev/shm
tmpfs                  3.9G  9.5M  3.9G   1% /run
tmpfs                  3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1             1014M  222M  793M  22% /boot
tmpfs                  783M   12K  783M   1% /run/user/42
tmpfs                  783M     0  783M   0% /run/user/1001

At this point I do not know what to check next. Thank you in advance for your time.

There is a built-in protection: Elasticsearch stops accepting writes once your disk usage exceeds 95% (the flood-stage watermark).

See https://www.elastic.co/guide/en/elasticsearch/reference/6.4/disk-allocator.html, which also includes the call to make your index writable again.
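Before changing anything, it may also help to cross-check what Elasticsearch itself reports. From the Kibana Dev Tools console, these two requests show per-node disk usage (the disk.percent column is what the watermarks compare against) and any watermark overrides currently set:

GET _cat/allocation?v
GET _cluster/settings?flat_settings=true

And purely as a sketch, if you ever did want to raise the 95% flood-stage threshold that triggers the read-only block (the 97% value here is only an example, not a recommendation; freeing disk space is the better fix):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}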

Thanks, Alexander. I know it is hard to believe, but I did actually read that documentation prior to posting, and here was my issue with it. I understand that to remove the block I run this request against my index:

PUT /my_index/_settings
{
  "index.blocks.read_only_allow_delete": null
}

My problem was that I had already deleted the huge indices that I suspected caused the problem in the first place, so I was not sure which index to change that flag on. Is there a script that will go through the stack and find where the block exists without having to know the index name explicitly? Obviously, my ignorance of the ELK structure is causing me issues here.

Calling GET _all/_settings will return the settings for every index, allowing you to check where this setting is configured.
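For example, something like this from the Dev Tools console will show which indices currently have the block set and then clear it on every index in one call (a sketch; _all simply targets all indices, so you do not need to know the names explicitly):

GET _all/_settings/index.blocks.read_only_allow_delete

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}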
