I am posting this in Kibana, but honestly I am not sure where the problem lies. Every piece of the stack is version 6.4.2.
The stack was working fine. It is configured as Beats > Logstash > Kibana (?); I am honestly new to this, so I am not sure what role Elasticsearch plays in the stack yet. I pulled some logs from our main ASA just to test the performance of the software and let several days of indices build up to get some sort of baseline. I would consider these big indices, ranging from 5 GB to 10 GB per day with millions of "documents" in each.
I left the system alone for a few days, then needed to scrape some web logs. Logging into Kibana was very sluggish, and top on the system showed very high CPU load, primarily from the Logstash process. This may have been a mistake, but I deleted every index since I no longer cared about the information.
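For reference, I deleted the indices from the command line with the delete index API, roughly like this (the index names are from memory, and the host/port is the default localhost:9200 on this box):

# delete the old ASA test indices (pattern approximate)
curl -XDELETE 'http://localhost:9200/logstash-2018.10.*'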
I tried to create a new index pattern for the new Beat from the web server, and it just sits there at "Creating Index..." and never progresses. I then attempted to delete the old ASA syslog index pattern and received:
blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]
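From searching around, this block seems to correspond to the index.blocks.read_only_allow_delete index setting, and the common suggestion appears to be clearing it with a settings update like the one below. I have not run it yet because I would like to understand why the block was applied in the first place:

# clear the read-only/allow-delete block on all indices (host/port assumed)
curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'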
I checked the logs and there were mentions of the disk watermark threshold and 403 errors:
[2018-10-22T08:43:41,407][INFO ][o.e.c.r.a.DiskThresholdMonitor] [inBmC6I] low disk watermark [85%] exceeded on [inBmC6IOSFaFJC7T-TOadA][inBmC6I][/var/lib/elasticsearch/nodes/0] free: 7.4gb[11.4%], replicas will not be assigned to this node
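If it matters, I gather the thresholds mentioned there are the cluster.routing.allocation.disk.watermark.* settings, which I assume can be inspected (and, if needed, temporarily raised) through the cluster settings API, something like the following. I have not changed anything yet, since the disk no longer looks anywhere near 85% full:

# show any non-default cluster settings
curl -XGET 'http://localhost:9200/_cluster/settings?pretty'
# example of temporarily raising the low watermark (not applied)
curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"transient": {"cluster.routing.allocation.disk.watermark.low": "90%"}}'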
I believe I have deleted those indices and the disk is not full:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 66G 8.9G 57G 14% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 9.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 222M 793M 22% /boot
tmpfs 783M 12K 783M 1% /run/user/42
tmpfs 783M 0 783M 0% /run/user/1001
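In case it helps, I assume Elasticsearch's own view of disk usage and of whatever indices remain can be pulled with the cat APIs, e.g.:

# disk usage as the node sees it, plus any indices still present
curl -XGET 'http://localhost:9200/_cat/allocation?v'
curl -XGET 'http://localhost:9200/_cat/indices?v'

I can post that output if it would be useful.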
At this point I do not know what to check next. Thank you for your time.