I have noticed that my test cluster is doing something really weird.
Instead of adding each event, it seems to replace the single document/event in the index with the new one.
Like this: one event comes from Logstash (syslog) and the index counts one event/doc. The next event replaces the old one and the event/doc count is still one. This continues all day.
This image is from the monitoring view in Kibana: the index rate is 307.86/s and the doc count stays at 1.
Has anyone seen this behavior? I would like to fix this without rebuilding, just so I know what to do if it ever happens again.
The funny part is that the .monitoring index works, but not the indices I create myself or via the Logstash output.
Setup info:
Version - 6.3.2
3-node cluster with a separate Kibana node.
Separate Logstash cluster that parses with grok and kv filters.
Part of the Logstash output config:
if "firewalls" in [fields][sourcetype] {
elasticsearch {
hosts => ["node01.example.com:9200","node02.example.com:9200","node03.example.com:9200"]
sniffing => false
manage_template => false
index => "logs-firewalls-%{+YYYY.MM.dd}"
document_id => "fw"
}
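In case it matters, this is roughly what I was thinking of trying if the fixed document_id turns out to be related: dropping it so Elasticsearch assigns a unique _id per event. Just a sketch, and the field reference in the comment is only a placeholder, not something from my actual config:

if "firewalls" in [fields][sourcetype] {
  elasticsearch {
    hosts => ["node01.example.com:9200","node02.example.com:9200","node03.example.com:9200"]
    sniffing => false
    manage_template => false
    index => "logs-firewalls-%{+YYYY.MM.dd}"
    # no document_id set, so Elasticsearch generates a unique _id for every event
    # (or something per-event, e.g. document_id => "%{[some_unique_field]}",
    #  where [some_unique_field] is just a placeholder)
  }
}

Would that be the right direction, or is there a better way to handle it?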