ELK - Kibana showing data that was already deleted, won't show new data

Hi, I'm currently using the ELK stack, and I'm running into this problem quite often. My Logstash config file looks like this:

input {
        file{
                path => "/home/me/logs/*.log"
                start_position => "beginning"
                sincedb_path => "/dev/null"
        }
}

filter {
        grok {
                match => [
                        "message", "%{TIMESTAMP_ISO8601:timestamp}\s+(?<typeOfMessage>(\w+))%{GREEDYDATA:message}
                ]
        }
        date{
                match => ["timestamp", "ISO8601"]
        }
}

output {
        elasticsearch{
                hosts => ["localhost:9200"]
        }
}

My problem is that when I change the contents of the folder, I refresh the index pattern and sometimes even restart the services, but the data won't update, even when the previous files were deleted. The tail of my Elasticsearch log looks like this:

[2019-05-17T14:49:31,607][INFO ][o.e.l.LicenseService     ] [mikael-IPMH110G] license [41ec273f-73ab-466c-a1c0-d6632204e344] mode [basic] - valid
[2019-05-17T14:49:31,623][INFO ][o.e.g.GatewayService     ] [mikael-IPMH110G] recovered [3] indices into cluster_state
[2019-05-17T14:49:32,072][INFO ][o.e.c.r.a.AllocationService] [mikael-IPMH110G] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2019.05.16-000001][0], [.kibana_1][0]] ...]).
[2019-05-17T14:49:33,776][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mikael-IPMH110G] adding template [.management-beats] for index patterns [.management-beats]
[2019-05-17T14:50:01,575][INFO ][o.e.c.r.a.DiskThresholdMonitor] [mikael-IPMH110G] low disk watermark [85%] exceeded on [xY9SrtBKRbyW6z1vAjR9QQ][mikael-IPMH110G][/var/lib/elasticsearch/nodes/0] free: 3.3gb[12%], replicas will not be assigned to this node
[2019-05-17T14:50:31,741][INFO ][o.e.c.r.a.DiskThresholdMonitor] [mikael-IPMH110G] low disk watermark [85%] exceeded on [xY9SrtBKRbyW6z1vAjR9QQ][mikael-IPMH110G][/var/lib/elasticsearch/nodes/0] free: 3.3gb[12%], replicas will not be assigned to this node
[2019-05-17T14:51:01,809][INFO ][o.e.c.r.a.DiskThresholdMonitor] [mikael-IPMH110G] low disk watermark [85%] exceeded on [xY9SrtBKRbyW6z1vAjR9QQ][mikael-IPMH110G][/var/lib/elasticsearch/nodes/0] free: 3.3gb[12%], replicas will not be assigned to this node
[2019-05-17T14:51:31,819][INFO ][o.e.c.r.a.DiskThresholdMonitor] [mikael-IPMH110G] low disk watermark [85%] exceeded on [xY9SrtBKRbyW6z1vAjR9QQ][mikael-IPMH110G][/var/lib/elasticsearch/nodes/0] free: 3.3gb[12%], replicas will not be assigned to this node
[2019-05-17T14:52:01,825][INFO ][o.e.c.r.a.DiskThresholdMonitor] [mikael-IPMH110G] low disk watermark [85%] exceeded on [xY9SrtBKRbyW6z1vAjR9QQ][mikael-IPMH110G][/var/lib/elasticsearch/nodes/0] free: 3.3gb[12%], replicas will not be assigned to this node

But from my understanding, at 85% the data should still update, right?
Is there anything else I should do so that the data updates?
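In case it's relevant, this is how I check disk usage and document counts (standard _cat and _count APIs, using the same localhost:9200 host as in the config above; the logstash-* pattern matches the default Logstash index naming):

curl 'localhost:9200/_cat/allocation?v'          # disk used/available per node
curl 'localhost:9200/logstash-*/_count?pretty'   # total documents in the Logstash indices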

How are your file names under /home/me/logs/*.log?
Are they all new files every time, or just one log file?

The reason being: once Logstash reads mylog1.log, it marks it as read, and if you replace the file with new data, Logstash won't read it again,
because the inode of that file is still the same and Logstash has marked it as already read.

Here is how it works in Linux:
if you delete a file and recreate it, it will often still use the same inode. You can test it:

touch /tmp/test1
ls -i /tmp/test1                    # note the inode number
rm /tmp/test1 ; touch /tmp/test1
ls -i /tmp/test1                    # often the same inode as before

I had the same problem. What I started doing is creating a file per day of the month, so I have 30+ files, and I remove them before bringing new ones into the system, and hence they won't have the same inode.
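A minimal sketch of that per-day scheme (the app-$(date +%d).log name and the deployment.log source are illustrations, not from my actual setup):

LOG=/home/me/logs/app-$(date +%d).log   # e.g. /home/me/logs/app-17.log on the 17th
rm -f "$LOG"                            # clear any leftover file from a previous month
cp deployment.log "$LOG"                # a fresh copy; with 30+ distinct names in play, inode reuse is unlikely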

Hmm, I was using the logs of many deployments, so I guess it's safe to say that some dates were probably repeated. So if I append the origin name to the date, the data should update, correct?

Yes, as long as it is not reusing the same inode in that log dir.
You can test it by placing a different file*.log into that dir and seeing if it picks that data up.
Just put one line in that log file, then go back to Elasticsearch and see if that data was imported.
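For example, a quick version of that test (the file name and log line are placeholders; the line is shaped to match the grok pattern in your config, and _count is a standard Elasticsearch API):

echo "2019-05-18T10:00:00 INFO a single test line" > /home/me/logs/inode-test.log
# give Logstash a few seconds to pick the file up, then:
curl 'localhost:9200/logstash-*/_count?pretty'   # the count should go up by one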
