Lag in logs appearing

Hi team,
I've been stuck on an issue for quite some time with no luck.
We are currently running Elasticsearch, Logstash, and Kibana in a production environment: a cluster of 3 master, 1 client, and 6 data nodes.
All nodes are 4 core, 8 GB, with the data nodes having an additional 300 GB on SSDs.
We have a few time-based indices on this cluster, most with little disk usage.
One index is an exception: it creates a daily index of around 250-300 GB.
This index has 3 primary shards with 1 replica each, so 6 shards in total.
The issue is that the logs are lagging by 24 hours, and even increasing the number of data nodes hasn't helped.
Any insights on how to solve this would be really helpful.

Hey,

Your first task should be to find out where the lag is actually coming from. Is it happening while the data is transferred from the system where the log was generated to the recipient? Is it caused by events sitting in a queue and not being picked up? Or does it only appear after the document is indexed?
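
One way to get that visibility is to stamp each document with the time Elasticsearch actually received it and compare that to the event's own @timestamp: if @timestamp is already ~24 h old when the document arrives, the delay is upstream of Elasticsearch; if not, it's in the cluster itself. A rough sketch using an ingest pipeline and `_ingest.timestamp` (the cluster URL, pipeline name, and index name below are just placeholders for your setup):

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Ingest pipeline that records the moment Elasticsearch received the document.
pipeline = {
    "description": "Stamp documents with their ingest time so lag can be measured",
    "processors": [
        {"set": {"field": "event_ingested", "value": "{{_ingest.timestamp}}"}}
    ],
}
requests.put(f"{ES}/_ingest/pipeline/stamp-ingest-time", json=pipeline).raise_for_status()

# Attach the pipeline to the daily index (or better, its index template),
# then compare event_ingested with @timestamp in Kibana to see on which
# side of Elasticsearch the 24 h gap builds up.
settings = {"index": {"default_pipeline": "stamp-ingest-time"}}
requests.put(f"{ES}/my-daily-logs-2024.01.01/_settings", json=settings).raise_for_status()
```

Using `index.default_pipeline` means you don't have to touch the Logstash config to start collecting this.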

Without an overview of your ingestion infrastructure, this one is quite the riddle and pretty much impossible to pinpoint, so visibility is key here. My gut feeling is that this is not happening on the Elasticsearch side, but I don't have any hard data to back that statement up either :slight_smile:
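
If you do want some hard data from the Elasticsearch side, the write thread pool is a quick place to look: steady rejections or a permanently full queue would point at indexing pressure on the cluster, while all zeros suggest the lag builds up before the data ever reaches it. Same placeholder URL as above:

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Per-node write thread pool stats: active threads, queued tasks, rejections.
resp = requests.get(
    f"{ES}/_cat/thread_pool/write",
    params={"v": "true", "h": "node_name,active,queue,rejected"},
)
resp.raise_for_status()
print(resp.text)
```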

--Alex
