Data hours behind in Kibana

Hi,

I hope someone can point me in the right direction here, as I'm not entirely sure whether this is purely an ES issue or a Kibana one.

What we have been encountering is that whenever we push large amounts of data through LS and into ES, the data is delayed by hours before it becomes searchable in Kibana. I pick "Today" for the time range and auto-refresh every 10 seconds or so, but I only see data updating for a few seconds, even though the monitoring shows a very high indexing rate for that specific index.
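To quantify the delay, I run something like the sketch below: it compares the newest `@timestamp` in the index against the wall clock. The ES URL and index pattern here are placeholders, not our real names.

```python
from datetime import datetime, timezone

import requests

ES_URL = "http://localhost:9200"   # placeholder for our cluster address
INDEX_PATTERN = "logstash-*"       # placeholder for our daily index pattern

# Ask ES for the newest @timestamp across the index pattern.
resp = requests.get(
    f"{ES_URL}/{INDEX_PATTERN}/_search",
    json={"size": 0, "aggs": {"newest": {"max": {"field": "@timestamp"}}}},
)
resp.raise_for_status()
newest_ms = resp.json()["aggregations"]["newest"]["value"]

newest = datetime.fromtimestamp(newest_ms / 1000, tz=timezone.utc)
lag = datetime.now(timezone.utc) - newest
print(f"newest document: {newest.isoformat()}  lag: {lag}")
```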

So my questions are:

  • Is there a minimum number of ES data nodes needed to handle large volumes of data, up to a 4k/sec indexing rate? More is on the way; some of it is logs and some is metrics. We have 4 data nodes, a 2:1 primary-to-replica shard layout, and a 30s refresh interval on daily indices (see the template sketch after this list).

  • Are there any preferred settings to tweak to get better results out of ES? Do I need to increase the JVM heap beyond the 9 GB I currently have, or change the 30% index buffer allocation (indices.memory.index_buffer_size: 30%)? Do I need to set any thread pool options for indexing and bulk operations (see the diagnostic sketch after this list)?

  • Do I need more data nodes?
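For reference, this is roughly how the daily indices are configured, as a minimal sketch assuming the legacy template API (newer ES versions use /_index_template instead); the template name and index pattern are placeholders.

```python
import requests

ES_URL = "http://localhost:9200"   # placeholder

template = {
    "index_patterns": ["logstash-*"],   # placeholder daily index pattern
    "settings": {
        "number_of_shards": 2,      # 2 primaries per daily index...
        "number_of_replicas": 1,    # ...each with 1 replica (the 2:1 layout)
        "refresh_interval": "30s",  # documents become searchable every 30s
    },
}

resp = requests.put(f"{ES_URL}/_template/daily-logs", json=template)
resp.raise_for_status()
```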
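And this is the diagnostic sketch I mentioned: it checks the indexing thread pools for rejections and the heap usage per data node. The pool is called `bulk` on older ES versions and `write` on newer ones, so I query both; again the URL is a placeholder.

```python
import requests

ES_URL = "http://localhost:9200"   # placeholder

# Thread pool rejections: a steadily growing "rejected" count means ES is
# shedding bulk requests faster than it can index them.
pools = requests.get(
    f"{ES_URL}/_cat/thread_pool/write,bulk",
    params={"format": "json", "h": "node_name,name,active,queue,rejected"},
).json()
for p in pools:
    print(f"{p['node_name']} {p['name']}: active={p['active']} "
          f"queue={p['queue']} rejected={p['rejected']}")

# Heap pressure: sustained usage near 75% or more usually means GC is
# eating into indexing throughput.
nodes = requests.get(f"{ES_URL}/_nodes/stats/jvm").json()["nodes"]
for node in nodes.values():
    print(f"{node['name']}: heap_used={node['jvm']['mem']['heap_used_percent']}%")
```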

We run into this issue every time we test with large streams of data coming from many sources/VMs (Beats and log files), and I would really appreciate any pointers on what to look at to improve this.

We have 2 LS instances sharing the load, with Kafka sitting between the sources and the ELK cluster.
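Since Kafka sits in the middle, the hours of delay could also be consumer lag rather than ES indexing time. Below is a minimal sketch of how I check that, assuming the kafka-python client; the broker address, topic name, and the Logstash consumer group id are placeholders for our actual ones.

```python
from kafka import KafkaConsumer, TopicPartition

TOPIC = "beats"        # placeholder topic name
GROUP = "logstash"     # placeholder consumer group used by the LS pipelines

consumer = KafkaConsumer(
    bootstrap_servers="kafka:9092",   # placeholder broker address
    group_id=GROUP,
    enable_auto_commit=False,
)

partitions = [TopicPartition(TOPIC, p) for p in consumer.partitions_for_topic(TOPIC)]
end_offsets = consumer.end_offsets(partitions)

# Lag = newest offset in the partition minus the last offset the LS
# consumer group has committed.
for tp in partitions:
    committed = consumer.committed(tp) or 0
    print(f"partition {tp.partition}: lag={end_offsets[tp] - committed}")
```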

Any help? Has anyone been able to take a look at this, or do I need to add more data or make it clearer?

Thanks in advance.