Data drops every 5 minutes

Hi, everyone!

I'm facing some weird Elasticsearch behavior and couldn't find a solution to anything similar on the web.

Data ingestion drops roughly every 5 minutes. The data itself arrives at the box without any such interruptions.

For 2-3 hours after a full server reboot, documents are collected normally, but then the problem returns.
I have hourly indices. At the beginning of every hour (while the next index is still fresh?) there is also a time interval when data is collected without such losses.

Each index is approx. 3 GB in size, 60-70 GB per day. Each index has 3 shards (with fewer shards there are 5-10% fewer documents per index), a 30 s refresh interval, and 0 replicas (single node).
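For reference, the setup described above can be expressed as a legacy index template; the template name, index pattern, and host/port below are assumptions for illustration, not the poster's actual config:

```shell
# Hypothetical template applying the settings described above
# (3 shards, 0 replicas, 30 s refresh) to new hourly indices:
curl -s -X PUT 'localhost:9200/_template/hourly-logs' \
  -H 'Content-Type: application/json' -d '{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 0,
    "refresh_interval": "30s"
  }
}'
```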

IO shouldn't be the bottleneck: the HDDs are local, write speed is 30-50 MB/s, I dedicated 30 GB of RAM to buff/cache, iowait stays under 5%, and CPU is at 10-12%.

JVM heap is 30 GB, with a 20% buffer (some space is reserved for parallel reindexing when it becomes necessary). Average heap use doesn't exceed 60%.
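A quick way to rule GC in or out is the nodes stats API. The endpoint is real; the host/port and the sample millisecond values in the second half are assumptions for illustration:

```shell
# Snapshot JVM heap and GC counters (adjust host/port to your node):
curl -s 'localhost:9200/_nodes/stats/jvm?pretty' |
  grep -E 'heap_used_percent|collection_count|collection_time_in_millis'

# Rough old-gen GC overhead from two collection_time_in_millis samples
# taken 60 s apart (hypothetical values):
t1=12000; t2=14400
awk -v t1="$t1" -v t2="$t2" \
  'BEGIN { printf "old GC overhead: %.1f%% of wall time\n", (t2 - t1) / 600 }'
```

If that overhead spikes on the same roughly 5-minute cadence as the dips, GC is the likely culprit.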

There's nothing in the logs, only entries about index creation and mapping updates.

So I'm really stuck now and don't know where to look next.
Has anyone seen something similar and knows how to deal with it?
Any help with troubleshooting would be appreciated.


Given the regularity, could it be GC?
Do you have Monitoring installed?

No, I've concentrated on the data collection config; until now, API queries were enough to get the necessary monitoring info.
Previously, when the server had less memory, I did see GC pauses (mentioned in the logs), but I don't observe that now.
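In the absence of Monitoring, a simple sampling loop over the cat nodes API can show whether heap usage sawtooths on the same roughly 5-minute cadence as the dips; the host, sample count, and interval below are assumptions:

```shell
# Sample node heap and CPU a few times (needs a reachable cluster):
for i in 1 2 3; do   # increase the count to cover a longer window
  echo "$(date +%T) $(curl -s 'localhost:9200/_cat/nodes?h=heap.percent,cpu')"
  sleep 10           # sampling interval in seconds
done
```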

Actually, the problem was with the sending application.
I was using ntopng instead of Logstash, and it couldn't handle that volume of requests.

After I migrated to Logstash and tuned it, I got a huge throughput boost.
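For what it's worth, this kind of tuning typically means raising Logstash's pipeline workers and batch size. The `-w`/`-b` flags are Logstash's standard pipeline-workers and batch-size options, but the config path and the exact values below are assumptions, not the poster's settings:

```shell
# Run Logstash with more pipeline workers and a larger batch per worker
# (values are illustrative; tune against your own throughput):
bin/logstash -f /etc/logstash/conf.d/flows.conf -w 4 -b 500
```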

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.