I ran a load test on Logstash that generated 10 GB of data in one hour. In my system logs are created every hour, and a new index is created in my Elasticsearch every day. But when I checked the index size in Elasticsearch, it was only 2.2 GB, so it seems Logstash couldn't parse the entire data file.
Is there a way to make sure Logstash parses all the information? The system Logstash is running on is an Amazon EC2 instance with 16 vCPUs and 30 GB of RAM.
I am also running a trial of AWS Elasticsearch with no dedicated master nodes and a single t2.small.elasticsearch instance.
I would expect such a tiny Elasticsearch instance to be the bottleneck. t2 instances have very limited CPU and cannot keep up with such a powerful Logstash node. You will need a considerably larger Elasticsearch cluster to fully saturate Logstash.
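One quick way to confirm where the data is going missing is to compare the document count in the daily index against the number of events you generated. A minimal check with curl; the domain endpoint and index name below are placeholders for your own AWS ES domain and daily index:

```sh
# List the index with its doc count and on-disk store size
# (replace the endpoint and index name with your own).
curl -s 'https://your-aws-es-domain.us-east-1.es.amazonaws.com/_cat/indices/logstash-2018.06.01?v'

# Count the documents actually indexed for that day.
curl -s 'https://your-aws-es-domain.us-east-1.es.amazonaws.com/logstash-2018.06.01/_count?pretty'
```

If the document count is well below the number of events you generated, events are being dropped or throttled before indexing. Keep in mind that index size on disk is not directly comparable to raw log size either, since Elasticsearch compresses stored data, so a smaller index does not by itself prove data loss.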
Logstash monitoring would give you a good view into what is going on inside it, and it is available as part of the free Basic license. This does, however, mean it is not available with the AWS Elasticsearch service, only Elastic Cloud.
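Even without the monitoring UI, Logstash exposes pipeline statistics through its local HTTP API (port 9600 by default), which you can poll on the EC2 instance itself, independent of what your Elasticsearch service allows. A sketch:

```sh
# Per-pipeline event counts (in / filtered / out) and queue
# push timings, which reveal back pressure from the output.
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'

# Process-level stats: CPU usage and file handles for the Logstash JVM.
curl -s 'http://localhost:9600/_node/stats/process?pretty'
```

If the `out` count lags far behind `in` while the test runs, the Elasticsearch output is applying back pressure, which would be consistent with the undersized t2.small instance being the bottleneck.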