When I started parsing my logs with the ELK stack, throughput was good but inconsistent: sometimes I could parse 50,000 logs per minute, and sometimes as few as 4,000. I parsed about 2.5 million log entries in two and a half hours. Since then, the ELK stack has slowed down dramatically.
I am now getting roughly 1,000 logs per 10 minutes. I don't understand what's wrong. My Elasticsearch server has 64GB of RAM with a 31GB heap, so it shouldn't perform this badly. Please help me out here. Thanks in advance!
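For reference, a 31GB heap on a 64GB box is the usual recommendation, since staying just under 32GB keeps compressed object pointers enabled and leaves the rest of the RAM for the filesystem cache. The heap is typically pinned with matching min/max values in the JVM options file (exact path varies by install method); this is just how my setup looks:

```conf
# config/jvm.options (path varies by package/install)
-Xms31g
-Xmx31g
```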
I am not monitoring the health of Elasticsearch. I know I need more data; I am currently load testing the architecture, and my numbers don't match those from my previous parsing runs.
So I started monitoring ES with Marvel, and the cluster status is yellow. I guess that means my shards are not replicated, but would that slow Elasticsearch down this much?
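Yellow usually just means some replica shards are unassigned (expected on a single node with default replica settings), which by itself shouldn't cause this kind of slowdown. You can confirm what is unassigned and why with the cluster APIs; this assumes Elasticsearch is reachable on localhost:9200:

```shell
# Overall cluster status plus shard counts
curl -s 'localhost:9200/_cluster/health?pretty'

# Per-shard view: which shards are UNASSIGNED
curl -s 'localhost:9200/_cat/shards?v'
```

If only replicas are unassigned, indexing throughput should be largely unaffected; the bottleneck is more likely elsewhere.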
These are nginx access logs. I think I have found the problem: Logstash completely stops sending data when it encounters encoded Unicode characters in a log line. I am now trying to solve that. Any help on that front would be great!
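One common cause is log lines containing bytes that are not valid UTF-8 (e.g. Latin-1 characters in URLs), which can stall a pipeline that assumes UTF-8. One workaround is to normalize lines before they reach Logstash; this is a minimal Python sketch (the sample log lines are made up for illustration):

```python
def sanitize_line(raw: bytes) -> str:
    """Decode a raw log line as UTF-8, replacing any invalid byte
    sequences with U+FFFD instead of raising an error."""
    return raw.decode("utf-8", errors="replace")

# Hypothetical nginx access-log lines: one clean, one with a stray
# Latin-1 byte (0xE9) that is not valid UTF-8.
lines = [
    b'127.0.0.1 - - "GET /index.html HTTP/1.1" 200 612',
    b'127.0.0.1 - - "GET /caf\xe9 HTTP/1.1" 404 0',
]
for raw in lines:
    print(sanitize_line(raw))
```

Alternatively, if the source logs are actually Latin-1, setting the correct charset on the Logstash input codec (e.g. `codec => plain { charset => "ISO-8859-1" }`) avoids the mismatch at the source.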