I have three indices that are sent from Logstash to Elasticsearch. The logs are in the same format; the only difference is that they come from different servers and are received on different ports in Logstash. The grok pattern for them is exactly the same.
The issue I face now is that every field in each of these three indices is tripled. When I remove one of the config files from Logstash, the fields are only duplicated, and if I add another file (i.e. four Logstash configs), they are quadrupled. Really odd behaviour.
The "message" field in Elasticsearch is normal; it's only the parsed fields that are multiplying. Any idea why this is happening?
All config files in the directory are concatenated, so each event is processed by all filters and goes to all outputs unless you control this with conditionals. Why are you creating an index per server? Why not put all the events in a single index?
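For illustration, here is a minimal sketch of what that conditional routing could look like if you keep separate indices. The port numbers, type names, grok pattern, and index names are placeholders, not taken from your setup:

```
# Hypothetical sketch: tag each input with a type, then use conditionals so
# filters and outputs only apply to their own events.
input {
  tcp { port => 5141 type => "server_a" }   # placeholder ports/types
  tcp { port => 5142 type => "server_b" }
}

filter {
  # Without a conditional, this grok would run for every event from every input.
  if [type] in ["server_a", "server_b"] {
    grok {
      match => { "message" => "%{GREEDYDATA:log_message}" }   # placeholder pattern
    }
  }
}

output {
  if [type] == "server_a" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "server_a-%{+YYYY.MM.dd}"
    }
  } else if [type] == "server_b" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "server_b-%{+YYYY.MM.dd}"
    }
  }
}
```

If you drop the per-server indices entirely, the conditionals in the output section become unnecessary and a single elasticsearch output will do.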
Thanks. Any idea why messages can get lost? I am using the TCP protocol: rsyslog sends the messages to Logstash, which then parses them and sends them to Elasticsearch. I noticed quite a lot of messages being lost, hence I tried this approach.