We are reading a log file twice. One copy is sent as-is (using grok to map a key to each field in the logs) to Kafka/Elasticsearch with index xyz, and a second copy, with some fields removed (again using grok to map a key to each field), is sent to Kafka/Elasticsearch with index abcd. The Kafka broker, topic, and everything else are the same.

Most of the logs are published to both indices, but a few are missing from one index or the other.

We would like an expert view on this: is this the correct approach, and is Logstash expected to work smoothly in this scenario?
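For reference, a minimal sketch of the kind of pipeline we mean, using the clone filter to duplicate each event rather than reading the file twice. All paths, grok patterns, hosts, and field names here are illustrative, not the actual configuration:

```
input {
  file {
    path => "/var/log/app/app.log"      # illustrative path
    start_position => "beginning"
  }
}

filter {
  # Map a key to each field in the log line (illustrative pattern).
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }

  # Duplicate every event. add_tag decorates only the clones,
  # so the original copy passes through untagged.
  clone {
    clones => ["trimmed"]
    add_tag => ["trimmed"]
  }

  # Remove fields from the cloned copy only.
  if "trimmed" in [tags] {
    mutate {
      remove_field => ["msg"]           # illustrative field to drop
    }
  }
}

output {
  # A kafka output could be branched the same way (same broker/topic).
  if "trimmed" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "abcd"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "xyz"
    }
  }
}
```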
I don't see an obvious reason why this wouldn't work, but then again it's a very abstract question. The Logstash configuration, the Elasticsearch field mappings, the error logs, or examples of the missing events would give us a clue.
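For example, if Elasticsearch is rejecting some events (a mapping conflict between the two indices would do it, since they receive different field sets), the dead letter queue can retain the rejected documents for inspection. A sketch of the relevant logstash.yml settings (the path is illustrative):

```
# logstash.yml -- retain events the elasticsearch output fails to index
# (e.g. mapping conflicts) so they can be examined later.
dead_letter_queue.enable: true
path.dead_letter_queue: /var/lib/logstash/dlq   # illustrative path
```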