I have an interesting issue where data is being sent both to the new index and to the old default index. I am using Elasticsearch 2.4, and here is my output file:
I checked both indices, and there is an identical entry for each record; both copies even have the same record number.
It is very strange, because I did this exact same thing in the lab and everything worked fine. I could even change the index name, update the output to reflect the new name, and it worked like a charm. Maybe there is a slight difference somewhere, but I cannot seem to find it.
I checked the Elasticsearch logs, and there is an error in there involving the new index I created. It says "@timestamp" doesn't exist, yet @timestamp shows up in all the entries when I look at them in Kibana. If I comment out the new index and send everything to the default one, this error no longer occurs.
I tried this exact same method of adding an index in my lab environment, and I have no issues with this Elasticsearch error: all of my data goes to the defined index and is not duplicated to the default one. I think there may be some mapping issue with the production Elastic Stack, but I am not sure about that.
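One way to narrow that down would be to compare the mappings between the lab and production clusters. A sketch, assuming Elasticsearch is reachable on localhost:9200 and using a hypothetical index name my-new-index (substitute your real index and default index pattern):

```
# Dump the mapping of the new index and of the default Logstash indices,
# then diff the two outputs to spot any @timestamp field differences
curl -XGET 'http://localhost:9200/my-new-index/_mapping?pretty'
curl -XGET 'http://localhost:9200/logstash-*/_mapping?pretty'
```

If @timestamp is mapped as a date in one environment but missing or mapped differently in the other, that would point at the mapping issue suspected above.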
Yes, the files reside in /etc/logstash/logstash.conf and there are other files in that directory.
We have it set up so that each filter has its own file, and the same goes for output and input.
example: /etc/logstash/logstash.conf/01-input.conf
then all the filter confs are numbered 10 through whatever, and the output file is 99-output.conf
This helps with adding new things to the environment and makes things a little easier to find.
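Worth noting with this layout: when Logstash is pointed at a directory, it concatenates every file in it into a single pipeline, so if any file besides 99-output.conf still contains an elasticsearch output using the default index, every event will be shipped to both indices. A minimal sketch of keeping everything in one conditional output (the index name my-new-index and the type value apache are made up for illustration):

```
# 99-output.conf -- sketch only; "my-new-index" and the "apache" type are hypothetical
output {
  if [type] == "apache" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "my-new-index-%{+YYYY.MM.dd}"
    }
  } else {
    # everything else keeps going to the default logstash-* index
    elasticsearch {
      hosts => ["localhost:9200"]
    }
  }
}
```

Two unconditional elasticsearch outputs in separate files would produce exactly the duplication described at the top of this thread.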
What I found out from other articles is that there was an issue with the Kibana index under /elasticsearch/indices. I got rid of that Kibana index directory and everything started working as expected. I lost my visualizations in the process, but that was no big deal to me.
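For anyone hitting the same thing: rather than deleting files on disk, the same cleanup can be done through the delete-index API. A sketch, assuming the default .kibana index name and a node on localhost:9200:

```
# Deletes the Kibana index -- saved visualizations and dashboards are lost,
# and Kibana recreates the index on its next start
curl -XDELETE 'http://localhost:9200/.kibana'
```

Deleting via the API is generally safer than removing directories under the data path, since Elasticsearch cleans up its own state.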