I'm using Logstash to move already parsed messages in JSON format from Redis to Elasticsearch. To determine the index a document goes to, each message carries an @index field, plus a @type field for the document type.
These fields, however, also end up in Elasticsearch, which I don't want. I can remove them with remove_field in the filter section, but then they are no longer available for the index setting in the output section. Is there any way to tell Logstash to ignore those fields in the output only?
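Roughly, the pipeline looks like this (the Redis host, key, and Elasticsearch host below are placeholders, not my actual values):

```
input {
  redis {
    host      => "localhost"   # placeholder
    data_type => "list"
    key       => "logstash"    # placeholder
    codec     => "json"        # messages are already parsed JSON
  }
}

output {
  elasticsearch {
    hosts         => ["localhost:9200"]   # placeholder
    index         => "%{@index}"          # taken from the message...
    document_type => "%{@type}"
    # ...but @index and @type are also stored in the indexed document
  }
}
```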
You can copy these fields into the event's @metadata and then delete them from the event itself. The @metadata fields can be referenced in the output section, but they are never part of the event that gets written to Elasticsearch.
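A minimal sketch of that approach, assuming the fields are literally named @index and @type, a mutate filter recent enough to support the copy option, and a placeholder Elasticsearch host:

```
filter {
  mutate {
    # copy the routing fields into @metadata; @metadata travels with the
    # event through the pipeline but is not sent to the output
    copy => {
      "@index" => "[@metadata][index]"
      "@type"  => "[@metadata][type]"
    }
  }
  mutate {
    # now drop the originals from the event itself
    remove_field => [ "@index", "@type" ]
  }
}

output {
  elasticsearch {
    hosts         => ["localhost:9200"]        # placeholder
    index         => "%{[@metadata][index]}"
    document_type => "%{[@metadata][type]}"    # only relevant on older Elasticsearch versions
  }
}
```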
Using @metadata works now, at least for assigning the index (referencing a deleted field in the index setting caused errors and un-deletable indices in Cerebro). However, even with the index working, it seems the fields are still transported to Elasticsearch.