Ignore specific fields?

Hey,

I'm using Logstash to move already-parsed messages in JSON format from Redis to Elasticsearch. To determine the index a document is written to, each message carries an @index field, plus a @type field for the document type.
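For reference, a message on the Redis side might look like this (a hypothetical example; the message field is just a placeholder for the actual payload):

{
  "@index": "testindex-103",
  "@type": "filler",
  "message": "some already parsed event"
}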

In logstash, I can set the index in the output:

output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "%{@index}"
  }
}

And the type in the filter via mutate:

filter {
  mutate {
    replace => { "type" => "%{@type}" }
  }
}

These fields, however, end up in Elasticsearch, which I don't want. I can remove them via remove_field in the filter, but then they are gone before the output section can resolve the index name. Is there any way to tell Logstash to ignore those fields for the output only?

You can copy these fields into the event's @metadata and then delete them from the event itself. Fields under @metadata can be referenced anywhere in the pipeline but are never part of the event that gets written to Elasticsearch, so you can use them in the output without indexing them.
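For example, a minimal sketch based on your config (the @metadata key names are arbitrary):

filter {
  mutate {
    # Copy the routing fields into @metadata; @metadata is available
    # throughout the pipeline but is not written to outputs
    add_field => {
      "[@metadata][index]" => "%{@index}"
      "[@metadata][type]" => "%{@type}"
    }
  }
  mutate {
    # Now the originals can be dropped from the event itself
    remove_field => [ "@index", "@type" ]
  }
}

output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "%{[@metadata][index]}"
  }
}

Depending on your Logstash version, the type can be set the same way via the elasticsearch output's document_type option, e.g. document_type => "%{[@metadata][type]}".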


The metadata approach now works, at least for assigning the index (interpolating a field that had already been removed into the index name caused errors and un-deletable indices in Cerebro). But the fields themselves are still being transported to Elasticsearch.

In my Logstash config I have:

mutate { add_field => { "[@metadata][index]" => "%{@index}" } }
mutate {
    replace => { "type" => "%{@type}" }
}
mutate {
    remove_field => [ "%{@index}","%{@version}","%{@type}" ]
}

When I transmit some data and look at it in Kibana, I still see the fields I tried to remove:

t @index 		testindex-103
@timestamp 		June 20th 2017, 12:38:07.860
t @type 		filler
t @version 		1

What am I doing wrong here?

remove_field expects the names of the fields, not sprintf references to their contents, so do this instead:

remove_field => [ "@index","@version","@type" ]

That was the problem; thanks for the quick reply. It works now :smile:
