How to pass temporary data from filter to output in Logstash

How can I pass temporary data from a filter to an output in Logstash?

I did like this:

filter {
  mutate { add_field => { "[@metadata][TEMP_DATA]" => "%{custom_index}" } }
  mutate { remove_field => ["custom_index"] }
}

output {
  elasticsearch {
    index => "%{[@metadata][TEMP_DATA]}"
  }
}

But it did not work.
Does anyone have an idea how I could do that?

In what way did it not work?

It is weird because I can see on the Monitoring page in Kibana that the Logstash node is receiving data. But when I go to the Discover page in Kibana, I do not see any data.

I have created an index in Elasticsearch and when I check the index in Kibana I don't see any data.

I am using Logstash 5.5.2.

The output plugin looks as follows:

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    codec => json
    manage_template => false
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
    index => "%{[@metadata][TEMP_DATA]}"
    document_type => "ObjectEvent"
    document_id => "%{epc}"
  }
}

Use Elasticsearch's cat indices API to check what indices you have. Do you by any chance have an index named, literally, %{[@metadata][TEMP_DATA]}? Or %{custom_index}?
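
For example, assuming the same localhost cluster and credentials as in your output config:

curl -u elastic:changeme 'http://localhost:9200/_cat/indices?v'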

I have already done that and nothing shows up. I have used the stdout output plugin to display what I actually want to send to Elasticsearch, and it displays everything perfectly.

If I have something like this:

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    codec => json
    manage_template => false
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
    index => "%{custom_index}"
    document_type => "ObjectEvent"
    document_id => "%{epc}"
  }
}

and don't remove the custom_index field in the filter, it works.
But I don't want to send the custom_index field to Elasticsearch.
That's why I am using @metadata.
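
For what it's worth, a single mutate rename should also be able to move the value into @metadata in one step, since anything under [@metadata] can be referenced with %{...} in the output but is never sent to Elasticsearch (a sketch, assuming the field is named custom_index as above):

filter {
  # move custom_index into event metadata in one step;
  # metadata fields are not part of the event that gets indexed
  mutate { rename => { "custom_index" => "[@metadata][TEMP_DATA]" } }
}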

Hi @magnusbaeck,

I just tried something else. Instead of TEMP_DATA, I used another metadata field, %{[@metadata][endpoint]}.

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    codec => json
    manage_template => false
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
    index => "%{[@metadata][endpoint]}"
    document_type => "ObjectEvent"
    document_id => "%{epc}"
  }
}

and an index %{[@metadata][endpoint]} has been created, but with TEMP_DATA nothing changes.
Do you know a way to set the index name from a value without sending that value as a field to Elasticsearch?

Is this actually possible?

Comment out your elasticsearch output and change your stdout output to stdout { codec => rubydebug { metadata => true } }. What does an example event look like?
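
In other words, something like this:

output {
  # print every event to the console, including the [@metadata] fields,
  # so you can see what the index option would actually resolve to
  stdout {
    codec => rubydebug { metadata => true }
  }
}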

I have also done that, but it is not appearing.

I actually have two @metadata fields: one is [@metadata][EPC_t] and the other is [@metadata][endpoint]. Only [@metadata][EPC_t] is displayed, as you can see.

It works! BUT... I don't know why.

I moved the following code:

filter {
  mutate { add_field => { "[@metadata][endpoint]" => "%{endpoint}" } }
  mutate { remove_field => ["endpoint"] }
}

and placed it after this block:

  if [EPC_t] {
    mutate { add_field => { "[@metadata][EPC_t]" => "%{EPC_t}" } }
    mutate { remove_field => ["EPC_t"] }
  }

If I add the [@metadata][endpoint] field anywhere before the EPC_t metadata block, it does not work.

Maybe I am missing some Logstash basics? Or metadata basics? I don't get it; the two fields have nothing to do with each other....
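
For reference, a sketch of the combined order that works for me, assuming these are the only two blocks involved (as far as I understand, filters run top to bottom, and a %{field} reference to a field that does not exist yet is left as the literal text):

filter {
  if [EPC_t] {
    mutate { add_field => { "[@metadata][EPC_t]" => "%{EPC_t}" } }
    mutate { remove_field => ["EPC_t"] }
  }
  # endpoint has to exist on the event at this point, otherwise
  # %{endpoint} would end up as the literal string "%{endpoint}"
  mutate { add_field => { "[@metadata][endpoint]" => "%{endpoint}" } }
  mutate { remove_field => ["endpoint"] }
}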

it works...
