Does anyone here know why Beats data such as the beat name, version, etc. is not included after indexing in Elasticsearch?
You're talking about the beat field and its subfields? It should be included. We can't say much without seeing your configuration and a sample event produced by a stdout { codec => rubydebug } output.
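For reference, a minimal sketch of what that debug output looks like in a Logstash pipeline; it can be added next to the existing outputs while troubleshooting:

# Temporary debug output: prints every event, with all of its fields,
# to Logstash's standard output in Ruby-debug format
output {
  stdout { codec => rubydebug }
}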
Thank you for your reply, Magnus.
We have configured our Filebeat to output to Kafka only:
output.kafka:
  hosts: ["host1:9092", "host2:9093", "host3:9094"]
  codec.format:
    string: '%{[message]}{[beats]}'
  topic: 'mytopic'
while our Logstash config is:
input {
  kafka {
    topics => ["mytopic"]
    bootstrap_servers => "host1:9092,host2:9093,host3:9094"
  }
}

output {
  elasticsearch {
    hosts => ["142.122.218.12:9200"]
    index => "mycollection"
  }
}
Filebeat stdout contains the beat field.
After indexing in Elasticsearch, the beat field along with its subfields is gone/does not exist.
Please supply everything I asked for. The sample event is missing.
Sorry, I forgot to include the sample event.
But I have found the solution.
Whenever we use Filebeat to send events directly to Logstash, the Beats metadata is also included.
But if the events go from Filebeat to Kafka, by default only the message field will be sent to Kafka.
That is why we need to set the Filebeat codec format and append the Filebeat metadata such as input_type, filesource, etc.; see the config below.
output.kafka:
  hosts: ["host:9092"]
  codec.format:
    string: 'beat=%{[beat]} offset=%{[offset]} filesource=%{[source]} input_type=%{[input_type]} start=%{[message]}'
  topic: 'mytopic'
We can now parse these fields back out with a grok filter in Logstash; a sketch is shown below.
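A minimal sketch of such a filter, assuming the key=value layout produced by the codec format above (the target field names beat_info, filesource, log_message, etc. are just illustrative choices, not required names):

filter {
  grok {
    # Split the flat Kafka message back into separate fields.
    # 'start=' carries the original log line, so GREEDYDATA captures everything after it.
    match => {
      "message" => "beat=%{DATA:beat_info} offset=%{NUMBER:offset} filesource=%{DATA:filesource} input_type=%{WORD:input_type} start=%{GREEDYDATA:log_message}"
    }
  }
}

From there the extracted fields can be renamed or nested however the index mapping expects.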