I'm using the Logstash exec input (JSON response) to output to an Elasticsearch cluster (using the default mapping template).
Somehow, even though the resulting exec event has a @timestamp, my mapping is missing it, which makes it useless for my Kibana! Can someone help me understand what I am missing?
Basically, my exec input just makes a REST call to fetch another ES cluster's stats and stores the entire JSON in a second ES cluster that has Kibana built on top of it. The problem is that Kibana won't recognize the index, since @timestamp is missing from the mapping (but the data does exist).
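For reference, this is how I'm checking the mapping (the index name here is just a placeholder for my actual index):

curl -s -X GET 'http://someesnode:9200/logstash-*/_mapping?pretty'

From what I understand, one way to pin @timestamp would be an index template that declares it as a date explicitly; a rough sketch for ES 1.x (template name and index pattern are placeholders):

curl -s -X PUT 'http://someesnode:9200/_template/timestamp_template' -d '
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": { "type": "date", "format": "dateOptionalTime" }
      }
    }
  }
}'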
Mark, that is fine; it's just a play area for me. Anyhow, isn't it a basic function of a Logstash filter to actually filter an event or do some data transformation?
I still don't understand how adding a filter will help. I did try the following, but it still did not work:
filter {
  if [type] == "esl_addr_cache_es_qa_index_stats" {
    date {
      # match takes a field name (no %{} sprintf syntax), and
      # @timestamp is already ISO8601, not "YYYY-MM-dd HH:mm:ss"
      match => [ "@timestamp", "ISO8601" ]
      # copy the parsed timestamp into a separate field on success
      add_field => { "event_time" => "%{@timestamp}" }
    }
  }
}
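To at least confirm that @timestamp is present on the event itself (independent of the mapping), a temporary stdout output with the rubydebug codec can be used; this is just a debugging sketch, not my real output:

output {
  stdout { codec => rubydebug }
}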
Here is my config (pay attention to the exec input; it's a JSON response of another ES index's stats). Logstash adds the @timestamp by default. I can even see it in my source document; it's just that the mapping is missing it!
Input using lumberjack (a.k.a. logstash-forwarder) and exec; output using elasticsearch:
input {
  # Elasticsearch indices stats for ESL Address Cache - QA
  exec {
    command => "curl -s -X GET http://someesnode:9200/_stats"
    codec => "json"
    interval => 60
    type => "esl_addr_cache_es_qa_index_stats"
  }
}
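With the json codec, the parsed _stats response lands as top-level fields on the event, so each event should look roughly like this (values elided; as far as I know, the exec input also adds a command field):

{
  "@timestamp" => "2015-10-21T10:00:00.000Z",
  "@version" => "1",
  "type" => "esl_addr_cache_es_qa_index_stats",
  "command" => "curl -s -X GET http://someesnode:9200/_stats",
  "_shards" => { ... },
  "_all" => { ... },
  "indices" => { ... }
}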
filter {
  if [type] == "esl_addr_cache_es_qa_index_stats" {
    date {
      # match takes a field name (no %{} sprintf syntax), and
      # @timestamp is already ISO8601, not "YYYY-MM-dd HH:mm:ss"
      match => [ "@timestamp", "ISO8601" ]
      # copy the parsed timestamp into a separate field on success
      add_field => { "event_time" => "%{@timestamp}" }
    }
  }
}
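The elasticsearch output block isn't pasted above; a minimal sketch of what it looks like (host and index are placeholders, and option names vary between Logstash versions, e.g. older releases use host and protocol instead of hosts):

output {
  elasticsearch {
    hosts => ["someothernode:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}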