Logstash default template for elasticsearch output

So I'm using the Logstash exec input (which returns a JSON response) to send events to an Elasticsearch cluster, using the default mapping template.

Somehow, even though the resulting exec event has a @timestamp, my mapping is missing it, which makes the index useless for my Kibana! Can someone help me understand what I am missing?

Basically, my exec input just makes a REST call to fetch another ES cluster's stats and stores the entire JSON response in a second ES cluster that has Kibana built on top of it. The problem is that Kibana won't recognize the index, since @timestamp is missing from the mapping (but the data does exist).

You probably want a date filter to force LS to use that field.
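Something along these lines (a minimal sketch — it assumes your JSON carries its timestamp in a field called event_time in ISO8601 form; adjust the field name and pattern to whatever your payload actually uses):

```
filter {
  date {
    # Parse event_time and write the result into @timestamp (the default target)
    match => [ "event_time", "ISO8601" ]
  }
}
```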

Mark, as I mentioned, it's already there in the source. Here is a snippet showing it, from the ES indices stats I am trying to capture (JSON response):

"_source": {
  "_shards": {
    "total": 42,
    "successful": 21,
    "failed": 0
  },
  ....
  ....
  "@version": "1",
  "@timestamp": "2015-07-29T19:35:49.557Z",
  "type": "esl_addr_cache_es_qa_index_stats",
  "event_time": "2015-07-29T19:35:49.557Z",
  "host": "0.0.0.0",
  "command": "curl -s -X GET http://someip:9200/_stats"
}

Right, but it won't just take that and use it simply because it is there.
You need to use a date filter.

Also, your cluster is open to the internet. THIS IS BAD.

In fact, @pavan_bkv, all your data is accessible online to anybody.

Mark, that is fine; it's just a play area for me. Anyhow, isn't it a basic function of an LS filter to actually filter an event or do some data transformation?

I still don't understand how adding a filter will help. I did try this, but it still did not work:

filter {
  if [type] == "esl_addr_cache_es_qa_index_stats" {
    date {
      match => [ "%{@timestamp}", "YYYY-MM-dd HH:mm:ss" ]
      add_field => ["event_time", "%{@timestamp}"]
    }
  }
}

Yes, LS does do filtering, but it's up to you to tell it what to do; otherwise it will just pass through what it gets without any intelligence.
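Note that the filter above has two problems: the date filter's match setting takes a literal field name plus one or more format patterns, not a %{} sprintf reference, and the @timestamp value shown earlier is ISO8601 rather than "YYYY-MM-dd HH:mm:ss". A corrected sketch of that filter would look something like:

```
filter {
  if [type] == "esl_addr_cache_es_qa_index_stats" {
    date {
      # Field name (no %{}), and a pattern matching the actual value format
      match => [ "@timestamp", "ISO8601" ]
    }
  }
}
```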

If you post your entire config it might help.

Here is my config (pay attention to the exec input; it's a JSON response containing another ES cluster's index stats). LS adds the @timestamp by default. I can even see it in my source document; it's just that the mapping is missing it!

Input using lumberjack (a.k.a logstash forwarder)

Output using elasticsearch

input {
  # Elasticsearch indices stats for ESL Address Cache - QA
  exec {
    command => "curl -s -X GET http://someesnode:9200/_stats"
    codec => "json"
    interval => 60
    type => "esl_addr_cache_es_qa_index_stats"
  }
}

filter {
  if [type] == "esl_addr_cache_es_qa_index_stats" {
    date {
      match => [ "%{@timestamp}", "YYYY-MM-dd HH:mm:ss" ]
      add_field => ["event_time", "%{@timestamp}"]
    }
  }
}

output {
  if [type] == "esl_addr_cache_es_qa_index_stats" {
    elasticsearch {
      bind_host => "someip"
      index => "es-stats-logstash-%{+YYYY.MM.dd}"
      cluster => "log_aggr_es_cluster"
      codec => "json"
    }
  }
}

Ok now I understand! :smile:

It should be using that @timestamp given it's LS that is actually generating it. What does the mapping for the index look like?
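You can fetch it with the get-mapping API — for example (assuming one of the daily indices created by the config above):

```
curl -s -X GET 'http://someip:9200/es-stats-logstash-2015.07.29/_mapping'
```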

Mark, I can't put the entire mapping here; is there a way to attach it as a file?

Put it in gist/pastebin/etc and link to it.

Thanks Mark. Fixed the issue with a custom mapping template:

{
  "template": "es-stats-logstash-*",
  "settings": {
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "esl_addr_cache_es_qa_index_stats": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "dateOptionalTime",
          "index": "analyzed"
        }
      }
    }
  }
}
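For completeness, a template like this has to be registered with the index-template API before the daily indices are created, so it gets applied to every new es-stats-logstash-* index. A sketch ("es_stats_template" is just an assumed name, and template.json is a file holding the JSON above):

```
curl -X PUT 'http://someip:9200/_template/es_stats_template' -d @template.json
```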