Order logs in Kibana based on log timestamp

Hi,
I want to order my logs in Kibana based on the log timestamp.

my logstash.conf:

filter {
  grok {
    match => [ "message", "%{DATESTAMP:timestamp}" ]
  }
  date {
    locale => "en"
    match => [ "timestamp", "YYYY-MM-dd HH:mm:ss" ]
    target => ["timestamp"]
  }
}

  "@version" => "1",
"@timestamp" => "2017-03-27T05:40:21.660Z",
      "host" => "Vishnu-Prasad.local",
      "path" => "/Users/tcstsb3/Downloads/log/2.log",
 "timestamp" => "2016-12-25T06:43:57.000Z"

Note: I'm ingesting multiple log files into Logstash, and in Kibana I want to display them ordered by the log time.
With the above conf I got the timestamp out of the log file, but I don't know how to handle that time so the logs show up in the proper order in Kibana.
Can you help me with what to put in the conf filter so that Kibana shows the proper order?

Thanks in advance

Your date pattern is clearly wrong; you have a space in your pattern but a "T" in the timestamp field. As a shortcut you should be able to use "ISO8601" as the pattern.

Thank you, friend :slight_smile:
Here is my conf:
filter {
  grok {
    match => [ "message", "%{DATESTAMP:timestamp}" ]
  }
  date {
    locale => "en"
    match => [ "timestamp", "ISO8601" ]
    target => ["timestamp"]
  }
}

{
"message" => "12-24-2016 12:13:57 INFO - HV000001: Hibernate Validator 5.2.4.Final\r",
"@version" => "1",
"@timestamp" => "2017-03-27T06:14:30.469Z",
"host" => "Vishnu-Prasad.local",
"path" => "/Users/tcstsb3/Downloads/log/2.log",
"timestamp" => "12-24-2016 12:13:57",
"tags" => [
[0] "_dateparsefailure"
]
}

Now it's tagging the events with _dateparsefailure,
and in Kibana the logs are still not shown in the proper order.

But that's a completely different timestamp format. In this case "MM-dd-YYYY HH:mm:ss" should work. Note that you can list multiple patterns in the same date filter.
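A minimal sketch of what that could look like (keeping your existing timestamp target field; adjust names to your setup):

filter {
  date {
    locale => "en"
    # try the raw log format first, then fall back to ISO8601
    match => [ "timestamp", "MM-dd-YYYY HH:mm:ss", "ISO8601" ]
    # same target field you already use; leave it out to write to @timestamp instead
    target => "timestamp"
  }
}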

If you don't use the default @timestamp field for your event timestamps remember to reconfigure your Kibana index pattern so it uses that field instead.

I created a new index.
Now I'm getting this error in Kibana:
Courier Fetch: 70 of 80 shards failed.
How do I overcome this?

Index: logstash-2015.03.23 Shard: 0 Reason: SearchParseException[[logstash-2015.03.23][0]: from[-1],size[500]: Parse Failure [Failed to parse source [{"size":500,"sort":{"tstamp":"desc"},"query":{"filtered":{"query":{"query_string":{"analyze_wildcard":true,"query":""}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gte":1489861800000,"lte":1490466599999}}}],"must_not":[]}}}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"":{}}},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"3h","pre_zone":"+05:30","pre_zone_adjust_large_interval":true,"min_doc_count":0,"extended_bounds":{"min":1489861800000,"max":1490466599999}}}},"fields":["*","_source"],"script_fields":{},"fielddata_fields":["_timestamp","@timestamp","tstamp"]}]]]; nested: SearchParseException[[logstash-2015.03.23][0]: from[-1],size[500]: Parse Failure [No mapping found for [tstamp] in order to sort on]];

It seems there is no mapping for tstamp. How do I add a mapping for it?

Have you told Kibana to visualize or otherwise do something with a tstamp field?

Yes buddy.

I modified the existing index pattern by adding a new field called tstamp,
then I clicked to override the changes.

I don't understand what you did, but Kibana is complaining that you asked it to do something with tstamp but there was no such field.
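One way to double-check (a sketch, assuming Elasticsearch is reachable on localhost:9200 and your indices follow the logstash-* naming):

# ask Elasticsearch which logstash-* indices actually contain a tstamp field
curl -XGET 'http://localhost:9200/logstash-*/_mapping/field/tstamp?pretty'
# indices that return an empty mapping for the field are the ones the sort on tstamp will fail on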

I cleared that one, buddy.

Now my only concern is that if I refresh Kibana it shows:
Courier Fetch: 70 of 80 shards failed.
For this I searched on Google and they said to add these lines to elasticsearch.yml:

threadpool.search.type: fixed
threadpool.search.size: 200
threadpool.search.queue_size: 20000

but it's still showing that error.

**Note:** I'm using a Mac with a local IP.

Don't change the threadpool settings unless you have a good reason and a solid understanding of why you're making the change.

The ES logs should contain details about why the shards are failing the queries.
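A couple of places to start looking (a sketch, assuming a default local install; the host, log path, and index names are assumptions that depend on your setup):

# overall index and shard health
curl -XGET 'http://localhost:9200/_cat/indices?v'
curl -XGET 'http://localhost:9200/_cat/shards?v'

# reproduce the failing query in Kibana and watch the node log for the underlying exception
tail -f /usr/local/var/log/elasticsearch/elasticsearch.log   # log path is an assumption; it depends on how ES was installed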
