Time to timestamp issue


(Subash) #1

Hi All,

I use a date filter to copy the @time value from the message into the @timestamp field, but @timestamp keeps its ingest-time value:
message ==> "@time": "2017-12-04T17:44:34"
"@timestamp" => 2017-12-28T12:29:22.096Z,

.conf file content:

input {
  file {
    path => "/home/sdc/PycharmProjects/Kibana_Pro/utility/MAYOPETMR01_2017-12-04.gz.log"
    type => "log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  date {
    match => ["@time", "EEE MMM dd HH:mm:ss YYYY"]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "gesyslog_test"
    document_type => "log"
  }
  stdout { codec => rubydebug }
}

Output:

{
"type" => "log",
"message" => "{"@time": "2017-12-04T17:44:34", "@code": "2212884484", "text": "Exception Class: Unknown Severity: Unknown\nFunction: ", "@systemID": "MA"NSP SCP:RfHubCanHWO::RfBias 5462", "detail": {"view_Level": "4", "seq_Num": "0", "name": null, "format": "1", "h_Name": "prtte1_Seq": "4767637676"}, "@type": "log"}",
"@version" => "1",
"path" => "/home/sdc/PycharmProjects/Kibana_Pro/utility/4444444_2017-12-04.gz.log",
"@timestamp" => 2017-12-28T12:29:22.096Z,
"host" => "sdc-VirtualBox"
}

The date filter is not replacing @timestamp. Please help.


(Magnus Bäck) #2

The @time field obviously doesn't match the "EEE MMM dd HH:mm:ss YYYY" pattern you've given. Try "ISO8601" instead.


(Subash) #3

Thanks for replying. The same issue is occurring after changing to "ISO8601".

Log file test_site.log

{"@type": "Log", "@source": "yeryryryr", "@systemID": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "162", "host_Name": "test", "format": "1", "seq_Num": "1"}, "@time": "2017-12-06T05:01:13", "text": "Signal 15 was received, causing a system shutdown.", "@code": "501370064"}
{"@type": "LOG", "@source": "Uyryryryry", "@systemID": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "130", "host_Name": "test", "format": "1", "seq_Num": "2"}, "@time": "2017-12-06T05:01:13", "text": "start script failed", "@code": "0"}
{"@type": "LOG", "@source":" yyyy", "@systemID": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "225", "host_Name": "test", "format": "1", "seq_Num": "3"}, "@time": "2017-12-06T05:01:13", "text": "Exception Class: Unknown\nFunction: yryryyr", "@code": "200002379"}
{"@type": "Log", "@source": "testts1", "@systemID": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "221", "host_Name": "test", "format": "1", "seq_Num": "4"}, "@time": "2017-12-06T05:01:13", "text": "Exception Class: Unknown\nFunction", "@code": "200002379"}
sdc@sdc-VirtualBox:~/PycharmProjects/Kibana_Pro/utility$

.conf file content:

sdc@sdc-VirtualBox:~/PycharmProjects/Kibana_Pro/utility$ cat /home/sdc/PycharmProjects/Kibana_Pro/utility/logstash.conf
input {
  file {
    path => "/home/sdc/PycharmProjects/Kibana_Pro/utility/test_site.log"
    type => "log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  date {
    match => ["@time", "ISO8601"]
    #match => ["@time", "yyyy-MM-dd HH:mm:ss"]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "gesyslog_test"
    document_type => "log"
  }
  stdout { codec => rubydebug }
}
sdc@sdc-VirtualBox:~/PycharmProjects/Kibana_Pro/utility$

Output message

root@sdc-VirtualBox:~# /usr/share/logstash/bin/logstash --path.settings=/etc/logstash -f /home/sdc/PycharmProjects/Kibana_Pro/utility/logstash.conf --path.data /usr/share/logstash/data
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"type" => "log",
"@version" => "1",
"path" => "/home/sdc/PycharmProjects/Kibana_Pro/utility/test_site.log",
"host" => "sdc-VirtualBox",
"@timestamp" => 2017-12-29T11:42:23.968Z,
"message" => .....................

Could you please suggest what needs to be corrected?

thanks,
Subash


(Subash) #5

Can someone look at this issue?


(Magnus Bäck) #6

It doesn't look like you're parsing the JSON input in any way (hence, there is no @time field to parse). What does the full output from the stdout plugin look like?


(Subash) #7

Full output for one entry:

{
"type" => "log",
"message" => "{"@time": "2017-12-04T17:44:34", "@code": "2212884484", "text": "Exception Class: Unknown Severity: Unknown\nFunction: ", "@systemID": "MA"NSP SCP:RfHubCanHWO::RfBias 5462", "detail": {"view_Level": "4", "seq_Num": "0", "name": null, "format": "1", "h_Name": "prtte1_Seq": "4767637676"}, "@type": "log"}",
"@version" => "1",
"path" => "/home/sdc/PycharmProjects/Kibana_Pro/utility/4444444_2017-12-04.gz.log",
"@timestamp" => 2017-12-28T12:29:22.096Z,
"host" => "sdc-VirtualBox"
}


(Magnus Bäck) #8

Right, no @time field. Your event only has type, message, @version, path, @timestamp, and host fields. Use a json or json_lines codec in your file input, or process the message field with a json filter.
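For example, a sketch of a filter block that first parses the JSON and then the timestamp (untested; assumes your events look like the lines you posted):

```conf
filter {
  # Parse the JSON document in the message field into top-level fields.
  json {
    source => "message"
  }
  # Now the @time field exists and can be parsed into @timestamp.
  date {
    match => ["@time", "ISO8601"]
    target => "@timestamp"
  }
}
```

Alternatively, set `codec => json` on the file input so each line is parsed as it is read.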


(Subash) #9

Thanks Magnus.
I have updated the config accordingly. Now I am getting the error below in /var/log/logstash/logstash-plain.log, and Logstash is stuck. Please advise.

[2018-01-12T18:14:41,753][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[gesyslog_test][3] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[gesyslog_test][3]] containing [2] requests]"})
[2018-01-12T18:14:41,753][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>4}


(Krunal Kalaria) #10

Hi @uvarisubash, try this. I don't know whether it will work in your case, but I hope it does!

date {
  match => ["@time", "UNIX_MS"]
  target => "Time"
}


(Subash) #11

Thanks Krunal.

Now I am getting the error below in /var/log/logstash/logstash-plain.log:

[2018-01-12T19:13:43,359][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[gesyslog_test][4] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[gesyslog_test][4]] containing [29] requests]"})
[2018-01-12T19:13:43,359][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>29}
[2018-01-12T19:13:43,401][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>500, :url=>"http://localhost:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}


(Magnus Bäck) #12

Your ES cluster is in bad health. Look in the ES logs to find out more.
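A quick way to check is the cluster health and shard APIs (sketch, assuming ES is listening on localhost:9200 as in your output config):

```shell
# Overall cluster health: look for status red/yellow and unassigned_shards
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-shard state for the index the bulk requests are failing against
curl -s 'http://localhost:9200/_cat/shards/gesyslog_test?v'
```

"primary shard is not active" usually means the index is red, so the root cause will be in the Elasticsearch logs, not Logstash's.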


(system) #13

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.