_dateparsefailure: change @timestamp to timestamp for Apache PHP error log

Hi,

I am using an Apache error log whose timestamp looks like "Wed May 10 17:21:26.820378 2017".

Sample log:

[Wed May 10 17:21:26.820378 2017] [proxy_fcgi:error] [pid 6803:tid 139780532639488] [client 10.229.1.47:22320] [realclient 182.69.128.5] AH01071: Got error 'PHP message: Request: data/number/632296?stype=badge&btype=8 Start: 1494436886.4704 Method: GET Data: End: 1494436886.6548 Time Taken: 00:00:00.184\nPHP message: Request: data/businessid/632296/status/2?minrating=5&sindex=0&count=10&incl_blank=0 Start: 1494436886.6553 Method: GET Data: End: 1494436886.8191 Time Taken: 00:00:00.164\n', referer http://www.example.com/

Below is my Logstash configuration:

input {
  beats {
    port => 5043
    type => "apache-php"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "apache-php" {
    grok {
      match => { 'message' => '\[(?<timestamp>%{DAY} %{MONTH:month} %{MONTHDAY:day} %{TIME:time} %{YEAR:year})\] \[.*:%{LOGLEVEL:loglevel}\] \[pid %{NUMBER:pid}:tid %{NUMBER:tid}\] \[client %{IP:client_ip}:%{NUMBER:client_port:int}\] \[realclient %{IP:real_client_ip}\] %{GREEDYDATA:errormsg}' }
      add_field => { "timestamp" => "%{day}-%{month}-%{year} %{time}" }
      remove_field => [ "day", "month", "year", "time" ]
    }
    date {
      match => [ "timestamp" , "dd-MMM-yyyy HH:mm:ss" , "dd-MMM-yyyy HH:mm:ss.SSS" , "ISO8601" , "dd-MMM-yyyy HH:mm:ss.SSSSSS" ]
      target => "@timestamp"
      remove_field => "timestamp"
    }
  }
}

output {
  if [type] == "apache-php" {
    elasticsearch {
      hosts => "elasticsearchXXXXXXXXXX.us-east-1.es.amazonaws.com:443"
      ssl => "true"
      manage_template => false
      index => "apache-php"
      flush_size => 100
    }
    stdout { codec => rubydebug }
  }
}

I am always getting a _dateparsefailure tag. What am I doing wrong here? I have searched but am unable to work out what needs to be done to fix this.

What does the timestamp field look like? What does the Logstash log say about the date parsing?

This is what I see on the dashboard:

tags: beats_input_codec_plain_applied, _dateparsefailure
timestamp: Thu May 11 11:27:35.906467 2017, 11-May-2017 11:27:35.906467
@timestamp: May 11th 2017, 16:57:36.836 (this is not what I want; I am synchronising old logs as well, but they are getting the current date and time)

You didn't answer both my questions. In the future please answer all questions so we're not wasting any time.

It looks like you're capturing the timestamp in two ways: first with (?<timestamp>%{DAY}... in your grok expression, and then by adding to the same field with add_field (turning the field into an array). Either way should work fine, but please pick one; don't do both.
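
As an illustration of the first approach, here is a minimal sketch that keeps only the named capture and drops the add_field/remove_field pair. It assumes the Apache timestamp format shown above; the rest of your pattern is elided, and "rest" is just a placeholder field name:

filter {
  grok {
    # Single capture: the whole bracketed date lands in one "timestamp" string,
    # so the date filter receives a plain string rather than an array.
    # Literal brackets are escaped with backslashes.
    match => { 'message' => '\[(?<timestamp>%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR})\] %{GREEDYDATA:rest}' }
  }
  date {
    # Joda-style pattern matching e.g. "Wed May 10 17:21:26.820378 2017".
    # Joda only keeps millisecond precision, so .820378 is stored as .820.
    match => [ "timestamp", "EEE MMM dd HH:mm:ss.SSSSSS yyyy" ]
    target => "@timestamp"
    remove_field => [ "timestamp" ]
  }
}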

My bad for not replying properly. I looked at the logs, and only one log file has been created, as below:

root@ip-172-30-4-103:/var/log/logstash# pwd
/var/log/logstash
root@ip-172-30-4-103:/var/log/logstash# ls
logstash-plain.log

Logstash version

/usr/share/logstash/bin/logstash --version
logstash 5.4.0

OS : Ubuntu 16.04

I do not see any errors in the log file after the startup messages, shown below:

[2017-05-12T08:16:37,191][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.4-java/vendor/GeoLite2-City.mmdb"}
[2017-05-12T08:16:37,414][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.4-java/vendor/GeoLite2-City.mmdb"}
[2017-05-12T08:16:37,510][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>3, "pipeline.batch.size"=>250, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>750}
[2017-05-12T08:16:38,154][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5046"}
[2017-05-12T08:16:38,201][WARN ][logstash.inputs.beats ] Beats input: SSL Certificate will not be used
[2017-05-12T08:16:38,202][WARN ][logstash.inputs.beats ] Beats input: SSL Key will not be used
[2017-05-12T08:16:38,203][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5043"}
[2017-05-12T08:16:38,214][INFO ][logstash.pipeline ] Pipeline main started
[2017-05-12T08:16:38,283][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-12T08:16:41,127][INFO ][logstash.inputs.s3 ] Using default generated file for the sincedb {:filename=>"/usr/share/logstash/.sincedb_b35047b42086d04ce9bf9fac20401019"}
[2017-05-12T08:16:41,150][INFO ][logstash.inputs.s3 ] Using default generated file for the sincedb {:filename=>"/usr/share/logstash/.sincedb_cf18dec491ac193434632184239c16d8"}

As you suggested, I made changes to the grok pattern. Now I am not getting _dateparsefailure, but I am still not seeing the @timestamp field modified either.

PS: I am also very new to the ELK stack.

What do your events look like now? Please copy/paste from the JSON tab in Kibana's discover panel. I want to see the raw JSON message. The result of a stdout { codec => rubydebug } output on the Logstash side is also fine.

Here is the JSON message from Kibana:

{
  "_index": "apache-php",
  "_type": "apache-php",
  "_id": "AVv770VXFCuZCGsi-3T3",
  "_score": null,
  "_source": {
    "offset": 1735235,
    "input_type": "log",
    "pid": "9365",
    "source": "/mnt/logs/example/error-2017-05-12-07_52_39.log",
    "message": "[Fri May 12 09:13:29.611406 2017] [proxy_fcgi:error] [pid 9365:tid 140289033156352] [client 10.229.1.47:12458] [realclient 102.6.8.19] AH01071: Got error 'PHP message: Request: business/number/145738889100?stype=bzsite Start: 1494580407.1975 Method: GET Data: End: 1494580407.8891 Time Taken: 00:00:00.692\nPHP message: Request: review/templateid/145738889100/status/2?stype=bzsite&op=3&sindex=0&count=1 Start: 1494580407.8895 Method: GET Data: End: 1494580408.089 Time Taken: 00:00:00.200\nPHP message: Request: review/templateid/145738889100/status/2?stype=bzsite&op=3&sindex=0&count=25 Start: 1494580408.0896 Method: GET Data: End: 1494580408.5091 Time Taken: 00:00:00.420\nPHP message: Request: review/business/145738889100/summary Start: 1494580408.5114 Method: GET Data: End: 1494580409.6108 Time Taken: 00:00:01.099\n'",
    "type": "apache-php",
    "tid": "140289033156352",
    "errormsg": "AH01071: Got error 'PHP message: Request: business/number/145738889100?stype=bzsite Start: 1494580407.1975 Method: GET Data: End: 1494580407.8891 Time Taken: 00:00:00.692\nPHP message: Request: review/templateid/145738889100/status/2?stype=bzsite&op=3&sindex=0&count=1 Start: 1494580407.8895 Method: GET Data: End: 1494580408.089 Time Taken: 00:00:00.200\nPHP message: Request: review/templateid/145738889100/status/2?stype=bzsite&op=3&sindex=0&count=25 Start: 1494580408.0896 Method: GET Data: End: 1494580408.5091 Time Taken: 00:00:00.420\nPHP message: Request: review/business/145738889100/summary Start: 1494580408.5114 Method: GET Data: End: 1494580409.6108 Time Taken: 00:00:01.099\n'",
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "client_port": 12458,
    "@timestamp": "2017-05-12T09:13:29.611Z",
    "loglevel": "error",
    "@version": "1",
    "beat": {
      "hostname": "web-host-1",
      "name": "web-host-1",
      "version": "5.4.0"
    },
    "host": "web-host-1",
    "real_client_ip": "102.6.8.19",
    "client_ip": "10.229.1.47"
  },
  "fields": {
    "@timestamp": [
      1494580409611
    ]
  },
  "sort": [
    1494580409611
  ]
}

Okay, and what's wrong? Assuming the original log file's timezone is UTC, things look fine.

Yeah, from the JSON it looks perfect, but when I look at the Kibana UI rather than the JSON output I see things like this:

Log message timestamp: "[Fri May 12 09:59:17.419071 2017]"
@timestamp: May 12th 2017, 15:29:17.880

The Logstash and web server system time is UTC.

Does it convert the timezone to my browser's local time zone (IST) as well? I wanted to see, by default, the same timestamp value that comes in the log message.

Yeah, from the JSON it looks perfect, but when I look at the Kibana UI rather than the JSON output I see things like this

Then please say that so we don't have to play 20 questions.

Does it convert the timezone to my browser's local time zone (IST) as well?

Yes, Kibana does that by default.

I wanted to see, by default, the same timestamp value that comes in the log message.

There's a Kibana setting (dateFormat:tz, under Management → Advanced Settings) for the timezone adjustment. However, Logstash converts the original timestamp to UTC for storage in Elasticsearch, so unless the input is UTC (which it happens to be in your case) you won't be able to see the original timestamp.
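
For what it's worth, if your logs had not been in UTC, you would tell the date filter which zone to interpret the wall-clock time in. A minimal sketch, assuming hypothetical IST-local log files:

date {
  match => [ "timestamp", "EEE MMM dd HH:mm:ss.SSSSSS yyyy" ]
  # Interpret the parsed wall-clock time as IST. Logstash still stores
  # UTC in @timestamp; Kibana converts it back to the browser zone.
  timezone => "Asia/Kolkata"
}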


Perfect, thank you :smile: That is what I was missing.
