Failed to parse [timestamp]

(Andrii Cherkasov) #1

Hey guys,
Hope you are all doing great!

I am having an issue parsing dates in Tomcat logs. Elasticsearch throws errors in debug and stops accepting new messages.
Here is an example of log lines:

2017-02-27 13:53:14 INFO SAMLDefaultLogger:127 - AuthNRequest;SUCCESS;1111.11.114.239;https://myhost

And here is an error message seen in elasticsearch.log

[2017-04-11 03:36:17,476][DEBUG][action.bulk ] [node-0] [logstash-2017.02.27][1] failed to execute bulk item (index) index {[logstash-2017.02.27][SP][AVtcld6y5SO0hmeim_Sn], source[{"message":"2017-02-27 13:53:14 INFO SAMLDefaultLogger:127 - AuthNRequest;SUCCESS;;https://myhost:443/saml/somedata;;","@version":"1","@timestamp":"2017-02-27T21:53:14.000Z","type":"SP","file":"/myfile/catalina.out","host":"myhost","offset":"17851028","token":"prodLogs","purpose":"app","timestamp":"2017-02-27 13:53:14","severity":"INFO","className":"SAMLDefaultLogger:127","logmessage":"AuthNRequest;SUCCESS;;https://hostm:443/saml/meta;;;"}]}
MapperParsingException[failed to parse [timestamp]]; nested: IllegalArgumentException[Invalid format: "2017-02-27 13:53:14" is malformed at " 13:53:14"];
at org.elasticsearch.index.mapper.FieldMapper.parse(
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(
at org.elasticsearch.index.mapper.DocumentParser.parseAndMergeUpdate(
at org.elasticsearch.index.mapper.DocumentParser.parseDynamicValue(

Here is how I parse date in logstash:

  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss", "dd-MMM-yyyy HH:mm:ss.SSS" ]
    locale => "en"
  }

It looks like Logstash parses the date fine, but then Elasticsearch just doesn't like the format.

Am I missing something obvious?
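
One workaround I am considering (just a sketch, untested): have the date filter drop the raw timestamp field once @timestamp is set, so Elasticsearch never tries to map the non-ISO string as a date:

  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss", "dd-MMM-yyyy HH:mm:ss.SSS" ]
    locale => "en"
    remove_field => [ "timestamp" ]
  }

As far as I understand, remove_field only fires when the filter succeeds, so events that fail to parse would keep the field for debugging.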

(Jimmy Kuang) #2

Your date filter uses a period (ss.SSS), while the log shows a comma (ss,SSS).

(Jimmy Kuang) #3

Disregard my previous comment, I was looking at the wrong log.

(Jimmy Kuang) #4

I used your log input and date filter and I didn't get this error message. Please provide the full Logstash configuration file. What version of Logstash and ES are you using?

(Andrii Cherkasov) #5

Hey Jimmy

Thank you for the response!

I am using logstash 1:2.2.2-1 and ES 2.4.4-1

Here is the config:

filter {
  if [type] == "SP" {
    multiline {
      patterns_dir => ["/etc/logstash/patterns/"]
      negate => true
      what => "previous"
    }

    if "_grokparsefailure" in [tags] {
      drop { }
    }

    grok {
      break_on_match => true
      patterns_dir => ["/etc/logstash/patterns/"]
      match => [ "message", "%{MYTOMCATLOG}" ]
      match => [ "message", "%{UNKNOWNLOG}" ]
    }

    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss", "dd-MMM-yyyy HH:mm:ss.SSS" ]
      locale => "en"
    }
  }
}

And here are the patterns I use:


MYTOMCATLOG %{MY_TOMCAT_DATESTAMP:timestamp} %{LOGLEVEL:severity} +%{NOTSPACE:className} +- +%{JAVALOGMESSAGE:logmessage}
UNKNOWNLOG %{UNKNOWN_DATESTAMP:timestamp} +%{LOGLEVEL:severity} +%{NOTSPACE:action} +%{NOTSPACE:className} +%{GREEDYDATA:message}

There are two kinds of dates in my logs:
2016-09-15 05:27:54
15-Sep-2016 05:27:42.567
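
If the raw timestamp field has to stay searchable, I believe an index template could declare both formats explicitly instead of relying on dynamic mapping. A sketch, untested, with a placeholder template name:

  PUT /_template/tomcat_timestamp
  {
    "template": "logstash-*",
    "mappings": {
      "SP": {
        "properties": {
          "timestamp": {
            "type": "date",
            "format": "yyyy-MM-dd HH:mm:ss||dd-MMM-yyyy HH:mm:ss.SSS"
          }
        }
      }
    }
  }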

When I start harvesting logs, it works for some time and then Logstash stops accepting new data.

(Jimmy Kuang) #6

What made you aware that Logstash stopped accepting new data? Did you see the log file itself grow in size while the data wasn't getting ingested into ES?

(Andrii Cherkasov) #7

Logstash-forwarder starts showing messages like the following. After this there are no new entries in the ES log, even though there is still data to be parsed; it simply stops working...

2017/04/12 09:28:20.947736 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2017/04/12 09:28:20.948149 Connecting to []:5000 (
2017/04/12 09:28:21.011529 Connected to
2017/04/12 09:28:21.019767 Read error looking for ack: EOF
2017/04/12 09:28:21.019868 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2017/04/12 09:28:21.020183 Connecting to []:5000 (
2017/04/12 09:28:36.020538 Failed to tls handshake with read tcp i/o timeout
2017/04/12 09:28:37.020932 Connecting to []:5000 (
2017/04/12 09:28:51.335528 Connected to
2017/04/12 09:28:56.348703 Read error looking for ack: EOF
2017/04/12 09:28:56.348798 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2017/04/12 09:28:56.349145 Connecting to []:5000 (
2017/04/12 09:28:56.437510 Connected to
2017/04/12 09:28:56.443756 Read error looking for ack: EOF
2017/04/12 09:28:56.443837 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2017/04/12 09:28:56.444092 Connecting to []:5000 (
2017/04/12 09:29:11.444418 Failed to tls handshake with read tcp i/o timeout
2017/04/12 09:29:12.444777 Connecting to []:5000 (
2017/04/12 09:29:26.753509 Connected to
2017/04/12 09:29:31.763917 Read error looking for ack: EOF
2017/04/12 09:29:31.764016 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2017/04/12 09:29:31.764346 Connecting to []:5000 (
2017/04/12 09:29:31.827521 Connected to
2017/04/12 09:29:31.834425 Read error looking for ack: EOF
2017/04/12 09:29:31.834501 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2017/04/12 09:29:31.834760 Connecting to []:5000 (

I have just installed a new ELK 5 stack (previously I was working with ELK 4.x) with the same config, and it worked fine: all the data was parsed as it should, without any error messages. So I think I'm fine now.

Thank you very much for help!


(Jimmy Kuang) #8

I see an error in the log.

Failed to tls handshake with read tcp i/o timeout

I would recommend checking your network and disk partitions, since the TLS handshake failed with an I/O timeout. There should be some OS logs that tell you what happened.

(Andrii Cherkasov) #9


Logstash-forwarder and Logstash are on the same server (as well as Elasticsearch), so there should be no traffic limitation. It simply stops accepting new messages after the timestamp errors appear in Elasticsearch's log. But it looks like the problem is gone after the upgrade to ELK 5.

(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.