Date filter recognizing custom grok pattern?

I am dealing with a log file that has a special timestamp,

2014-12-14 23:59:40.227 -8

ISO8601 will recognize the date and time but not the time zone, so I built my own grok pattern. When I then tried to get the date filter to recognize my customized timestamp, Logstash refused to start with an "Illegal pattern component" error. I have included the pattern directory under grok; I tried to include it under the date plugin as well but that wasn't allowed.
Has anyone run into this situation? Thanks in advance!

The date filter only accepts the Joda-Time tokens plus the Logstash-specific ISO8601, UNIX, UNIX_MS, and TAI64N tokens. Grok patterns are not supported.

In your case you'll probably want to use one or two filters to transform "2014-12-14 23:59:40.227 -8" into "2014-12-14 23:59:40.227 -0800", which Joda-Time's Z token will match (or, if it parses, you may be able to use ISO8601 directly).
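
For example, once the offset looks like "-0800", a date filter along these lines should do it (a rough sketch, assuming the string ends up in a field named "timestamp"):

date {
  match => ["timestamp", "yyyy-MM-dd HH:mm:ss.SSS Z"]
}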

Thank you Magnus!
Would you be able to suggest some filters I could look into? Also, why won't it work if I simply use the following Joda-Time tokens:

"YYYY-MM-dd HH:mm:ss.sss Z"

Thanks again for the help, that definitely saved me an hour or two of heading in the wrong direction!

You may be able to just append "00" to the timestamp with mutate:

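# Append "00" to the end of the field, turning a trailing "-8" into "-800".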
mutate {
  replace => {
    "timestamp" => "%{timestamp}00"
  }
}

(This obviously assumes that the minute part of the timezone offset always is zero.)

If Joda-Time really requires that leading zero you can e.g. do this:

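# Capture the sign and the hour part of the offset (e.g. "-" and 8) into temporary fields.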
grok {
  match => ["timestamp", "\s(?<tzsign>[+-])%{INT:tzoffset:int}$"]
}
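# Zero-pad single-digit hours so the result becomes e.g. "-0800" rather than "-800".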
if [tzoffset] < 10 {
  mutate {
    replace => {
      "timestamp" => "%{timestamp} %{tzsign}0%{tzoffset}00"
    }
  }
} else {
  mutate {
    replace => {
      "timestamp" => "%{timestamp} %{tzsign}%{tzoffset}00"
    }
  }
}
mutate {
  remove_field => ["tzsign", "tzoffset"]
}

Thank you again Magnus! I really appreciate you taking the time to go over this with me.
This is probably a dumb question, but can I use mutate before I use grok? Again, the time zone is a separate field in my log.
If I can mutate first, I want to use the TIMESTAMP_ISO8601 pattern directly instead of having to write my own. Are there ways I can make the timestamp go from this:

2014-12-14 23:59:40.227 -8
to
2014-12-14 23:59:40.227-800

before calling grok? If I have that form, then I can use ISO8601 to match the timestamp in the date filter as well.

Thank you again.

Can I use mutate before I use grok?

Yes, filters can be applied in any order.
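
For example, in a configuration like the one below the mutate always runs before the grok simply because it is listed first (the field and pattern here are just placeholders):

filter {
  mutate {
    strip => ["message"]
  }
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp}"]
  }
}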

Are there ways I can make the timestamp go from this:

2014-12-14 23:59:40.227 -8
to
2014-12-14 23:59:40.227-800

before calling grok?

That's what my example did.

Thank you Sir :slight_smile:

Hello,

I tried this but had a problem inserting the leading zero and got an error when starting Logstash: "comparison of string with 10 failed".

I had to convert tzoffset to an integer to do the "less than" comparison and convert it back to a string to add it to the timestamp field. Otherwise Elasticsearch threw a "failed to fetch the shards" error and complained that I had a field defined as both a string and an integer.

It looks a bit clumsy but it worked. Is there a better way to do this?

mutate {
  convert => { "tzoffset" => "integer" }
}
if [tzoffset] < 10 {
  mutate {
    convert => { "tzoffset" => "string" }
  }
  mutate {
    replace => {
      "timestamp" => "%{timestamp} %{tzsign}0%{tzoffset}00"
    }
  }
} else {
  mutate {
    convert => { "tzoffset" => "string" }
  }
  mutate {
    replace => {
      "timestamp" => "%{timestamp} %{tzsign}%{tzoffset}00"
    }
  }
}
mutate {
  remove_field => ["tzsign", "tzoffset"]
}

Thank you

I had to convert tzoffset to an integer to do the "less than" comparison

Yes, that's necessary.

and convert it back to a string to add it to the timestamp field.

No, that's not needed.

Otherwise Elasticsearch threw a "failed to fetch the shards" error and complained that I had a field defined as both a string and an integer.

Since you're not being specific I can't really comment, but because you're unconditionally deleting the tzoffset field, converting it back to a string can't be what caused the error message above.
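
In other words, dropping the string conversions should leave you with something like this (same field names as in your configuration; untested sketch):

mutate {
  convert => { "tzoffset" => "integer" }
}
if [tzoffset] < 10 {
  mutate {
    replace => { "timestamp" => "%{timestamp} %{tzsign}0%{tzoffset}00" }
  }
} else {
  mutate {
    replace => { "timestamp" => "%{timestamp} %{tzsign}%{tzoffset}00" }
  }
}
mutate {
  remove_field => ["tzsign", "tzoffset"]
}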

To be more specific, I was writing the log data to a new index after doing the < 10 comparison, adding a leading zero where needed, and then appending the tzoffset field to the end of the timestamp field.

When trying to view this index in Kibi I got an error: Courier fetch 5 of 5 shards failed.

Looking at the index pattern gave me another error, telling me I had a field defined as both a string and an integer.

"Mapping conflict! A field is defined as several types (string, integer, etc) across the indices that match this pattern. You may still be able to use these conflict fields in parts of Kibana, but they will be unavailable for functions that require Kibana to know their type. Correcting this issue will require reindexing your data."

When I deleted the index and added the lines to my filter that convert tzoffset back to a string before appending it to timestamp, I could then view the indexed log data in Kibi.

However, I still saw the mapping conflict error when I looked at the index pattern. I have data in older indices from before I made the timezone adjustments, and the conflicting field is "timestamp", the event generation timestamp field that I use to overwrite the default @timestamp.

I have modified the filter to remove the lines that convert tzoffset back to a string, and I have also removed the "timestamp" field as it is redundant. I can now view the index without getting an error, so yesterday's shard fetch error seems to have been unrelated.

I still get the mapping conflict, though. The conflicting field does not exist in new events, but it does in old data. Unless there is a way to remove it from the index pattern in Kibana/Kibi and clear the mapping conflict, I will just wait for the old data to expire and then refresh the index pattern once the field no longer exists in any index (we are using Curator to delete indices past our retention limit).

Thank you

I have an XML file that I am parsing with grok filters in Logstash.
It has a field ts="1466594115009", which gets parsed as a string; the ts field represents a timestamp.
I need to convert this field for use as @timestamp so that I can plot graphs against 't', which represents response time.
Kindly help me.

I am currently using:

date {
  match => ["ts", "UNIX_MS"]
}

But I am getting an illegal argument exception.

@Sridhar_Yadav_Manoha, please start a new thread for your unrelated question. When you do, please include the full error message with surrounding context and ideally a minimal configuration example that exhibits the problem.