I am using a grok filter to parse logs. It contains a "time" field for which I am using TIMESTAMP_ISO8601 as the grok pattern.
By default Logstash should send this "time" field as a string, but it is sending it as a date.
I want Logstash to send this "time" field as a string.
The "time" field has the following format: 2017-11-08T12:27:21.000Z
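For reference, this is roughly what my filter looks like (simplified; the field and pattern names are just from my own config):

```
filter {
  grok {
    # extract the ISO8601 timestamp into a field named "time"
    match => { "message" => "%{TIMESTAMP_ISO8601:time}" }
  }
}
```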
It's not clear what you mean. When Logstash sends events to Elasticsearch (if that's what you're talking about), they're sent as JSON. There is no data type for timestamps in JSON, so timestamps can only be represented as strings or numbers.
Instead of describing the situation, try giving a concrete example.
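For example, on the wire an event with your field looks roughly like this (just a sketch, not your actual event):

```
{
  "message": "some log line",
  "time": "2017-11-08T12:27:21.000Z"
}
```

At that level the timestamp is already just a string.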
That's because Elasticsearch autodetects the field as a date field based on what the string looks like. If you don't want that, you can use an index template to force the time field to be a string.
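A minimal sketch of such a template, assuming your indexes match logstash-* and that mapping the field as keyword is acceptable (the template name here is made up, and the exact syntax depends on your Elasticsearch version; this is roughly what it looks like on 5.x):

```
PUT _template/time_as_string
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "time": { "type": "keyword" }
      }
    }
  }
}
```

Keep in mind that a template only takes effect for indexes created after it's installed.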
If you disable dynamic mapping, you have to define all fields up front, before you index any documents. So yes, it solves your problem, but it does a lot more too.
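To illustrate (again just a sketch, with made-up names):

```
PUT _template/strict_example
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "dynamic": "strict",
      "properties": {
        "time":    { "type": "keyword" },
        "message": { "type": "text" }
      }
    }
  }
}
```

With "dynamic": "strict", every field has to be listed under properties or indexing the document fails; with "dynamic": false, unknown fields end up in _source but aren't indexed or searchable. Either way it's a much bigger change than just pinning the type of one field.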
I don't think you quite understood what I wrote, and I don't know how to explain it differently without just repeating myself. I suggest you try things out yourself and discover what works and what doesn't.