How to map a date field in a Logstash config file?

Hi All,

I am trying to map a date field into Elasticsearch so that I can filter the data with the "Time picker" in a Kibana dashboard. My CSV file's date format is mm/dd/yy, in a field named "Service Begin Date". So in my config file I am trying to convert the date field to the ELK standard, as below:

        date {
                match => ["Service Begin Date", "MM/dd/yyyy"]
                target => "Service Begin Date"
        }

But I am not able to filter the data using the "Time Picker". I have gone through the Elasticsearch documentation on date detection
(https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-field-mapping.html#date-detection). I am attaching the data file to this post; please find the config file below.

Thanks in advance!

  1. My config file:
    input {
            file {
                    path => "/home/snayak/ccfl-data/all_csv_forIndexing/*.csv"
                    start_position => "beginning"
                    ignore_older => 3600
                    sincedb_path => "/dev/null"
            }
    }

    filter {
            csv {
                    separator => ","
                    columns => ["Individual ID", "Individual County", "Age", "Sex", "Race", "Ethnicity", "Intpreter Needed", "Oral Language", "Written Language", "Program Code", "Program Name", "Provider ID", "Provider Name", "Provider County", "Service Code", "Service Name", "Service Begin Date", "Service End Date", "Frequency", "Units", "Rate", "Amount", "Payment Date", "Fund Code"]
            }
            mutate {
                    convert => {
                            "Individual ID" => "integer"
                            "Age" => "integer"
                            "Program Code" => "integer"
                            "Provider ID" => "integer"
                            "Service Code" => "integer"
                            "Units" => "integer"
                            "Rate" => "integer"
                            "Amount" => "integer"
                    }
            }


            date {
                    match => ["Service Begin Date", "MM/dd/yyyy"]
                    target => "Service Begin Date"
            }
            date {
                    match => ["Service End Date", "MM/dd/yyyy"]
                    target => "Service End Date"

            }
            date {
                    match => ["Payment Date", "MM/dd/yyyy"]
                    target => "Payment Date"
            }

    }

    output {
            elasticsearch {
                    hosts => "http://localhost:9200"
                    index => "nfocus"
            }
            stdout {}
    }

  2. Data file: (attached as screenshots in the original post)

There are likely several things at work here; I'll try to address each to get you headed in the right direction. I'm an engineer on the Logstash team, so my answer may be a little fuzzy around Kibana/UI.

Kibana Time Picker

From what I understand, the Kibana Time Picker can be associated with a single date field for indexes matching the view's index pattern. That field must have the date datatype, and should be something that is intrinsic to the timing of each document.

The time filter restricts the search results to a specific time period. You can set a time filter if your index contains time-based events and a time-field is configured for the selected index pattern.
-- Setting the Time Filter
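One thing worth noting: when the date filter's target is omitted, the parsed value is written to @timestamp, which is the field Kibana index patterns conventionally use as the time field. A minimal sketch (the M/d/yy format here is an assumption; see the format discussion below):

        date {
                match => ["Service Begin Date", "M/d/yy"]
                # no target: the parsed date lands in @timestamp,
                # the usual Kibana time-picker field
        }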

You can use date filters on any date field by following the instructions for manual filters:

  1. Click Add Filter. A popup will be displayed for you to create the filter.
  2. Choose a field to filter by. This list of fields will include fields from the index pattern you are currently querying against.
  3. Choose an operation for your filter. [e.g. is between]
  4. Choose the value(s) for your filter. Values from your indices may be suggested as selections if you are filtering against an aggregatable field.

Logstash Date Format Parsing

The format string given to the Date Filter Plugin's match directive doesn't appear to line up with the date formats of the relevant columns in the attached screenshots. Since each date filter is configured to replace the source value with the parsed result, it is likely that one or more of the filters is failing to parse the input, tagging the event with _dateparsefailure, and leaving the field unmodified.

The date format of those columns, as represented in the UI of whatever tool you're using, is M/d/yy (see the Logstash Date Filter docs):

M: minimal-digit month. Example: 1 for January and 12 for December.

d: minimal-digit day. Example: 1 for the 1st of the month.

yy: two-digit year. Example: 15 for the year 2015.

It is possible that the UI you are using parses the source file's dates in one format and presents them in another, so you'll need to open the CSVs in a plaintext tool like Notepad (Windows), Gedit (Linux), or TextEdit (Mac) to inspect the actual data and determine the true format of those columns.
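If inspection confirms a two-digit-year format, the match pattern just needs to line up with it. A sketch, assuming the raw values really are M/d/yy (match also accepts multiple candidate formats, tried in order, which is handy while you verify):

        date {
                match => ["Service Begin Date", "M/d/yy", "MM/dd/yyyy"]
                target => "Service Begin Date"
        }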


Elasticsearch Field Type

It is likely that the field was created on the index as text, and that additional documents added to the index are being coerced to a text representation to fit the existing field. If that is the case you will need to create a new index; you can inspect the existing mapping using the GET Mapping API.
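For example, assuming the index name nfocus from your config (field names containing spaces must be URL-encoded in the request path):

        GET /nfocus/_mapping/field/Service%20Begin%20Date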

When Elasticsearch is instructed to add a document to an index that has no mapping for one of the document's fields, it attempts to auto-detect the field type from the value on the document and creates the field on the index using the detected type. Field types are immutable once added to an index, so it's important to get them right the first time. To tell an index about a field before inserting any documents, you can use the PUT Mapping API; and since Elasticsearch will also auto-create an index when instructed to add a document to one that doesn't yet exist, you can use the Put Template API to register a template that will be applied during index creation.
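A sketch of such a template, assuming Elasticsearch 7.x (earlier versions nest the mappings under a type name) and the index name from your config:

        PUT _template/nfocus
        {
            "index_patterns": ["nfocus*"],
            "mappings": {
                "properties": {
                    "Service Begin Date": { "type": "date" },
                    "Service End Date": { "type": "date" },
                    "Payment Date": { "type": "date" }
                }
            }
        }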

The Logstash Elasticsearch Output plugin can manage an index template for you via its template and template_name directives. Since this is a template, it will affect new indices matching the given pattern as they are created by Elasticsearch, but it will not modify existing indices. You should not need to specify a format for the date fields in the mapping; the default date_optional_time format will reliably parse the fields as Logstash outputs them.
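For instance, a sketch of the output block (the template path is hypothetical and would point at a JSON file like the one above):

        elasticsearch {
                hosts => "http://localhost:9200"
                index => "nfocus"
                template => "/path/to/nfocus-template.json"   # hypothetical path
                template_name => "nfocus"
                template_overwrite => true
        }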
