Sort by date in Kibana in the Visualize option

Hello,
I am trying to create a visualization that is sorted by date. I used the date filter in Logstash, but I cannot get ascending order in the Kibana visualization.

Logstash

input {
  beats {
    port => 5064
  }
}

filter {
  if [App] == "Deployment" {
    grok {
      patterns_dir => "/opt/elk/applications/logstash/config/patterns"
      match => [ "message", "^%{WORD}\:%{DATA:Domain}%{SPACE}%{WORD}\:%{HOSTNAME:Hostname}%{SPACE}%{WORD}\:%{GREEDYDATA:Deploy}%{SPACE}Data\:%{GREEDYDATA:Data}" ]
    }

    # parse the extracted "Data" string and write the parsed timestamp back to the same field
    date {
      match => [ "Data", "dd/MM/yy HH-mm" ]
      locale => "en-US"
      timezone => "Brazil/East"
      target => "Data"
    }

    mutate {
      remove_field => [ "wls_timezone", "message", "wls_host", "wls_rawtime", "wls_diagcontid", "tags", "host.name", "source" ]
      remove_tag => [ "beats_input_codec_plain_applied", "audit", "_dateparsefailure", "wls_audit_8_out", "_grokparsefailure", "source", "host.name" ]
    }
  }
## END PIPELINE
}

output {
  elasticsearch {
    hosts => ["elk1:9200", "elk2:9200"]
    index => "wl_audit_deployment_teste-%{+YYYY.MM.dd}"
  }
}

The date field does not sort in increasing or decreasing order; for example, the date 10/12/19 16:45 appears in the middle of the list.

It looks like you need to adjust the mappings for the destination index to make Elasticsearch understand your date as a date rather than a keyword field.
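
You can verify how the field is currently mapped with the get field mapping API. As a quick check (I'm assuming your parsed date ends up in the Data field, as in your Logstash config):

# run this in Kibana Dev Tools; field name assumed from your config
GET wl_audit_deployment_teste-*/_mapping/field/Data

If it comes back as "type": "keyword" (or "text"), Kibana cannot sort it chronologically.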

Since you're using a dynamic index pattern, I'd recommend setting up an index template that will automatically apply to any new index that is created and matches the pattern.

You can read more about index templates in the Elasticsearch documentation:

  • If you're using Elasticsearch 7.8 or later, refer to the new composable index templates
  • Otherwise, you'll want to refer to the legacy index templates

On both of these pages the first example has a date field that you can refer to.
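
As a rough sketch, a minimal composable index template (7.8+) for your index pattern could look like the following. The template name is made up, and I'm assuming the parsed date lives in the Data field as in your Logstash config:

# hypothetical template name; adjust to your naming scheme
PUT _index_template/wl_audit_deployment
{
  "index_patterns": ["wl_audit_deployment_teste-*"],
  "template": {
    "mappings": {
      "properties": {
        "Data": { "type": "date" }
      }
    }
  }
}

With that in place, every new daily index whose name matches the pattern will map Data as a date, and Kibana will be able to sort on it.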

Hi, thank you very much for your reply.
About your answer: I would change the index that is already created, right? Wouldn't I have to put this in Logstash with a mutate filter or something?

Elasticsearch does not support changing the mapping for a field on an index that already exists. What you'll need to do is create a new index with the updated mappings and then reindex your old data into that one. (hint: there is a Reindex API that makes this simpler). You shouldn't need to change your Logstash configuration at all.
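
A minimal sketch of that flow, assuming the template above is already in place (the index names here are placeholders; use your real ones):

# the destination index doesn't exist yet, so it is created on the fly
# and picks up the date mapping from the template, because its name
# also matches wl_audit_deployment_teste-*
POST _reindex
{
  "source": { "index": "wl_audit_deployment_teste-2019.12.10" },
  "dest":   { "index": "wl_audit_deployment_teste-2019.12.10-fixed" }
}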
