Numeric field (long) shown as string in Kibana Discover

Hi! We are installing and testing the ELK stack in a test environment at our company.
I am working on migrating rsyslog logs from Graylog to ELK.
But even though some fields are mapped as long in Elasticsearch, when I go to Kibana Discover they are shown as string.
Here is my Logstash config:

 input {
   tcp {
     port => 5141
     type => syslog
   }
 }

 filter {
   if [type] == "syslog" {
     grok {
       match => { "message" => '<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{NOTSPACE:servicio}\_%{NOTSPACE:apache_tipo_log}: %{GREEDYDATA:syslog_message}' }
     }
     grok {
       match => { "syslog_message" => '%{IPORHOST:clientip} %{HTTPDUSER:apache_httpuser} %{USER:apache_user} \[%{HTTPDATE:timestamp_apache}\] "(?:%{WORD:HTTP_method} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:rawrequest})" %{NUMBER:response_code} (?:%{NUMBER:bytes:int}|-) %{NUMBER:response_time_sec:int}\/%{NUMBER:response_time_us:int}' }
     }
     mutate {
       convert => {
         "bytes" => "integer"
         "response_time_sec" => "integer"
         "response_time_us" => "integer"
       }
     }
   }
 }

 output {
 #  stdout { codec => rubydebug }
   elasticsearch {
     hosts => [ "https://ELASTICSERVER:9200" ]
     user => "USER"
     password => "USERPASS"
     ssl => true
     cacert => "CERLOCATION"
     manage_template => false
     index => "syslog-%{+YYYY.MM.dd}"
   }
 }

At first the data was parsed without any forced type, but I changed it as you can see (about 3 days ago).
I know that if an index has a field mapped as a certain type and you change the data to be indexed as another type, you have to wait until a new index is created, or delete the current index and recreate it. With that said, my config creates a new index each day.
So, even though the data is now being indexed as long, I still see the field in Discover as string type.
This is the JSON of the last created index, as seen from Kibana config -> Elasticsearch -> Index Management -> Mappings:

         "response_time_sec": {
           "type": "long"
         "response_time_us": {
           "type": "long"

What am I doing wrong?

Does anyone have any idea about this?

If you are saying that you have rolled over to a new index, and the index mapping shows the fields as "long", but Kibana still treats them as strings, then try doing an index refresh in the index pattern management page in Kibana.

That did the trick! Thanks! I now see it as a number both in the index config inside Kibana and in the Discover tab.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.