Logstash mapping wrong field types

Hi,
I'm new to Logstash and the ELK stack, so apologies in advance if this question sounds obvious.

Problem: I have fields that are detected as strings by Kibana even though the template.json file explicitly maps them as long.

Details of my setup:
I'm trying to push custom logs from HAProxy into the Security Onion ELK stack. With the default setup, everything works: my logs get picked up by Logstash and displayed by Kibana. Then I decided to create custom conf files and a template to make the data aggregatable and get better visualizations.
I have a custom log that I split with grok to extract values. These then go through an output block into Elasticsearch via a custom template.
My Kibana instance detects all the fields split by the grok filter, but the int fields get detected as strings. This is causing mapping conflicts and issues when trying to visualize the fields.
So far I have tried:

  1. Refreshing the index pattern from Kibana > Management.
  2. Re-indexing the indices (see the _reindex sketch below).

Any suggestions to troubleshoot this problem would be very much appreciated.
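For reference, the re-indexing was done with the _reindex API from Kibana > Dev Tools, along these lines (the index names here are placeholders, not my actual ones):

  POST _reindex
  {
    "source": { "index": "logstash-haproxy-2020.01.01" },
    "dest":   { "index": "logstash-haproxy-2020.01.01-reindexed" }
  }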

Thanks in advance
JC

My grok pattern, e.g.:

 %{NUMBER:hap_actconn:INT}/%{NUMBER:hap_feconn:INT}/%{NUMBER:hap_beconn:INT}/%{NUMBER:hap_srvconn:INT}/%{NUMBER:hap_retries:INT}
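For context, this pattern sits inside a grok filter roughly like the following (only the connection-count section of the full HAProxy match is shown here):

  filter {
    grok {
      match => {
        "message" => "%{NUMBER:hap_actconn:INT}/%{NUMBER:hap_feconn:INT}/%{NUMBER:hap_beconn:INT}/%{NUMBER:hap_srvconn:INT}/%{NUMBER:hap_retries:INT}"
      }
    }
  }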

When Logstash outputs to stdout (for debugging), the fields look like this:

   "hap_feconn" => "6",
   "hap_actconn" => "6",
   "syslog-sourceip" => "172.17.XXX.YYY",
   "hap_retries" => "0",

Output configuration:

output {
  if "HAProxy" in [tags] and "test_data" not in [tags] and "import" not in [tags] {
   # stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["elasticsearch"]
      index => "logstash-haproxy-%{+YYYY.MM.dd}"
      template_name => "logstash-haproxy"
      template => "/logstash-haproxy-template.json"
      template_overwrite => true
    }
  }
}
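One quick way to confirm the template actually made it into Elasticsearch is to fetch it from Kibana > Dev Tools (a standard API call, nothing specific to this setup):

  GET _template/logstash-haproxy

If that returns a 404, Logstash never installed the template, and the mapping is coming from somewhere else, such as dynamic mapping or another matching template (e.g. the stock logstash-* one).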

Field mappings in logstash-haproxy-template.json:

{
  "template":"logstash-haproxy",
  "index_patterns": ["logstash-haproxy-*"],
  "version":50001,
  "order" : 0,
  "settings":{
    "number_of_replicas":0,
    "number_of_shards":1,
    "index.refresh_interval":"30s"
  },
  "mappings":{
    "doc":{
       "dynamic": false,
       "date_detection": false,
       "properties":{
        "@timestamp":{
          "type":"date"
        },
        "@version":{
          "type":"keyword"
        },
        "hap_actconn":{
          "type":"long"
        },
        "hap_backend_name":{
          "type":"text",
          "fields":{
            "keyword":{
              "type":"keyword"
            }
          }
        },
        "hap_backend_queue":{
          "type":"long"
        },
        "hap_beconn":{
          "type":"long"
        },
        "hap_bytes_read":{
          "type":"long"
        },
        "hap_captured_request_cookie":{
          "type":"text"
        },
        .
        .
        "http_version":{
          "type":"long"
        }
      }
    }
  }
}
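Worth noting: index templates are only applied when an index is created, so existing indices keep whatever mapping they already had. A throwaway index matching the pattern can be used to check whether the template takes effect (hypothetical index name):

  PUT logstash-haproxy-template-test
  GET logstash-haproxy-template-test/_mapping/field/hap_actconn
  DELETE logstash-haproxy-template-test

If hap_actconn comes back as long there, the template itself is fine and the problem lies with the existing indices or with the data being sent.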

When querying hap_actconn via Kibana > Dev Tools:
GET /logstash-*/_mapping/field/hap_actconn

"logstash-haproxy-2020.mm.dd" : {
    "mappings" : {
      "doc" : {
        "hap_actconn" : {
          "full_name" : "hap_actconn",
          "mapping" : {
            "hap_actconn" : {
              "type" : "text",
              "fields" : {
                "keyword" : {
                  "type" : "keyword",
                  "ignore_above" : 256
                }
              }
            }
          }
        }
      }
    }
  },

I am not really an Elasticsearch person, I just do Logstash, but that appears to set the type of [@version][hap_actconn], not [hap_actconn].

@Badger,

Thank you for looking into this.
That was a copy-paste error while I was creating the post. I have corrected it now. Thank you for spotting it.

"@timestamp":{
          "type":"date"
        },
        "@version":{
          "type":"keyword"
        },
        "hap_actconn":{
          "type":"long"
        },

JC

Managed to solve the problem, though I'm not sure exactly what was wrong.

All I had to do was change the :INT in the grok pattern to :int. I also had to delete the old indices. Presumably grok only accepts the lowercase int and float type suffixes, so :INT was silently ignored and the fields stayed strings, which matches the quoted values in the rubydebug output above.

Now my grok pattern looks like this:
%{NUMBER:hap_actconn:int}/%{NUMBER:hap_feconn:int}/%{NUMBER:hap_beconn:int}/%{NUMBER:hap_srvconn:int}/%{NUMBER:hap_retries:int}
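For what it's worth, the same conversion can also be done with a mutate filter instead of the inline grok type suffixes, which some people find easier to maintain (same field names as above):

  filter {
    mutate {
      convert => {
        "hap_actconn" => "integer"
        "hap_feconn"  => "integer"
        "hap_beconn"  => "integer"
        "hap_srvconn" => "integer"
        "hap_retries" => "integer"
      }
    }
  }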

But I'm having a similar problem with my IP fields. I guess the problem was there earlier, but I didn't notice it in between the int fields!
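In case it helps anyone hitting the same thing: for IP fields the fix is presumably on the template side rather than in grok, since Elasticsearch has a dedicated ip field type and grok has no IP cast. The mapping entry would look something like this (field name taken from the stdout sample above):

  "syslog-sourceip": {
    "type": "ip"
  }

As with the int fields, the old indices would presumably need to be deleted or re-indexed before the new mapping applies.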

If anyone has any suggestions, any help with this would be greatly appreciated.

JC
