How to map field datatypes dynamically based on input logs in Elasticsearch

Hi Folks,

I might be asking a silly question, but please help me with this.

I have configured Logstash to listen on a specific port with the codec set to "json", because my system logs in JSON format. I am receiving the JSON as expected and pushing it into Elasticsearch according to the JSON fields.

The problem is that all fields are created with the "string" datatype and get analyzed. Because of this, Kibana can't visualize them properly.

My objective is to create non-analyzed fields, and when a field's JSON value is an IP address, to map its datatype as "ip" dynamically in Elasticsearch.

Is there any way to put some configuration in Logstash itself to do this, or any other way to achieve it?

I have done plenty of Googling but couldn't find a solution. Any advice?

Thanks in advance.

Logstash config:

    input {
      tcp {
        port => 5400
        codec => json
      }
    }

    # Note: with codec => json on the input, events arrive already parsed,
    # so this json filter on "message" is effectively a no-op here.
    filter {
      json {
        source => "message"
      }
    }

    output {
      elasticsearch { hosts => "10.10.10.10:9200" }
      stdout { codec => rubydebug }
    }



Sample Logstash stdout:
    {
                       "name" => "DefaultProfile",
                    "version" => "1.0",
              "isoTimeFormat" => "yyyy-MM-dd'T'HH:mm:ss.SSSZ",
                       "type" => "Event",
                   "category" => "98",
                 "protocolID" => "6",
                        "sev" => "1",
                        "src" => "172.17.4.76",
                        "dst" => "10.10.10.11",
                    "srcPort" => "54251",
                    "dstPort" => "1182",
                  "relevance" => "5",
                "credibility" => "5",
             "startTimeEpoch" => "1489501784642",
               "startTimeISO" => "2017-03-14T19:59:44.642+05:30",
           "storageTimeEpoch" => "1489501784642",
             "storageTimeISO" => "2017-03-14T19:59:44.642+05:30",
               "deploymentID" => "0bc51aa8-7700-11e4-b770-ab7c6e9deeba",
               "devTimeEpoch" => "1489474998000",
                 "devTimeISO" => "2017-03-14T12:33:18.000+05:30",
              "srcPreNATPort" => "0",
              "dstPreNATPort" => "0",
             "srcPostNATPort" => "0",
             "dstPostNATPort" => "0",
                "hasIdentity" => "false",
                    "payload" => "<190>id=firewall sn=0006B1221EB8 time=\"2017-03-14 12:33:18\" fw=10.10.10.10 pri=6 c=262144 m=98 msg=\"Connection Opened\" n=154677963 src=172.17.4.76:54251:X0 dst=10.10.10.11:1182:X1 proto=tcp/1182 ",
                   "srcIPLoc" => "other",
                   "dstIPLoc" => "other",
                 "hasOffense" => "false",
                   "domainID" => "17",
                  "eventName" => "Connection Opened.",
           "lowLevelCategory" => "Session Opened",
          "highLevelCategory" => "Access",
           "eventDescription" => "Connection Opened.",
               "protocolName" => "tcp",
                  "logSource" => "SonicWall @ 10.10.10.10",
                 "srcNetName" => "other",
                 "dstNetName" => "other",
              "logSourceType" => "SonicWALL SonicOS",
             "logSourceGroup" => "Customer A",
        "logSourceIdentifier" => "10.10.10.10",
                   "@version" => "1",
                 "@timestamp" => "2017-03-14T14:28:56.360Z",
                       "host" => "1.1.1.1",
                       "port" => 41140
    }

You need to modify the index template that's used. Make a copy of the standard Logstash index template, make the adjustments you need, and then either install the template yourself (disabling Logstash's index template management) or point Logstash at your template.
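For reference, here's a minimal sketch of what such a template could look like, assuming Elasticsearch 2.x (where `not_analyzed` strings apply) and the default `logstash-*` index pattern. Note that Elasticsearch can't detect that a string value happens to be an IP address, so the dynamic template has to match on field names; the `src`/`dst` names below are taken from your sample output, and you'd extend the pattern with whatever other IP fields you have. Order matters: the first matching template wins, so the IP rule comes before the catch-all string rule.

    {
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "dynamic_templates": [
            {
              "ip_fields": {
                "match": "src|dst",
                "match_pattern": "regex",
                "mapping": { "type": "ip" }
              }
            },
            {
              "strings_not_analyzed": {
                "match_mapping_type": "string",
                "mapping": { "type": "string", "index": "not_analyzed" }
              }
            }
          ]
        }
      }
    }

To have Logstash install it for you, point the elasticsearch output at the file via the plugin's template options (the path below is just an example):

    output {
      elasticsearch {
        hosts => "10.10.10.10:9200"
        template => "/etc/logstash/my-template.json"   # hypothetical path
        template_name => "logstash"
        template_overwrite => true
      }
    }

Start from a copy of the template Logstash ships with so you keep the other defaults, and remember that templates only apply to newly created indices, so you'll need to delete or reindex the existing ones.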

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.