Incorrect mapping: fields do not match the grok filter defined in the .conf file

Hey folks,

I have the following .conf:

input {
  beats {
    port => 5044
  }
}

filter {
  # Parse the line into an ISO8601 timestamp and the rest of the message
  grok {
    match => [
      "message", "%{TIMESTAMP_ISO8601:timestamp_string}%{SPACE}%{GREEDYDATA:line}"
    ]
  }

  # Use the parsed timestamp as the event's @timestamp
  date {
    match => ["timestamp_string", "ISO8601"]
  }

  # Drop the fields that are no longer needed (array values must be quoted strings)
  mutate {
    remove_field => ["message", "timestamp_string"]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
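
As a side note, the grok and date stages can be checked in isolation with a stdin/stdout pipeline before Filebeat and Elasticsearch enter the picture; a minimal sketch (the file name test-grok.conf is only illustrative):

# test-grok.conf -- verify the grok/date stages on their own
input {
  stdin {}
}

filter {
  grok {
    match => [
      "message", "%{TIMESTAMP_ISO8601:timestamp_string}%{SPACE}%{GREEDYDATA:line}"
    ]
  }
  date {
    match => ["timestamp_string", "ISO8601"]
  }
}

output {
  # rubydebug prints every field of the event, so stray fields are easy to spot
  stdout { codec => rubydebug }
}

Running bin/logstash -f test-grok.conf and pasting a line such as 2008-09-15T15:53:00 some text shows exactly which fields the filter produces.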

Output of

GET /filebeat-7.1.1-2008.09.15/_search
{
  "query": {
    "match_all": {}
  }
}

is at this link: http://ge.tt/9lhWdfw2 (file name: Logstash_output.txt)

/root/logstash-7.1.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/elasticsearch-template-es7x.json
{
  "index_patterns" : "logstash-*",
  "version" : 60001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "number_of_shards": 1
  },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : false },
      "dynamic_templates" : [ {
        "message_field" : {
          "path_match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text",
            "norms" : false
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text", "norms" : false,
            "fields" : {
              "keyword" : { "type": "keyword", "ignore_above": 256 }
            }
          }
        }
      } ],
      "properties" : {
        "@timestamp": { "type": "date" },
        "@version": { "type": "keyword" },
        "geoip" : {
          "dynamic": true,
          "properties" : {
            "ip": { "type": "ip" },
            "location" : { "type" : "geo_point" },
            "latitude" : { "type" : "half_float" },
            "longitude" : { "type" : "half_float" }
          }
        }
      }
    }
  }
}
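
For what it's worth, this bundled template only has index_patterns of logstash-*, so it would not even apply to the filebeat-* index written by the pipeline above. If the goal is to have Logstash install a template of your own instead of this default, the elasticsearch output accepts template settings; a minimal sketch, where the path and name are only illustrative:

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
    # Install a custom template instead of the bundled default
    template => "/etc/logstash/templates/my-filebeat-template.json"   # illustrative path
    template_name => "my-filebeat"                                    # illustrative name
    template_overwrite => true
  }
}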

Did you have a question?


The problem is that the mapping/parsing is not done according to the format defined in the Logstash .conf file. When I look at the index pattern, it is extracting more than 50 fields that I did not define; they come from some default template it is using. I have tried running Logstash with stdin/stdout on the CLI and it works great, but when the input comes from Filebeat and the destination is Elasticsearch, the mapping is mixed up. Kindly assist.

This is an issue with your Filebeat configuration. For example, the [host] object is added by the host metadata processor (add_host_metadata).

If you are unable to figure out which processors are adding which fields, you should ask a question in the Filebeat forum, including an example of a document from Elasticsearch.
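
If those Beats-added objects are simply not wanted in the index, one option is to drop them in the Logstash filter; this is only a sketch, and the exact field names to remove depend on which processors the Filebeat configuration enables:

filter {
  # Drop metadata objects added on the Filebeat side; adjust the list to
  # whatever actually shows up in the rubydebug output
  mutate {
    remove_field => ["host", "agent", "ecs", "input", "log"]
  }
}

The alternative, as suggested above, is to disable or adjust the processors on the Filebeat side so those fields are never sent in the first place.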


I tried removing host.metadata, and the extra fields dropped from 51 down to 34. I have opened a new topic here.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.