Mapper_parsing_exception error on Logstash

Hello all,

I have a machine running the ELK stack (v6.3) and several other machines running Metricbeat (v6.3). Metricbeat ships its metrics to Logstash, which in turn forwards the events to Elasticsearch. I believe the problem lies in the Logstash/Elasticsearch connection. Here is my output:

Metricbeat Output

2018-06-29T10:51:16.399-0600        INFO        [monitoring]        log/log.go:124        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3400,"time":{"ms":136}},"total":{"ticks":9740,"time":{"ms":369},"value":9740},"user":{"ticks":6340,"time":{"ms":233}}},"info":{"ephemeral_id":"83012df1-8afb-440b-ba1c-680aa2e791b7","uptime":{"ms":810444}},"memstats":{"gc_next":7066368,"memory_alloc":3704256,"memory_total":1062082824,"rss":57344}},"libbeat":{"config":{"module":{"running":3}},"output":{"events":{"acked":51,"batches":3,"total":51},"read":{"bytes":18},"write":{"bytes":7640}},"pipeline":{"clients":6,"events":{"active":0,"published":51,"total":51},"queue":{"acked":51}}},"metricbeat":{"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":2,"success":2},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":9,"success":9},"process":{"events":24,"success":24},"process_summary":{"events":3,"success":3},"uptime":{"events":3,"success":3}}},"system":{"load":{"1":0.05,"15":0.2,"5":0.24,"norm":{"1":0.05,"15":0.2,"5":0.24}}}}}}
2018-06-29T10:51:46.398-0600        INFO        [monitoring]        log/log.go:124        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3550,"time":{"ms":148}},"total":{"ticks":10110,"time":{"ms":361},"value":10110},"user":{"ticks":6560,"time":{"ms":213}}},"info":{"ephemeral_id":"83012df1-8afb-440b-ba1c-680aa2e791b7","uptime":{"ms":840444}},"memstats":{"gc_next":4712976,"memory_alloc":2363440,"memory_total":1102560616,"rss":12288}},"libbeat":{"config":{"module":{"running":3}},"output":{"events":{"acked":48,"batches":3,"total":48},"read":{"bytes":18},"write":{"bytes":7212}},"pipeline":{"clients":6,"events":{"active":0,"published":48,"total":48},"queue":{"acked":48}}},"metricbeat":{"system":{"cpu":{"events":3,"success":3},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":9,"success":9},"process":{"events":24,"success":24},"process_summary":{"events":3,"success":3},"uptime":{"events":3,"success":3}}},"system":{"load":{"1":0.11,"15":0.2,"5":0.23,"norm":{"1":0.11,"15":0.2,"5":0.23}}}}}}
2018-06-29T10:52:16.402-0600        INFO        [monitoring]        log/log.go:124        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3650,"time":{"ms":103}},"total":{"ticks":10450,"time":{"ms":344},"value":10450},"user":{"ticks":6800,"time":{"ms":241}}},"info":{"ephemeral_id":"83012df1-8afb-440b-ba1c-680aa2e791b7","uptime":{"ms":870444}},"memstats":{"gc_next":4515632,"memory_alloc":3811456,"memory_total":1143106640,"rss":45056}},"libbeat":{"config":{"module":{"running":3}},"output":{"events":{"acked":51,"batches":3,"total":51},"read":{"bytes":24},"write":{"bytes":7475}},"pipeline":{"clients":6,"events":{"active":0,"published":51,"total":51},"queue":{"acked":51}}},"metricbeat":{"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":2,"success":2},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":9,"success":9},"process":{"events":24,"success":24},"process_summary":{"events":3,"success":3},"uptime":{"events":3,"success":3}}},"system":{"load":{"1":0.13,"15":0.19,"5":0.23,"norm":{"1":0.13,"15":0.19,"5":0.23}}}}}}

Logstash Output

[2018-06-29T10:46:50,514][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x35edcf7f>], :response=>{"index"=>{"_index"=>"metricbeat", "_type"=>"doc", "_id"=>"nl5wTGQB2Z650IphrkNS", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:149"}}}}}
[2018-06-29T10:46:50,514][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x83cb494>], :response=>{"index"=>{"_index"=>"metricbeat", "_type"=>"doc", "_id"=>"oV5wTGQB2Z650IphrkNS", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:149"}}}}}
[2018-06-29T10:46:50,514][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x26fcb2e9>], :response=>{"index"=>{"_index"=>"metricbeat", "_type"=>"doc", "_id"=>"ol5wTGQB2Z650IphrkNS", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:149"}}}}}

I have created a metricbeat index in Kibana. I can see that I'm getting a mapper_parsing_exception, but I'm confused as to why. My understanding was that mapping is dynamic by default, and I haven't created any templates for ELK to use.
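Looking at the "Can't get text on a START_OBJECT" part of the error, my suspicion is that the metricbeat index already has host mapped as a plain text/keyword field (dynamic mapping locks in a field's type from the first document that contains it), while the events from Metricbeat 6.3 carry host as an object, which the existing mapping can no longer accept. Assuming the index really is just called metricbeat, as in the Logstash output above, the current mapping for that field can be checked with:

GET /metricbeat/_mapping/doc/field/host

If that comes back as text or keyword, dynamic mapping on its own won't change the field to an object on the existing index.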

Regardless, I tried to create a template to resolve the issue (I then deleted my old metricbeat index and created a new one):

PUT /_template/metricbeat*
{
    "index_patterns" : ["metricbeat*"],
    "mappings" : {
        "host" : {
            "dynamic" : "true"
        }
    }
}

This did not work either. Besides, even if it had, a solution along these lines isn't really what I'm after, since I'd like to keep dynamic mapping across the board.
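For completeness, if I were to go the template route again, my understanding (this is just my guess at a minimal 6.x template, not something I have verified) is that the wildcard belongs in index_patterns rather than in the template name, the mapping has to be nested under the document type (doc, per the Logstash output above), and host would have to be declared as an object. The template also only applies to indices created after it is stored, so the metricbeat index would have to be deleted again first. Roughly:

DELETE /metricbeat

PUT /_template/metricbeat
{
    "index_patterns" : ["metricbeat*"],
    "mappings" : {
        "doc" : {
            "dynamic" : true,
            "properties" : {
                "host" : {
                    "type" : "object",
                    "dynamic" : true
                }
            }
        }
    }
}

Even then, that only special-cases one field, so I'd still prefer to understand why the dynamically created mapping rejects these documents in the first place.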

What error message do you get in the Elasticsearch log when this occurs?
