Could not index event to Elasticsearch

Hello, I am running Logstash with 11 pipelines for 9 different log types. When I send those logs from Filebeat to Logstash and tail the Logstash logs, I get the error below. Nonetheless, Elasticsearch and Kibana receive well-structured logs continuously. How can I get rid of this error?

[2021-09-15T13:09:39,436][WARN ][logstash.outputs.elasticsearch][cppdevices][62492695d933728f438151c7ae27d1c22f6d509536c592174d0b10ab262b3e15] 
Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"69399857", :_index=>"soleil", :routing=>nil}, {"ndc"=>{},
"device"=>{"domain"=>"i06-m-cx", "member"=>"falconx1.1", "family"=>"dt", "name"=>"i06-m-cx/dt/falconx1.1"}, "@timestamp"=>2021-03-26T07:36:50.336Z,
"@version"=>"1", "host"=>{"name"=>"localhost.localdomain"}, "thread"=>"5680", "input"=>{"type"=>"log"}, "log"=>{"flags"=>["multiline"], "offset"=>72678,
"file"=>{"path"=>"/vagrant/assembly/ds_RecordingManager/cppdevices/cpplogs-20210326-22h46/i06-m-cx_dt_falconx1.1.log.1"}},
"agent"=>{"ephemeral_id"=>"3c90d315-7268-46bc-9ee2-05c1c3a8b1ee", "id"=>"5f61175d-e407-49d3-b9a3-3efa77d25554", "hostname"=>"localhost.localdomain",
"type"=>"filebeat", "version"=>"7.14.0", "name"=>"localhost.localdomain"}, "ecs"=>{"version"=>"1.10.0"}, "event"=>{}, "tags"=>["cppdevices",
"beats_input_codec_plain_applied"], "message"=>{}, "level"=>"INFO"}], :response=>{"index"=>{"_index"=>"soleil", "_type"=>"_doc", "_id"=>"69399857",
"status"=>400, "error"=>{"type"=>"mapper_parsing_exception", 
"reason"=>"failed to parse field [message] of type [text] in document with id '69399857'. Preview of field's value: '{}'",
"caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on 
a START_OBJECT at 1:723"}}}}}

Welcome to our community! :smiley:
FYI it's Elasticsearch, the s is not camel cased.

Take a look at Can't get text on a START_OBJECT and see if there are any helpful hints.
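Going by the stack trace, the rejected event's `message` field is an empty object (`{}`) while the index maps `message` as `text`, which is what triggers the mapper_parsing_exception. If one of your filters is leaving `message` as an empty object, you could remove it before the output. A rough, untested sketch using a ruby filter (adjust to wherever this happens in your pipeline):

```
filter {
  # Hypothetical workaround: if an earlier filter leaves [message]
  # as an empty object instead of a string, drop the field so the
  # text mapping in Elasticsearch does not reject the event with a 400.
  ruby {
    code => "
      m = event.get('message')
      event.remove('message') if m.is_a?(Hash) && m.empty?
    "
  }
}
```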


Hello Mark, thanks for the welcome message :smiley:. I checked the link you sent and read it carefully to try to solve the problem I am encountering. I have 11 pipelines, 9 of which parse and structure the different log types. Previously I structured each log type on its own; now I have reassembled my logs using the distributor pattern, with a dispatching pipeline that routes each log to its respective pipeline, a fallback pipeline, and the 9 pipelines that process those logs. Why did structuring work fine, without any errors, when I was dealing with one log type at a time, yet now that I assemble everything I get this error? Why wasn't there a mapping problem before? On the other hand, if I do define a mapping, should the mapping file be common to all log types, or unique to each log type, given that I have only one index? Should I instead create an index for every log type and define a mapping for each one?
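To make the question concrete, one option I am weighing is routing each log type to its own index from the elasticsearch output, so every index can carry its own mapping. A sketch of what I mean (the `[log_type]` field and index names are placeholders for whatever the dispatching pipeline would set):

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # One index per log type; assumes each pipeline tags the event
    # with a hypothetical [log_type] field such as "cppdevices".
    index => "%{[log_type]}-%{+YYYY.MM.dd}"
  }
}
```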

Hello, after reading the articles, mapping should have solved my issue, but even after creating these JSON mapping files and explicitly assigning a type to each field, I still receive the error "Could not index event to Elasticsearch":

[2021-09-21T07:59:02,134][WARN ][logstash.outputs.elasticsearch][cppdevices][ddfca0e0830cc67ad422a07327a2b66558e87a53962e37ee60acb52071b7c8d0] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"130668566", :_index=>"cppdevices", :routing=>nil}, {"input"=>{"type"=>"log"}, "@version"=>"1", "log"=>{"file"=>{"path"=>"/vagrant/assembly/ds_RecordingManager/cppdevices/cpplogs-20210326-14h46/i06-m-cx_dt_falconx1.1.log"}, "offset"=>6053, "flags"=>["multiline"]}, "thread"=>"9524", "@timestamp"=>2021-03-26T09:46:17.627Z, "ndc"=>{}, "agent"=>{"name"=>"localhost.localdomain", "ephemeral_id"=>"81d0e76c-afb1-48a3-b985-7c0e68680b31", "hostname"=>"localhost.localdomain", "version"=>"7.14.0", "type"=>"filebeat", "id"=>"5f61175d-e407-49d3-b9a3-3efa77d25554"}, "event"=>{}, "ecs"=>{"version"=>"1.10.0"}, "device"=>{"name"=>"i06-m-cx/dt/falconx1.1", "domain"=>"i06-m-cx", "family"=>"dt", "member"=>"falconx1.1"}, "host"=>{"name"=>"localhost.localdomain"}, "level"=>"INFO", "tags"=>["cppdevices", "beats_input_codec_plain_applied"], "message"=>{}}], :response=>{"index"=>{"_index"=>"cppdevices", "_type"=>"_doc", "_id"=>"130668566", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [message] of type [text] in document with id '130668566'. Preview of field's value: '{}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get 
text on a START_OBJECT at 1:735"}}}}}

The logs reach Kibana structured, so everything seems to work fine. Nonetheless, this status 400 error keeps popping up. How can I get rid of it? Is there an error I'm not paying attention to? I appreciate the help.
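One thing I may try in the meantime is enabling Logstash's dead letter queue, so events rejected with a 400 are kept on disk for inspection instead of being dropped. A sketch (the path is a placeholder for my actual data directory):

```
# logstash.yml
dead_letter_queue.enable: true

# A separate pipeline could then replay or inspect the failed events:
# input {
#   dead_letter_queue {
#     path => "/path/to/data/dead_letter_queue"
#   }
# }
```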