Prevent collision of field types of structured logs

I'm collecting logs with Filebeat, sending them to Logstash and from there to Elasticsearch. Since the logs are structured, I'm using the json filter in Logstash to parse the message.

Let's say I have two messages/logs in JSON format:

message => '{"foo": "bar"}'

and

message => '{"foo.addr": "127.0.0.1"}'

And my filter is

filter {
    json {
            source => "message"
            target => "mylogs"
            skip_on_invalid_json => true
    }
}

I get the following error:

Could not dynamically add mapping for field [foo.addr]. Existing mapping for [mylogs.foo] must be of type object but found [text].

I need a solution where I don't need to know the field names of the log message in advance. I thought of the de_dot filter to get something like mylogs.foo and mylogs.foo_addr, but Logstash's de_dot filter requires a concrete field name to work with:

Sub-fields must be manually specified in the array.

How can I prevent this type collision for any field that might be parsed from the message?
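One generic workaround I'm considering (just a sketch of my own, not a built-in option; the target name comes from my filter above) is a ruby filter that rewrites every dotted key in the parsed target after the json filter runs:

```
filter {
    json {
            source => "message"
            target => "mylogs"
            skip_on_invalid_json => true
    }
    # Sketch: replace dots in every top-level key of [mylogs] so that
    # "foo.addr" becomes "foo_addr" and no longer collides with "foo".
    ruby {
            code => '
                logs = event.get("mylogs")
                if logs.is_a?(Hash)
                    event.set("mylogs", logs.map { |k, v| [k.tr(".", "_"), v] }.to_h)
                end
            '
    }
}
```

This only handles top-level keys, though; nested hashes would need a recursive walk.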

Can you share the error you got and the output document?

Because in Logstash having a document with:

"{"foo": "bar"}"

And another with:

"{"foo.addr": "127.0.0.1"}"

Would not result in a mapping error; in the first case you would end up with the foo field, and in the second one with a field named foo.addr, which has a literal dot in its name.

Sorry for the late reply, and the missing information.

Here are some more details and things to reproduce:

logstash.conf:

input {
    http {
        host => "0.0.0.0"
        port => 5044
    }
}
filter {
        json {
            source => "message"
            target => "logs"
            skip_on_invalid_json => true
        }
}
output {
    opensearch {
        ...
    }
}

Doing this:

curl -XPOST --insecure 'localhost:5044' -H 'Content-Type: application/json' -d '{"foo": "bar"}'
curl -XPOST --insecure 'localhost:5044' -H 'Content-Type: application/json' -d '{"foo.addr": "bar"}'

I get the following error (reported by OpenSearch/Elasticsearch, not by Logstash itself):

[2022-11-29T16:47:53,144][WARN ][logstash.outputs.opensearch][main][51178d5a010d5db4ecb8fd69e0f46c9f336d6e30941d744f1e9f6ce89320ac40] Could not index event to OpenSearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"my-first-index", :routing=>nil}, {"headers"=>{"http_version"=>"HTTP/1.1", "request_path"=>"/", "content_type"=>"application/json", "content_length"=>"19", "http_user_agent"=>"curl/7.85.0", "http_accept"=>"*/*", "request_method"=>"POST", "http_host"=>"localhost:5044"}, "host"=>"10.88.0.1", "@version"=>"1", "@timestamp"=>2022-11-29T16:47:52.968Z, "foo.addr"=>"bar"}], :response=>{"index"=>{"_index"=>"my-first-index", "_type"=>"_doc", "_id"=>"TkdJxIQBApPBqyr9qzwW", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Could not dynamically add mapping for field [foo.addr]. Existing mapping for [foo] must be of type object but found [text]."}}}}
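The mapping error happens because Elasticsearch/OpenSearch expand dotted field names into nested objects when dynamically building the mapping, so "foo.addr" tries to map [foo] as an object while the first document already mapped it as text. A rough sketch of that expansion (my own hypothetical helper, not actual Elasticsearch code):

```ruby
# Simulate how Elasticsearch/OpenSearch expand dotted field names
# into nested objects during dynamic mapping.
def expand_dots(doc)
  out = {}
  doc.each do |key, value|
    parts = key.split(".")
    # Walk/create intermediate objects for all but the last path segment.
    node = parts[0..-2].reduce(out) { |acc, p| acc[p] ||= {} }
    node[parts[-1]] = value
  end
  out
end

# First document maps [foo] as a scalar (text):
expand_dots({ "foo" => "bar" })            # => { "foo" => "bar" }
# Second document needs [foo] to be an object -- hence the collision:
expand_dots({ "foo.addr" => "127.0.0.1" }) # => { "foo" => { "addr" => "127.0.0.1" } }
```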

OpenSearch/OpenDistro are AWS run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.

