Docker logs keep getting dropped with "tried to parse field [image] as object, but found a concrete value" error

I moved the issue to Docker integration no longer collects logs due to "object mapping for [container.image] tried to parse field [image] as object, but found a concrete value" error · Issue #5450 · elastic/integrations · GitHub

The internal team is looking at it; they see the issue, but there is no ETA on a fix yet.

Maybe we could relax those constraints?

@Luca_Belluccini

Yes, I thought I had fixed that, and I did on my side. I just didn't post it back.

You can see in the image that I did actually change it. I'll fix the text as well.

Done, fixed :white_check_mark:

Sorry, my bad, I hadn't checked :smiling_face:
Thank you for sharing the workaround!

After implementing the temporary workaround, I kept getting a further, somewhat similar error that caused some of the events to be dropped:

failed to parse field [event.dataset] of type [constant_keyword] in document with id 'MTqjvIYBCKefJtjV4yIM'. Preview of field's value: 'elasticsearch.server'","caused_by":{"type":"illegal_argument_exception","reason":"[constant_keyword] field [event.dataset] only accepts values that are equal to the value defined in the mappings [docker.container_logs], but got [elasticsearch.server]"}

So I extended the workaround with another processor that just drops this field:

- drop_fields:
    fields: ["event.dataset"]
    ignore_missing: true

Maybe that's related and can be fixed at the same time. For context, I run Fleet Server, Kibana, and Elasticsearch in containers, and the container logs are picked up by an Elastic Agent with the Docker integration using an ndjson parser (and now a rename and a drop_fields processor, sketched below).
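For reference, this is roughly what the combined agent-side processors config looks like now. A sketch only: the rename step is the workaround from the posts above, and you should double-check the exact syntax against your agent version:

processors:
  # Move the concrete container.image string under container.image.name,
  # matching the object mapping the integration expects.
  - rename:
      fields:
        - from: "container.image"
          to: "container.image.name"
      ignore_missing: true
      fail_on_error: false
  # Drop the conflicting constant_keyword field (later replaced by
  # the add_fields approach suggested further down).
  - drop_fields:
      fields: ["event.dataset"]
      ignore_missing: true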

Hi @Raphasle, welcome to the community!

Interesting... can you share a sample log line that has that issue?

I would probably not drop the event.dataset field; it is a pretty important field.

I would probably just set it to the correct value whenever it is not already correct, with something like this... you will need to check the syntax (I did not get a chance to):

add_fields will overwrite the existing value.

- add_fields:
    when:
      not:
        equals:
          event.dataset: "docker.container_logs"
    target: ''
    fields:
      event.dataset: "docker.container_logs"

Indeed, setting the field to the (anyway constant) value is nicer than dropping it.

One of the Elasticsearch Docker log statements that caused such an error was:

{"log":"{"@timestamp":"2023-03-07T17:59:59.196Z", "log.level": "WARN", "data_stream.dataset":"deprecation.elasticsearch","data_stream.namespace":"default","data_stream.type":"logs","elasticsearch.elastic_product_origin":"kibana","elasticsearch.event.category":"api","elasticsearch.http.request.x_opaque_id":"8121dfe1-4417-44ff-b8e5-9c796ca3c145;kibana:application:management:","event.code":"open_system_index_access","message":"this request accesses system indices: [.async-search, .security-7, .security-profile-8, .tasks, .transform-internal-005, .transform-internal-007], but in a future major version, direct access to system indices will be prevented by default" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"deprecation.elasticsearch","process.thread.name":"elasticsearch[b4ed9f9676e7][transport_worker][T#1]","log.logger":"org.elasticsearch.deprecation.cluster.metadata.IndexNameExpressionResolver","trace.id":"07162c3ff1e0f665772161f75ac8ce83","elasticsearch.cluster.uuid":"JTty1lfFSl2mwlDH4uX7kg","elasticsearch.node.id":"iVX4NNmxTjmun9VLOab2YA","elasticsearch.node.name":"b4ed9f9676e7","elasticsearch.cluster.name":"Test"}\n","stream":"stdout","time":"2023-03-07T17:59:59.197403077Z"}

The full "message" showing up then in a .ds-logs-elastic_agent.filebeat-default index was:

Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.March, 7, 17, 59, 59, 196000000, time.UTC), Meta:{"input_id":"filestream-docker-0af2355f-95f6-4b98-a06b-614cb1c73d82-docker-b4ed9f9676e7c4961f59c341b2cd106ed06e75278b46e1ad925a4ef1fdf6e894","raw_index":"logs-docker.container_logs-default","stream_id":"docker-container-logs-es01-b4ed9f9676e7c4961f59c341b2cd106ed06e75278b46e1ad925a4ef1fdf6e894"}, Fields:{"agent":{"ephemeral_id":"a4445584-0e54-4798-b796-f0adbabcb011","id":"6fcc8131-3506-4af9-90f8-054700a299f4","name":"some-hostname","type":"filebeat","version":"8.6.0"},"container":{"id":"b4ed9f9676e7c4961f59c341b2cd106ed06e75278b46e1ad925a4ef1fdf6e894","image":{"name":"some.private.registry/elasticsearch:8.6.2"},"labels":{"com_docker_compose_config-hash":"e9757ec0a983b15dfd096bdb863a692a27c05fde28a17861c540ebfd7fab1dd9","com_docker_compose_container-number":"1","com_docker_compose_depends_on":"","com_docker_compose_image":"sha256:04485c81cc2d3ae1ae68e8be87f8a5043d71496332680a5b2e58664be95461c5","com_docker_compose_oneoff":"False","com_docker_compose_project":"octopus","com_docker_compose_project_config_files":"/usr/local/bin/docker-files/octopus/docker-compose.yml","com_docker_compose_project_environment_file":"/usr/local/bin/docker-files/octopus/octopus.env","com_docker_compose_project_working_dir":"/usr/local/bin/docker-files/octopus","com_docker_compose_replace":"0a5d7f7238b908b667f6f012a2be92978ab997791b677af599f8086636269079","com_docker_compose_service":"es01","com_docker_compose_version":"2.16.0","org_label-schema_build-date":"2023-02-13T09:35:20.314882762Z","org_label-schema_license":"Elastic-License-2.0","org_label-schema_name":"Elasticsearch","org_label-schema_schema-version":"1.0","org_label-schema_url":"https://www.elastic.co/products/elasticsearch","org_label-schema_usage":"https://www.elastic.co/guide/en/elasticsearch/reference/index.html","org_label-schema_vcs-ref":"2d58d0f136141f03239816a4e360a8d17b6d8f29","org_label-schema_vcs-url":"https://github.com/elastic/elasticsearch","org_label-schema_vendor":"Elastic","org_label-schema_version":"8.6.2","org_opencontainers_image_created":"2023-02-13T09:35:20.314882762Z","org_opencontainers_image_documentation":"https://www.elastic.co/guide/en/elasticsearch/reference/index.html","org_opencontainers_image_licenses":"Elastic-License-2.0","org_opencontainers_image_ref_name":"ubuntu","org_opencontainers_image_revision":"2d58d0f136141f03239816a4e360a8d17b6d8f29","org_opencontainers_image_source":"https://github.com/elastic/elasticsearch","org_opencontainers_image_title":"Elasticsearch","org_opencontainers_image_url":"https://www.elastic.co/products/elasticsearch","org_opencontainers_image_vendor":"Elastic","org_opencontainers_image_version":"8.6.2"},"name":"es01"},"data_stream":{"dataset":"docker.container_logs","namespace":"default","type":"logs"},"data_stream.dataset":"deprecation.elasticsearch","data_stream.namespace":"default","data_stream.type":"logs","ecs":{"version":"8.0.0"},"ecs.version":"1.2.0","elastic_agent":{"id":"6fcc8131-3506-4af9-90f8-054700a299f4","snapshot":false,"version":"8.6.0"},"elasticsearch.cluster.name":"Test","elasticsearch.cluster.uuid":"JTty1lfFSl2mwlDH4uX7kg","elasticsearch.elastic_product_origin":"kibana","elasticsearch.event.category":"api","elasticsearch.http.request.x_opaque_id":"8121dfe1-4417-44ff-b8e5-9c796ca3c145;kibana:application:management:","elasticsearch.node.id":"iVX4NNmxTjmun9VLOab2YA","elasticsearch.node.name":"b4ed9f9676e7","event":{"dataset":"docker.container
_logs"},"event.code":"open_system_index_access","host":{"architecture":"x86_64","containerized":false,"hostname":"some-hostname","id":"e8a3e697270844118813786bfb68962c","ip":["many IPs"],"mac":["many mac addresses"],"name":"some-hostname","os":{"codename":"buster","family":"debian","kernel":"4.19.0-23-amd64","name":"Debian GNU/Linux","platform":"debian","type":"linux","version":"10 (buster)"}},"input":{"type":"filestream"},"log":{"file":{"path":"/var/lib/docker/containers/b4ed9f9676e7c4961f59c341b2cd106ed06e75278b46e1ad925a4ef1fdf6e894/b4ed9f9676e7c4961f59c341b2cd106ed06e75278b46e1ad925a4ef1fdf6e894-json.log"},"offset":2288689},"log.level":"WARN","log.logger":"org.elasticsearch.deprecation.cluster.metadata.IndexNameExpressionResolver","message":"this request accesses system indices: [.async-search, .security-7, .security-profile-8, .tasks, .transform-internal-005, .transform-internal-007], but in a future major version, direct access to system indices will be prevented by default","process.thread.name":"elasticsearch[b4ed9f9676e7][transport_worker][T#1]","service.name":"ES_ECS","stream":"stdout","trace.id":"07162c3ff1e0f665772161f75ac8ce83"}, Private:(*input_logfile.updateOp)(0xc0017a9e30), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [data_stream.dataset] of type [constant_keyword] in document with id 'Wdk7vYYBcPMgqNc3Mkbf'. Preview of field's value: 'deprecation.elasticsearch'","caused_by":{"type":"illegal_argument_exception","reason":"[constant_keyword] field [data_stream.dataset] only accepts values that are equal to the value defined in the mappings [docker.container_logs], but got [deprecation.elasticsearch]"}}, dropping event!

But I am sure it's not specific to this particular log message; e.g., rolling over an index causes 15 log messages that get dropped due to a mismatch of event.dataset.

I have the same problem after upgrading to 8.6.2.

The container logs were dropped because the structure of the log is:

container.image: "nameOfImage"

And the integration seems to need:

container.image:
    name: "nameOfImage"

So I used this script in the ingest pipeline:

def temp = ctx.container.image;
ctx.container.image = [:]; // map literal; [] would create a list and the next line would fail
ctx.container.image.name = temp;
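For completeness, this is roughly how it is wired in. A sketch only: the @custom pipeline name and the instanceof guard are my assumptions, so verify them for your setup:

PUT _ingest/pipeline/logs-docker.container_logs@custom
{
  "processors": [
    {
      "script": {
        "description": "Turn the concrete container.image string into an object with a name key",
        "if": "ctx.container?.image instanceof String",
        "source": "def temp = ctx.container.image; ctx.container.image = [:]; ctx.container.image.name = temp;"
      }
    }
  ]
}

The guard makes the script a no-op for documents where container.image is already an object.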

But it seems to have a very high impact on the performance of my cluster: the CPU is always at 100% and the CPU credits are exhausted.

Do you think it's because every log has to be rewritten by the cluster when it is received?

Do you think it would be better to use the "rename" processor instead of modifying the documents in the pipeline with a "script" processor?

Thanks,

I would always use simple processors over a script when possible...
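A hedged sketch of what that could look like in the same kind of custom pipeline (untested; check the syntax and the instanceof condition against your version):

PUT _ingest/pipeline/logs-docker.container_logs@custom
{
  "processors": [
    {
      "rename": {
        "if": "ctx.container?.image instanceof String",
        "field": "container.image",
        "target_field": "container.image.name",
        "ignore_missing": true
      }
    }
  ]
}

The rename processor removes container.image before it sets container.image.name, so the concrete value becomes an object in one step and no Painless script has to be compiled or run per document.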
