Docker logs keep getting dropped with "tried to parse field [image] as object, but found a concrete value" error

Ok, at this point can we call this a bug?

I just deployed a brand new instance on a brand new vm. Same error happens there.

Specifically:

  • Set up a new Ubuntu 22.04 vm. (Well, cloned an image.)
  • Updated it, then installed Docker and configured the Elastic apt repo.
  • Installed Elasticsearch.
  • Installed Kibana.
  • Configured a Fleet agent on the vm.
  • Ran docker swarm init on the vm.
  • Deployed a Swarm stack.
  • Added the Docker integration to the Fleet agent.
  • Watched the /opt/Elastic/Agent/elastic-agent-20230302-1.ndjson log until I saw the error show up.
object mapping for [container.image] tried to parse field [image] as object, but found a concrete value
{"log.level":"warn","@timestamp":"2023-03-02T04:12:30.043Z","message":"Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.March, 2, 4, 12, 28, 497605774, time.UTC), Meta:{\"input_id\":\"filestream-docker-c75a5ab3-056f-4ab5-aa79-4b18884d1eae-docker-997a42efd8598650e935b9274366a7584518912aa023a5ca9717c2f3ce9a2468\",\"raw_index\":\"logs-docker.container_logs-default\",\"stream_id\":\"docker-container-logs-bugstacker_portainer.1.r4sfr111xt99n2vx3h8oatz5l-997a42efd8598650e935b9274366a7584518912aa023a5ca9717c2f3ce9a2468\"}, Fields:{\"agent\":{\"ephemeral_id\":\"8174e92c-ae9a-48bd-bb63-882e2cf7c5e5\",\"id\":\"8e9a4ff9-1ca6-4ec5-99f7-76b2cec0e8bc\",\"name\":\"swarmelasticbug\",\"type\":\"filebeat\",\"version\":\"8.6.2\"},\"container\":{\"id\":\"997a42efd8598650e935b9274366a7584518912aa023a5ca9717c2f3ce9a2468\",\"image\":\"portainer/portainer-ce:2.17.1@sha256:9fa1ec78b4e29d83593cf9720674b72829c9cdc0db7083a962bc30e64e27f64e\",\"labels\":{\"com_docker_desktop_extension_api_version\":\"\\u003e= 0.2.2\",\"com_docker_desktop_extension_icon\":\"https://portainer-io-assets.sfo2.cdn.digitaloceanspaces.com/logos/portainer.png\",\"com_docker_extension_additional-urls\":\"[{\\\"title\\\":\\\"Website\\\",\\\"url\\\":\\\"https://www.portainer.io?utm_campaign=DockerCon\\u0026utm_source=DockerDesktop\\\"},{\\\"title\\\":\\\"Documentation\\\",\\\"url\\\":\\\"https://docs.portainer.io\\\"},{\\\"title\\\":\\\"Support\\\",\\\"url\\\":\\\"https://join.slack.com/t/portainer/shared_invite/zt-txh3ljab-52QHTyjCqbe5RibC2lcjKA\\\"}]\",\"com_docker_extension_detailed-description\":\"\\u003cp data-renderer-start-pos=\\\"226\\\"\\u003ePortainer\\u0026rsquo;s Docker Desktop extension gives you access to all of Portainer\\u0026rsquo;s rich management functionality within your docker desktop experience.\\u003c/p\\u003e\\u003ch2 data-renderer-start-pos=\\\"374\\\"\\u003eWith Portainer you can:\\u003c/h2\\u003e\\u003cul\\u003e\\u003cli\\u003eSee all your running containers\\u003c/li\\u003e\\u003cli\\u003eEasily view all of your container logs\\u003c/li\\u003e\\u003cli\\u003eConsole into containers\\u003c/li\\u003e\\u003cli\\u003eEasily deploy your code into containers using a simple form\\u003c/li\\u003e\\u003cli\\u003eTurn your YAML into custom templates for easy reuse\\u003c/li\\u003e\\u003c/ul\\u003e\\u003ch2 data-renderer-start-pos=\\\"660\\\"\\u003eAbout Portainer\\u0026nbsp;\\u003c/h2\\u003e\\u003cp data-renderer-start-pos=\\\"680\\\"\\u003ePortainer is the worlds\\u0026rsquo; most popular universal container management platform with more than 650,000 active monthly users. Portainer can be used to manage Docker Standalone, Kubernetes, Docker Swarm and Nomad environments through a single common interface. It includes a simple GitOps automation engine and a Kube API.\\u0026nbsp;\\u003c/p\\u003e\\u003cp data-renderer-start-pos=\\\"1006\\\"\\u003ePortainer Business Edition is our fully supported commercial grade product for business-wide use. It includes all the functionality that businesses need to manage containers at scale. 
Visit \\u003ca class=\\\"sc-jKJlTe dPfAtb\\\" href=\\\"http://portainer.io/\\\" title=\\\"http://Portainer.io\\\" data-renderer-mark=\\\"true\\\"\\u003ePortainer.io\\u003c/a\\u003e to learn more about Portainer Business and \\u003ca class=\\\"sc-jKJlTe dPfAtb\\\" href=\\\"http://portainer.io/take5?utm_campaign=DockerCon\\u0026amp;utm_source=Docker%20Desktop\\\" title=\\\"http://portainer.io/take5?utm_campaign=DockerCon\\u0026amp;utm_source=Docker%20Desktop\\\" data-renderer-mark=\\\"true\\\"\\u003eget 5 free nodes.\\u003c/a\\u003e\\u003c/p\\u003e\",\"com_docker_extension_publisher-url\":\"https://www.portainer.io\",\"com_docker_extension_screenshots\":\"[{\\\"alt\\\": \\\"screenshot one\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-1.png\\\"},{\\\"alt\\\": \\\"screenshot two\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-2.png\\\"},{\\\"alt\\\": \\\"screenshot three\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-3.png\\\"},{\\\"alt\\\": \\\"screenshot four\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-4.png\\\"},{\\\"alt\\\": \\\"screenshot five\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-5.png\\\"},{\\\"alt\\\": \\\"screenshot six\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-6.png\\\"},{\\\"alt\\\": \\\"screenshot seven\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-7.png\\\"},{\\\"alt\\\": \\\"screenshot eight\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-8.png\\\"},{\\\"alt\\\": \\\"screenshot nine\\\", \\\"url\\\": \\\"https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-9.png\\\"}]\",\"com_docker_stack_namespace\":\"bugstacker\",\"com_docker_swarm_node_id\":\"1iyyzpqzcrx24at81fq9gqqem\",\"com_docker_swarm_service_id\":\"tue3i6n9krenke6axufqiq7c8\",\"com_docker_swarm_service_name\":\"bugstacker_portainer\",\"com_docker_swarm_task\":\"\",\"com_docker_swarm_task_id\":\"r4sfr111xt99n2vx3h8oatz5l\",\"com_docker_swarm_task_name\":\"bugstacker_portainer.1.r4sfr111xt99n2vx3h8oatz5l\",\"io_portainer_server\":\"true\",\"org_opencontainers_image_description\":\"Docker container management made simple, with the world’s most popular GUI-based container management 
platform.\",\"org_opencontainers_image_title\":\"Portainer\",\"org_opencontainers_image_vendor\":\"Portainer.io\"},\"name\":\"bugstacker_portainer.1.r4sfr111xt99n2vx3h8oatz5l\"},\"data_stream\":{\"dataset\":\"docker.container_logs\",\"namespace\":\"default\",\"type\":\"logs\"},\"ecs\":{\"version\":\"8.0.0\"},\"elastic_agent\":{\"id\":\"8e9a4ff9-1ca6-4ec5-99f7-76b2cec0e8bc\",\"snapshot\":false,\"version\":\"8.6.2\"},\"event\":{\"dataset\":\"docker.container_logs\"},\"host\":{\"architecture\":\"x86_64\",\"containerized\":false,\"hostname\":\"swarmelasticbug\",\"id\":\"68d8fef98e5540419935d1d9d3d1601a\",\"ip\":[\"192.168.1.130\",\"fe80::d4ba:ceff:fec1:c069\",\"172.17.0.1\",\"172.18.0.1\",\"fe80::42:46ff:fe86:cc9c\",\"fe80::c8e8:93ff:fe48:c29a\",\"fe80::82d:f8ff:fee4:4db1\",\"fe80::28dd:23ff:fe56:a159\"],\"mac\":[\"02-42-46-86-CC-9C\",\"02-42-C1-A0-92-4F\",\"0A-2D-F8-E4-4D-B1\",\"2A-DD-23-56-A1-59\",\"CA-E8-93-48-C2-9A\",\"D6-BA-CE-C1-C0-69\"],\"name\":\"swarmelasticbug\",\"os\":{\"codename\":\"jammy\",\"family\":\"debian\",\"kernel\":\"5.15.0-67-generic\",\"name\":\"Ubuntu\",\"platform\":\"ubuntu\",\"type\":\"linux\",\"version\":\"22.04.2 LTS (Jammy Jellyfish)\"}},\"input\":{\"type\":\"filestream\"},\"log\":{\"file\":{\"path\":\"/var/lib/docker/containers/997a42efd8598650e935b9274366a7584518912aa023a5ca9717c2f3ce9a2468/997a42efd8598650e935b9274366a7584518912aa023a5ca9717c2f3ce9a2468-json.log\"},\"offset\":13461},\"message\":\"2023/03/02 04:12AM ERR github.com/portainer/portainer/api/internal/endpointutils/endpointutils.go:172 \\u003e final error while detecting storage classes | error=\\\"unsupported environment type\\\" stack_trace=[{\\\"func\\\":\\\"(*ClientFactory).CreateClient\\\",\\\"line\\\":\\\"157\\\",\\\"source\\\":\\\"client.go\\\"},{\\\"func\\\":\\\"(*ClientFactory).createCachedAdminKubeClient\\\",\\\"line\\\":\\\"132\\\",\\\"source\\\":\\\"client.go\\\"},{\\\"func\\\":\\\"(*ClientFactory).GetKubeClient\\\",\\\"line\\\":\\\"77\\\",\\\"source\\\":\\\"client.go\\\"},{\\\"func\\\":\\\"storageDetect\\\",\\\"line\\\":\\\"133\\\",\\\"source\\\":\\\"endpointutils.go\\\"},{\\\"func\\\":\\\"InitialStorageDetection.func1\\\",\\\"line\\\":\\\"171\\\",\\\"source\\\":\\\"endpointutils.go\\\"},{\\\"func\\\":\\\"goexit\\\",\\\"line\\\":\\\"1594\\\",\\\"source\\\":\\\"asm_amd64.s\\\"}]\\n\",\"stream\":\"stderr\"}, Private:(*input_logfile.updateOp)(0xc000684540), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [container.image] tried to parse field [image] as object, but found a concrete value\"}, dropping event!","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"elasticsearch","log.origin":{"file.line":429,"file.name":"elasticsearch/client.go"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}

@leandrojmp Er, there is no install folder in an 8.6.2 install. Also, I did find the file mentioned in a previous post; it's at /opt/Elastic/Agent/data/elastic-agent-913c02/components/filebeat.yml. Though I'm not 100% positive that's the correct file to edit. I removed the kube processor line, and the error is still showing up after an agent restart.

Maybe Elastic changed it; if I'm not wrong, they were working on a new version of the Agent.

I'm still on 8.5; I haven't updated to 8.6 yet.

I just checked one of my servers that isn't in swarm mode. The same error is present in the elastic agent logs there.

Hopefully I'm not jumping the gun, but I went ahead and filed a bug: Docker integration no longer collects logs due to "object mapping for [container.image] tried to parse field [image] as object, but found a concrete value" error · Issue #2347 · elastic/elastic-agent · GitHub

That sounds like the right approach, thanks!

Do you mean v2 of the protocol in Agent? I think that was the major recent change, but it shouldn't affect the mapping of documents.

Just driving by.... I've helped on a number of these mapping issues and they're always tough to figure out.

The ECS field for container image has been an object for quite some time (perhaps that was already discussed).

Question: if you remove the add metadata processors, do you see the same error?

Have you checked to see if your raw JSON logs already have container.image in them? Even without the processors?
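
If it helps, a quick way to check (just a sketch, assuming the default json-file logging driver and that jq is installed) is to list the distinct key sets that appear in the raw log files:

# Sketch: print the distinct sets of JSON keys in the raw container log files.
# Assumes the default json-file logging driver and that jq is available.
sudo sh -c 'jq -c keys /var/lib/docker/containers/*/*-json.log' | sort -u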

Assuming /opt/Elastic/Agent/data/elastic-agent-913c02/components/filebeat.yml actually is the correct file to edit, I tried removing just the kube processor, and then all the processors.

#processors:
#  - add_host_metadata:
#      when.not.contains.tags: forwarded
#  - add_cloud_metadata: ~
#  - add_docker_metadata: ~
#  - add_kubernetes_metadata: ~

I restarted elastic-agent both times, and the error still showed up.
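
For what it's worth, the restart itself was just the usual service restart (a sketch, assuming the agent runs as a systemd service):

# Restart the Elastic Agent service after editing the config (systemd install assumed).
sudo systemctl restart elastic-agent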

I need to check some other things later....

About the other thing I asked, did you look at the raw container logs and see if there are any fields related to container?

Are the raw logs in JSON format?

Ah, sorry, I didn't reply to those specifically...

So, the raw Docker logs are in JSON format.

I'm 90% sure the container.image field is added by Elastic Agent/Filebeat. The Docker logs just contain the log message, the timestamp, and which stream it came from.

For example, these logs from a test httpd container:

{"log":"AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.19.0.4. Set the 'ServerName' directive globally to suppress this message\n","stream":"stderr","time":"2023-03-05T18:57:54.639249021Z"}
{"log":"AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.19.0.4. Set the 'ServerName' directive globally to suppress this message\n","stream":"stderr","time":"2023-03-05T18:57:54.640026129Z"}
{"log":"[Sun Mar 05 18:57:54.639590 2023] [mpm_event:notice] [pid 1:tid 140406970104648] AH00489: Apache/2.4.55 (Unix) configured -- resuming normal operations\n","stream":"stderr","time":"2023-03-05T18:57:54.640050495Z"}
{"log":"[Sun Mar 05 18:57:54.639634 2023] [core:notice] [pid 1:tid 140406970104648] AH00094: Command line: 'httpd -D FOREGROUND'\n","stream":"stderr","time":"2023-03-05T18:57:54.640055484Z"}

There isn't any extra info in them.

I did a quick skim of the logs on my little test vm. I only see the log, stream, and time fields. When the log message happens to be json, it is escaped with backslashes.

Does that answer your questions?

Yes but I needed to ask... (Long story there)

Is this standalone or fleet managed?

I will try to replicate when I get a chance... I don't think just editing out the processor in the filebeat.yml like you did will work...

Yeah, but the change in question is about the location of the filebeat.yml file used by the agent. On 8.5.1 it is at the following path:

/opt/Elastic/Agent/data/elastic-agent-026915/install/filebeat-8.5.1-linux-x86_64

But as the OP said, a similar path does not exist on version 8.6.1, so it seems that this changed; it looks like the file is now at a path like the following:

/opt/Elastic/Agent/data/elastic-agent-913c02/components/filebeat.yml
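
Since that hashed directory name (elastic-agent-913c02 here) differs between builds, searching for the file is probably safer than hard-coding the path; a sketch:

# The hashed data directory name changes per build, so locate the file instead
# of hard-coding the path.
sudo find /opt/Elastic/Agent/data -name filebeat.yml 2>/dev/null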

Fleet managed.

And I assume you are using the Docker integration?

Working on replicating...

Yep. The Docker integration.

@jerrac
cc @leandrojmp @xeraa

I believe I have reproduced your issue exactly... which is both good :slight_smile: and, of course, bad :frowning:

I will look closer, but my recommendation for the time being is to go back to 8.5.3... I will provide you with a workaround if I can...

What I did was install agent 8.5.3 with Docker and generate some logs by starting a Postgres Docker container... Everything looks good.

Then I simply upgraded the Agent to 8.6.2, and it broke the integration with the same error:

 Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [container.image] tried to parse field [image] as object, but found a concrete value\"}, dropping event!","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.origin":{"file.line":429,"file.name":"elasticsearch/client.go"},"service.name":"filebeat","ecs.version":"1.6.0","log.logger":"elasticsearch","ecs.version":"1.6.0"}

I am going to test a few more things....

Note 1: Interestingly, it looks like only the container logs are broken... not the container metrics, so that is a +1 in that category.


@jerrac I have a super simple workaround!!! It works...

I will still work behind the scenes to get this fixed

So, in the Docker integration, under Container Logs -> Advanced Settings -> Processors, I added this:

- rename:
    fields:
      - from: "container.image"
        to: "container.image.name"
    ignore_missing: true
    fail_on_error: false


The processor renames the field before the event is sent to Elasticsearch, and now here are the fixed logs :slight_smile:
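
If you want to confirm events are flowing again once the updated policy reaches the agent, a simple count against the data stream should start climbing; a sketch, where the host and credentials are placeholders to adjust for your cluster:

# Placeholder host and credentials: adjust for your cluster and security setup.
# The count should increase once the renamed events start indexing again.
curl -sk -u elastic:changeme \
  "https://localhost:9200/logs-docker.container_logs-default/_count?pretty"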


Awesome. It worked on my home cluster. :slight_smile:

Also, thanks for confirming that it wasn't just me. Not very often that happens...

FYI, it does take a bit for the change to start working. I'm guessing I'm just impatient, but figured I'd note that just in case others are as impatient as I am. I could easily see myself thinking it isn't working and breaking it while trying to fix it...

What's a while? It does have to redeploy the changes to the agent, which can take a minute or two.

How long of a delay are you seeing? Mine took a couple of minutes at most... If it is under heavy load, it might take longer...

It was 3-5 minutes. Like I said, I was impatient. Though at least one of my nodes might still be under heavy load... I'm pushing cheap VPSes as hard as I can...


Like I said in the other thread, you can't downgrade the agent version through Fleet, but you certainly can just remove it and add back the version you want.

In the Fleet UI, where it gives you the commands, just change the version number.

To uninstall, it's just:

/opt/Elastic/Agent/elastic-agent uninstall
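
And reinstalling a specific version looks roughly like this (a sketch; the version, Fleet Server URL, and enrollment token are placeholders you would take from the Fleet UI's "Add agent" instructions):

# Placeholders: substitute the version, Fleet Server host, and enrollment token
# shown in the Fleet UI.
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.5.3-linux-x86_64.tar.gz
tar xzf elastic-agent-8.5.3-linux-x86_64.tar.gz
cd elastic-agent-8.5.3-linux-x86_64
sudo ./elastic-agent install --url=https://<fleet-server-host>:8220 --enrollment-token=<token>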