Docker logs keep getting dropped with "tried to parse field [image] as object, but found a concrete value" error

When investigating why I couldn't find my docker logs in Elastic, I found that Elastic Agent has been dropping them. It keeps logging stuff like:

{"log.level":"warn","@timestamp":"2023-02-22T18:48:50.007-0800","message":"Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.February, 23, 2, 48, 40, 563765513, time.UTC), Meta:{\"input_id\":\"filestream-docker-5955b4bf-384a-4e94-9535-6b6a47fe12be-docker-d3cb87b0cb202c49f3176bbb923902dc0bea04c32223f025d4741e8d2ea1c5fe\",\"raw_index\":\"logs-docker.container_logs-tipperthecat\",\"stream_id\": <<<<< log info I don't want public >>>>> \"os\":{\"codename\":\"jammy\",\"family\":\"debian\",\"kernel\":\"5.15.0-60-generic\",\"name\":\"Ubuntu\",\"platform\":\"ubuntu\",\"type\":\"linux\",\"version\":\"22.04.1 LTS (Jammy Jellyfish)\"}},\"input\":{\"type\":\"filestream\"},\"log\":{\"file\":{\"path\":\"/var/lib/docker/containers/d3cb87b0cb202c49f3176bbb923902dc0bea04c32223f025d4741e8d2ea1c5fe/d3cb87b0cb202c49f3176bbb923902dc0bea04c32223f025d4741e8d2ea1c5fe-json.log\"},\"offset\":169697},\"message\":\"127.0.0.1 - - [22/Feb/2023:18:48:40 -0800] \\\"GET /robots.txt HTTP/1.1\\\" 200 237 \\\"-\\\" \\\"curl/7.64.0\\\"\\n\",\"stream\":\"stdout\"}, Private:(*input_logfile.updateOp)(0xc0019468a0), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [container.image] tried to parse field [image] as object, but found a concrete value\"}, dropping event!","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","ecs.version":"1.6.0","log.logger":"elasticsearch","log.origin":{"file.line":429,"file.name":"elasticsearch/client.go"},"ecs.version":"1.6.0"}

The key line, I think, being:

"reason\":\"object mapping for [container.image] tried to parse field [image] as object, but found a concrete value\"}, dropping event!",

Google turned up results for people who had messed with their mappings or were creating new mappings.

I haven't done either. This is a few-weeks-old 8.6.1 single-node instance on my home desktop. I haven't done any customization beyond basic Fleet integration configuration.

I did try reinstalling the Docker integration via its settings tab. Didn't help.

System info:

Elasticsearch/Kibana 8.6.1 on an Ubuntu 22.04 VM. Elastic Agent 8.6.1 on a couple of other Ubuntu 22.04 VMs. None of the ELK stuff is in Docker.

Anyone have any ideas?

I've found the same problem on my cluster at work.

On that cluster I had a custom ingest pipeline set up, so I removed that. It didn't change anything.

I double-checked that I wasn't just filtering things out by accident by looking at the Docker logs data stream's backing indices. The one dated 2023.02.22 has 0 docs. The index from 2023.01.19 has 6+ million docs.
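
(For anyone wanting to run the same doc-count check from Dev Tools instead of the Index Management screen, it's roughly the request below; the pattern matches the hidden backing indices Fleet creates for the Docker integration, so adjust the namespace part for your setup:)

GET _cat/indices/.ds-logs-docker.container_logs-*?v&h=index,docs.count,creation.date.string&s=creation.date&expand_wildcards=all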

My work cluster is running three 8.6.1 nodes on Ubuntu 18.04. The agents that are supposed to be grabbing Docker logs are installed at the system level, not in Docker.

Any ideas?

Edit:
This sure sounds like my issue, but how in the world did a mapping get messed up when I've been letting Elastic Agent/Kibana/etc. manage all of that for me?

It could also just be a bug? In the end, the data you get doesn't fit the mapping you have. Now the question is why and who is at fault here.

I looked at the first log line you posted, the one that included (status=400): {\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [container.image] tried to parse field [image] as object, but found a concrete value\"}. There's no container.image in that log message, which I find a bit confusing. Do you happen to have a log line that does include container.image, so we can figure that out?

@xeraa I removed that part of the log message. container.name is in there, though.

Ended up taking part of today off work due to a cold, but before that I did try deleting the Docker logs data stream (it was empty) so that Agent would recreate it from scratch. The same issue cropped right back up. So either deleting the data stream doesn't affect the mapping, or Agent is the one setting the mapping up wrong. Could also be both, if Agent is setting up the mapping another way...
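
(The delete itself is just the data stream API, something like the line below with the namespace swapped for whatever your Docker integration writes to; the data stream should then be recreated from its index template the next time an event comes in:)

DELETE _data_stream/logs-docker.container_logs-<namespace>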

Yeah, so I would make sure the data that you get matches what is in the index (template). Not sure where that mismatch is coming from but somehow this doesn't seem to fit together.

Um, so, if I'm getting that error, doesn't that mean the data is NOT matching the template?

If I look at my logs, I have something like:

"container": {
...
"name":"name_of_container.taskslot.taskid",
...
}

And if that isn't matching the template, then doesn't that mean the Elastic Agent is formatting the data wrong? Since I have NOT done any customization of mappings or templates.

Sorry, trying to do too many things in parallel. Let me restart with the original error message:

object mapping for [container.image] tried to parse field [image] as object, but found a concrete value

This basically says that the mapping expects a structure like { "container": { "image": { "name": "some-value" } } }, so container and image are both objects and not a concrete value like a keyword field. But the document you tried to store looks like { "container": { "image": "some-value" } }, so image is a concrete value and not an object. That combination is not possible.
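
You can reproduce exactly that conflict in isolation with something like this in Dev Tools (throwaway index name and a trimmed-down mapping, just for illustration):

PUT mapping-conflict-demo
{
  "mappings": {
    "properties": {
      "container": {
        "properties": {
          "image": {
            "properties": {
              "name": { "type": "keyword", "ignore_above": 1024 }
            }
          }
        }
      }
    }
  }
}

# container.image is mapped as an object, so indexing a concrete value fails
# with the same mapper_parsing_exception you are seeing:
POST mapping-conflict-demo/_doc
{
  "container": {
    "image": "docker.elastic.co/elasticsearch/elasticsearch:7.14.1"
  }
}

# whereas the nested form matches the mapping and is accepted:
POST mapping-conflict-demo/_doc
{
  "container": {
    "image": { "name": "docker.elastic.co/elasticsearch/elasticsearch:7.14.1" }
  }
}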

So somehow you need to figure out why the data arriving is not consistent / not matching what you have in your index (template). Could be fixable through an ingest pipeline or somewhere at the source.
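
If you go the ingest pipeline route, a rename processor is the usual approach. A minimal sketch (the pipeline name is made up, and for a Fleet-managed data stream you would normally hook this in through a @custom pipeline for the data stream rather than editing the managed pipeline directly):

PUT _ingest/pipeline/fix-docker-container-image
{
  "description": "Move a concrete container.image value into container.image.name (sketch)",
  "processors": [
    {
      "rename": {
        "field": "container.image",
        "target_field": "container.image.name",
        "if": "ctx.container?.image instanceof String",
        "ignore_missing": true
      }
    }
  ]
}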

Ok, found another example of that error from just a few minutes ago.

It's from a completely separate instance of ES 7.x I have running for an app.

Here is the somewhat sanitized log line:

{"log.level":"warn","@timestamp":"2023-02-28T10:02:48.594-0800","message":"Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.February, 28, 18, 2, 47, 585511795, time.UTC), Meta:{"input_id":"filestream-docker-f25e7a82-6ad5-4bdd-a42e-4a77d35257c7-docker-6811692bdedb8ac80cfc9712bf8dad612112b90ce5bf5e0d7ede4167fa3ee861","raw_index":"logs-docker.container_logs-testswarmstack","stream_id":"docker-container-logs-projectbook_projectbook-es.1.t06e5as3a0h6lrctz0aamm4yc-6811692bdedb8ac80cfc9712bf8dad612112b90ce5bf5e0d7ede4167fa3ee861"}, Fields:{"agent":{"ephemeral_id":"c7411b53-b096-4acd-ae54-80631c2076d3","id":"ca3b3d79-8062-4ccb-87e4-cb25103cc152","name":"sw-test-worker-01","type":"filebeat","version":"8.6.2"},"container":{"id":"6811692bdedb8ac80cfc9712bf8dad612112b90ce5bf5e0d7ede4167fa3ee861","image":"docker.elastic.co/elasticsearch/elasticsearch:7.14.1@sha256:2dcd2f31e246a8b13995ba24922da2edc3d88e65532ff301d0b92cb1be358af5","labels":{"alertdrop_identifier":"projectbook-es-{{ .Task.Slot }}","com_docker_stack_namespace":"projectbook","com_docker_swarm_node_id":"nci87t7dkn0n85k3o80g9dy5x","com_docker_swarm_service_id":"xqz1ovg84n5y1tof34egl6zxm","com_docker_swarm_service_name":"projectbook_projectbook-es","com_docker_swarm_task":"","com_docker_swarm_task_id":"t06e5as3a0h6lrctz0aamm4yc","com_docker_swarm_task_name":"projectbook_projectbook-es.1.t06e5as3a0h6lrctz0aamm4yc","project_image_type":"elasticsearch","project_project":"projectbook","project_service_green_count":"3","project_service_red_count":"1","project_service_type":"longrun","project_service_yellow_count":"2","org_label-schema_build-date":"2021-08-26T09:01:05.390870785Z","org_label-schema_license":"Elastic-License-2.0","org_label-schema_name":"Elasticsearch","org_label-schema_schema-version":"1.0","org_label-schema_url":"https://www.elastic.co/products/elasticsearch","org_label-schema_usage":"https://www.elastic.co/guide/en/elasticsearch/reference/index.html","org_label-schema_vcs-ref":"66b55ebfa59c92c15db3f69a335d500018b3331e","org_label-schema_vcs-url":"https://github.com/elastic/elasticsearch","org_label-schema_vendor":"Elastic","org_label-schema_version":"7.14.1","org_opencontainers_image_created":"2021-08-26T09:01:05.390870785Z","org_opencontainers_image_documentation":"https://www.elastic.co/guide/en/elasticsearch/reference/index.html","org_opencontainers_image_licenses":"Elastic-License-2.0","org_opencontainers_image_revision":"66b55ebfa59c92c15db3f69a335d500018b3331e","org_opencontainers_image_source":"https://github.com/elastic/elasticsearch","org_opencontainers_image_title":"Elasticsearch","org_opencontainers_image_url":"https://www.elastic.co/products/elasticsearch","org_opencontainers_image_vendor":"Elastic","org_opencontainers_image_version":"7.14.1"},"name":"projectbook_projectbook-es.1.t06e5as3a0h6lrctz0aamm4yc"},"data_stream":{"dataset":"docker.container_logs","namespace":"testswarmstack","type":"logs"},"ecs":{"version":"8.0.0"},"elastic_agent":{"id":"ca3b3d79-8062-4ccb-87e4-cb25103cc152","snapshot":false,"version":"8.6.2"},"event":{"dataset":"docker.container_logs"},"host":{"architecture":"x86_64","containerized":false,"hostname":"sw-test-worker-01","id":"24736967f71a4771a861e7db1a31d901","ip":["fe80::10d1:79ff:fea9:9021","10.225.225.33","fe80::250:56ff:feae:3f6b","172.16.30.33","fe80::250:56ff:feae:6430","172.18.0.1","fe80::42:7aff:fe0c:430","172.17.0.1","fe80::24b4:43ff:fe75:ae39","fe80::1021:dff:fe2d:d924","fe80::c849:57ff:fe4a:c1a8","fe80::94ae:20ff:fed4:14c9","fe80::8c6:9cff:fe6c:efa6
","fe80::70df:6fff:feaa:d821","fe80::9cb5:d2ff:fef5:ae52","fe80::1c75:17ff:feac:f1b","fe80::9022:dbff:feb5:941","fe80::7c66:ddff:fe45:c7bc","fe80::7caa:58ff:fe9f:7db1","fe80::b806:95ff:fec3:6bd5","fe80::d85d:e8ff:feb6:3121","fe80::98a0:d7ff:fe09:6753","fe80::c844:48ff:feb4:ca9a","fe80::d423:3fff:fe81:4752","fe80::1c8c:51ff:fedf:c87a","fe80::3c9d:9fff:fe24:9377","fe80::e8cf:2ff:fe8c:67d0","fe80::7c99:caff:fefa:ebd1","fe80::7cc6:2bff:fe58:7f","fe80::58d6:79ff:fe85:56b4","fe80::dc93:67ff:fe5e:852","fe80::dc6d:4aff:fe12:e5fe","fe80::90da:d7ff:fe6e:539f","fe80::1c1e:f1ff:fee0:5ce","fe80::843b:7aff:fe6f:7bb","fe80::d0d7:94ff:fe5e:dbd9","fe80::40fb:92ff:feaa:8bc7","fe80::a0b5:48ff:fed9:a539","fe80::a8b7:4dff:feac:154b","fe80::28ef:2fff:fedf:9e9a","fe80::d0bd:56ff:fe70:9ca5"],"mac":["00-50-56-AE-3F-6B","00-50-56-AE-64-30","02-42-7A-0C-04-30","02-42-B0-3D-49-07","0A-C6-9C-6C-EF-A6","12-21-0D-2D-D9-24","12-D1-79-A9-90-21","1E-1E-F1-E0-05-CE","1E-75-17-AC-0F-1B","1E-8C-51-DF-C8-7A","26-B4-43-75-AE-39","2A-EF-2F-DF-9E-9A","3E-9D-9F-24-93-77","42-FB-92-AA-8B-C7","5A-D6-79-85-56-B4","72-DF-6F-AA-D8-21","7E-66-DD-45-C7-BC","7E-99-CA-FA-EB-D1","7E-AA-58-9F-7D-B1","7E-C6-2B-58-00-7F","86-3B-7A-6F-07-BB","92-22-DB-B5-09-41","92-DA-D7-6E-53-9F","96-AE-20-D4-14-C9","9A-A0-D7-09-67-53","9E-B5-D2-F5-AE-52","A2-B5-48-D9-A5-39","AA-B7-4D-AC-15-4B","BA-06-95-C3-6B-D5","CA-44-48-B4-CA-9A","CA-49-57-4A-C1-A8","D2-BD-56-70-9C-A5","D2-D7-94-5E-DB-D9","D6-23-3F-81-47-52","DA-5D-E8-B6-31-21","DE-6D-4A-12-E5-FE","DE-93-67-5E-08-52","EA-CF-02-8C-67-D0"],"name":"sw-test-worker-01","os":{"codename":"jammy","family":"debian","kernel":"5.15.0-58-generic","name":"Ubuntu","platform":"ubuntu","type":"linux","version":"22.04.1 LTS (Jammy Jellyfish)"}},"input":{"type":"filestream"},"log":{"file":{"path":"/var/lib/docker/containers/6811692bdedb8ac80cfc9712bf8dad612112b90ce5bf5e0d7ede4167fa3ee861/6811692bdedb8ac80cfc9712bf8dad612112b90ce5bf5e0d7ede4167fa3ee861-json.log"},"offset":18760},"message":"{\\"type\\": \\"server\\", \\"timestamp\\": \\"2023-02-28T18:02:47,585Z\\", \\"level\\": \\"INFO\\", \\"component\\": \\"o.e.p.PluginsService\\", \\"cluster.name\\": \\"master-projectbook\\", \\"node.name\\": \\"testprojectbookswarm1\\", \\"message\\": \\"loaded module [x-pack-security]\\" }\\n","stream":"stdout"}, Private:(*input_logfile.updateOp)(0xc00165fb60), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"object mapping for [container.image] tried to parse field [image] as object, but found a concrete value"}, dropping event!","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"elasticsearch","log.origin":{"file.line":429,"file.name":"elasticsearch/client.go"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}

If you dig into that, the image field looks like:

"image":"docker.elastic.co/elasticsearch/elasticsearch:7.14.1@sha256:2dcd2f31e246a8b13995ba24922da2edc3d88e65532ff301d0b92cb1be358af5",

I tracked down the mapping for that data stream and found this:

{
  "_meta": {
    "package": {
      "name": "docker"
    },
    "managed_by": "fleet",
    "managed": true
  }
}

If I track down the actual index via hidden indices on the Index Management screen, I see this for the mapping:

{
  "mappings": {
    "_meta": {
      "managed_by": "fleet",
      "managed": true,
      "package": {
        "name": "docker"
      }
    },
    "_data_stream_timestamp": {
      "enabled": true
    },
    "dynamic_templates": [
      {
        "strings_as_keyword": {
          "match_mapping_type": "string",
          "mapping": {
            "ignore_above": 1024,
            "type": "keyword"
          }
        }
      }
    ],
    "date_detection": false,
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "container": {
        "properties": {
          "id": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "image": {
            "properties": {
              "name": {
                "type": "keyword",
                "ignore_above": 1024
              }
            }
          },
          "name": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "runtime": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      },
      "data_stream": {
        "properties": {
          "dataset": {
            "type": "constant_keyword"
          },
          "namespace": {
            "type": "constant_keyword"
          },
          "type": {
            "type": "constant_keyword"
          }
        }
      },
      "ecs": {
        "properties": {
          "version": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      },
      "event": {
        "properties": {
          "agent_id_status": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "dataset": {
            "type": "constant_keyword",
            "value": "docker.container_logs"
          },
          "ingested": {
            "type": "date",
            "format": "strict_date_time_no_millis||strict_date_optional_time||epoch_millis"
          },
          "module": {
            "type": "constant_keyword",
            "value": "docker"
          }
        }
      },
      "host": {
        "properties": {
          "architecture": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "ip": {
            "type": "ip"
          },
          "mac": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "name": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "os": {
            "properties": {
              "family": {
                "type": "keyword",
                "ignore_above": 1024
              },
              "full": {
                "type": "keyword",
                "ignore_above": 1024,
                "fields": {
                  "text": {
                    "type": "match_only_text"
                  }
                }
              },
              "kernel": {
                "type": "keyword",
                "ignore_above": 1024
              },
              "name": {
                "type": "keyword",
                "ignore_above": 1024,
                "fields": {
                  "text": {
                    "type": "match_only_text"
                  }
                }
              },
              "platform": {
                "type": "keyword",
                "ignore_above": 1024
              },
              "version": {
                "type": "keyword",
                "ignore_above": 1024
              }
            }
          },
          "type": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      },
      "input": {
        "properties": {
          "type": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      },
      "log": {
        "properties": {
          "file": {
            "properties": {
              "path": {
                "type": "keyword",
                "ignore_above": 1024
              }
            }
          },
          "offset": {
            "type": "long"
          }
        }
      },
      "message": {
        "type": "keyword",
        "ignore_above": 1024
      },
      "service": {
        "properties": {
          "address": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "type": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      },
      "stream": {
        "type": "keyword",
        "ignore_above": 1024
      }
    }
  }
}
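
(The same information can be pulled over the API, if that's easier than clicking through the UI; the data stream name here is the one from the log line above:)

GET logs-docker.container_logs-testswarmstack/_mapping

# or just the field in question:
GET logs-docker.container_logs-testswarmstack/_mapping/field/container.image*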

So, as far as I can tell, the mapping should be correct, and the data should be correct. The data is a keyword, and the mapping for container.image is a keyword.

The only thing that might be an object in the data is the list of labels. I am curious why there is no mapping for the labels.... But I'd think the error would be about labels, not image, if that was the issue.

Any ideas?

Yeah, this should be container.image.name, which is also the name in ECS, and that's what your error message is telling you. I misread the error at first as well (I'll fix that).

And while it's a bit hard to read, this is also what your mapping specifies under container:

"image": {
    "properties": {
        "name": {
            "type": "keyword",
            "ignore_above": 1024
        }
    }
}

Ok, yeah, now I see how the data coming in is not correct.

Which leaves the question, why?

If it were just my work cluster, I'd think there was some leftover configuration buried somewhere (I've had it running since 6.x, I think). But my brand-new home cluster has the exact same issue.

How do I dig up where that misconfiguration is when all the config is handled by Agent and Fleet? I've tried grepping through /opt/Elastic/Agent a few times, and never had much luck figuring out where the actual Filebeat config is...

At some point we switched that naming convention: beats/fields.yml at 7d27bf4a2a79ed99763a7f0963e47cf0b1092e30 · elastic/beats · GitHub

Is there any chance you have either an old Beat or an old config running somewhere?

On the work cluster, maybe... But the specific servers I've been seeing this on are younger than the commit you linked. And I know I've never run anything other than Elastic Agent on them.

Plus, I started a brand new namespace for them. (As in, the Advanced options section at the top of the integration settings, as well as in the Fleet policy settings screen.) Wouldn't that have started up a completely fresh data stream without any leftovers? Or am I totally not understanding how that all works?

On my home cluster, I started it at 8.6.1. I've never had anything else pointed at it, so there's no way it would have any configuration except what Agent provides.

But this is also in the integrations of Agent: integrations/base-fields.yml at 34e722e82ab66dcdea59d2630cbfa5d0d4891037 · elastic/integrations · GitHub

And it's still there on the main branch. I'm just a bit unsure why, and why this is hitting you now. I don't think I've seen this one before.

This all started right around when I upgraded my work cluster to 8.6.1. Though it's a little fuzzy since I took a few days to get around to upgrading all the Elastic Agents.

I'll check tomorrow, but I'm wondering whether I'm also seeing this where I have standalone Docker deployed... So far I've just been looking at VMs with Docker Swarm on them.

Is there any chance Swarm mode is making Filebeat think it's looking at a container in Kube?

The only other thought is that I did have a test Kube cluster running a while ago. But I'm pretty sure those logs have long since aged out. I don't think I have any of them even in my snapshots... And that wouldn't explain my home cluster running into the same issue either... I have yet to find time to set up Kube at home for learning it...

When Filebeat starts up, it should tell you what modules it activates, what files are being harvested, and so on. That might give you a clue.

With Swarm you're probably in less charted waters :sweat_smile:

Ok, off topic, but....

Yeah, I know. It's actually really annoying that Kube gets all the attention. For small organizations like my workplace, or at home, Kube is overkill and I honestly think it's a terrible idea. Swarm has its warts, but it works. For large organizations that have the manpower to run it properly, Kube is great. But all the attention on Kube leaves smaller orgs in the dust.

It's not just Kube vs. Swarm either. When you're small, even companies like Elastic or GitLab don't really have any options for you. (Off on another topic... Elastic's all-or-nothing pricing scheme has its benefits, but when you're in a budget crunch tight enough that you can't get $10/month out of your boss for something like off-site backups, there's no way we can afford even the cheapest tier.)


Back on topic.

I found

/opt/Elastic/Agent/data/elastic-agent-b8553c/components/filebeat.yml.

In it I see that the add_kubernetes_metadata processor is configured by default.

I'd guess that's part of the issue, but it's still odd that it only started to crop up with 8.6.1. I'm 90% sure I've seen it configured by default in various filebeat.yml files for years...

How would I go about getting that out of the Elastic Agent config?

Er, on the topic of my minor complaining above, just to be clear: I do appreciate the community/free tiers of Elastic and GitLab. I'm just frustrated that some features I could really use are behind a paywall that I can't get over. I'm well aware of the need for companies to make money and would happily pay if I could.

Elastic's all-or-nothing pricing scheme

Our answer to that is Elastic Cloud. It's more scalable and easier to manage than any on-prem licensing, support, and so on.

For disabling it: I think there is a toggle called "Add metadata" in the Fleet UI.

We did look at cloud as well as on-prem.

Anyway, where is the toggle? I can't find it. The only configuration for processors I'm seeing is the text boxes in some of the integration settings screens, and they're empty.

In the docs I found Add Kubernetes metadata | Fleet and Elastic Agent Guide [8.6] | Elastic, but there isn't any mention of how to disable it via Fleet.

I don't think it's possible to disable it via the UI, at least not on version 8.5.1, which is the one I'm using.

You can, however, disable it by manually editing the filebeat.yml file on the Elastic Agent server.

It will be on a path similar to this:

/opt/Elastic/Agent/data/elastic-agent-026915/install/filebeat-8.5.1-linux-x86_64