@matschaffer, thanks for your input.
The issue is not that the mapping doesn't exist, but that it is of type `long`, while Metricbeat is reporting the value `max`. The same value can be seen from the cluster stats API, so to me Metricbeat is doing the right thing. I don't have any cgroup-based restrictions enforced, so I assume that is why the value is `max`. I have tried building an ingest pipeline to clean this up, but it hasn't helped (probably further to travel down that route; a rough sketch of what I mean is below the error).
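For what it's worth, this is roughly how I double-check the raw value on a node (`localhost:9200` is a placeholder for my redacted host, and I'm assuming the value is exposed as `os.cgroup.memory.limit_in_bytes` in the node stats API):

```
curl -s 'http://localhost:9200/_nodes/stats/os?filter_path=**.cgroup.memory.limit_in_bytes&pretty'
```

On my hosts that comes back as the literal string "max" rather than a number.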
The exact error I am seeing is below:
{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse field [elasticsearch.node.stats.os.cgroup.memory.limit.bytes] of type [long] in document with id 'XXXXXXXXX'. Preview of field's value: 'max'\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"For input string: \\\"max\\\"\"}}, dropping event!","service.name":"metricbeat","ecs.version":"1.6.0"}
My distro is Debian 11; I'm not sure if that makes any difference either.
I'm deploying via Ansible, so I don't have a straightforward Docker Compose file to provide you, but a subset of my Metricbeat config is below:
```yaml
metricbeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml # Default metricsets we want on everything, system and docker
    # Reload module configs as they change:
    reload.enabled: false

# This will use hints to find things based on label, as well as what I have below regarding Elastic Stack modules.
metricbeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      templates:
        # Replaces deprecated internal Elasticsearch monitoring - https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-elasticsearch.html
        - condition:
            contains:
              docker.container.image: elasticsearch
          config:
            - module: elasticsearch
              period: 10s
              hosts: ["redacted"]
              xpack.enabled: true
              enabled: true
              scope: node

output.elasticsearch:
  # output config goes here
```