I feel that using the co.elastic.logs/enabled: 'false' annotation to turn off log collection per namespace is not optimal. If there are a lot of namespaces, I have to add it to each one, one by one.
I think the error can be ignored in your case, but you can add a condition to ignore events without container ids, so the error doesn't happen.
Try this configuration:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            and:
              - has_fields: ['kubernetes.container.id']
              - or:
                  - equals:
                      kubernetes.namespace: "front"
                  - equals:
                      kubernetes.namespace: "back"
          config:
            - type: container
              paths:
                - "/var/log/containers/*-${data.kubernetes.container.id}.log"

setup.template.enabled: false

processors:
  - add_cloud_metadata:
  - add_host_metadata:

output.elasticsearch:
  hosts: ${ELASTICSEARCH_HOST}
Try with this to add node annotations:

add_resource_metadata:
  node:
    enabled: true
    include_annotations:
      - "someannotation"
      - "someotherannotation"
Take into account that with this approach you wouldn't need to change the configuration: you would only need to add the co.elastic.logs/enabled: 'true' annotation to the namespaces you want to collect logs from.
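For example, the annotation could be set in the namespace manifest like this (a sketch; the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: front   # illustrative namespace name
  annotations:
    # opt this namespace in to log collection
    co.elastic.logs/enabled: "true"
```

Equivalently, on an existing namespace: kubectl annotate namespace front co.elastic.logs/enabled=true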
I added has_fields: ['kubernetes.container.id'] but it is still reporting errors.
However, if you say that this error is not a big problem, let it be for the time being.
OK, that's easy.
Sorry, I didn't understand you.
I also need a field container.id, can you advice me, how to do that?
Thank you for your reply.
What do you mean? Filebeat should be adding the kubernetes.container.id
field to collected logs.
All of the Kubernetes metadata appears, but container.id does not.
Where should the kubernetes.container.id field be added?
Umm, is this container being stopped at this moment? Looking at the code the only case where it seems possible to have the container name but not its id is when the pod is being stopped: https://github.com/elastic/beats/blob/7fbbdca91b5cdfcb943ff7f7b7312219ae9986c0/libbeat/autodiscover/providers/kubernetes/pod.go#L339
If this is a normal running container this may be a bug. Are you missing the kubernetes.container.id
in all events?
Is there a solution to this problem?
This seems unexpected to me, could you try with a released version, like 7.9.0?
The missing container field also occurs with the APM agent (without Kubernetes metadata).
Filebeat 7.9.1 also does not generate container.id.
Hey @wajika,
I have tried to reproduce this and there seems to be actually some problem on Beats with the container ids. I have opened an issue in Github for further investigation: https://github.com/elastic/beats/issues/20982
Thanks!
okay, thank you
hello
@jsoriano
I found container.id after upgrading Filebeat to 7.9.2, but container.id still does not exist in Metricbeat 7.9.2. Will it be added in a future release?
Hey @wajika,
The container.id field should also be present in events created by Metricbeat 7.9.2.
One thing that might be happening is that your configuration is only matching the pod events, which don't contain information about specific containers (a pod can contain multiple containers, sharing the same network namespace). What does the autodiscover configuration you are using in Metricbeat look like?
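For comparison, a template whose condition uses a container-level field matches per-container events, which do carry the container id. A minimal sketch (the nginx module and container name are illustrative):

```yaml
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              # container-level field: matches events emitted per container
              kubernetes.container.name: "nginx"
          config:
            - module: nginx
              metricsets: ["stubstatus"]
              hosts: "${data.host}:80"
```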
Why does data from other namespaces appear?
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: true
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: back
                  - equals:
                      kubernetes.namespace: front
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-fields-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  fields.yml: |-
    - name: service
      type: group
      description: >
        kubernetes service metrics
      release: experimental
      fields:
        - name: name
          type: keyword
          description: Service name.
        - name: cluster_ip
          type: keyword
          description: Internal IP for the service.
        - name: external_name
          type: keyword
          description: Service external DNS name
        - name: external_ip
          type: keyword
          description: Service external IP
        - name: load_balancer_ip
          type: keyword
          description: Load Balancer service IP
        - name: type
          type: keyword
          description: Service type
        - name: ingress_ip
          type: keyword
          description: Ingress IP
        - name: ingress_hostname
          type: keyword
          description: Ingress Hostname
        - name: created
          type: date
          description: Service creation date
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
        - drop_event.when.regexp:
            system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      enabled: true
      host: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: metricbeat
          image: metricbeat:master-SNAPSHOT
          args: [
            "-c", "/etc/metricbeat.yml",
            "-e",
            "-system.hostfs=/hostfs",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: 192.168.10.145
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value:
            - name: ELASTICSEARCH_PASSWORD
              value:
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/metricbeat.yml
              readOnly: true
              subPath: metricbeat.yml
            - name: data
              mountPath: /usr/share/metricbeat/data
            - name: modules
              mountPath: /usr/share/metricbeat/modules.d
              readOnly: true
            - name: dockersock
              mountPath: /var/run/docker.sock
            - name: proc
              mountPath: /hostfs/proc
              readOnly: true
            - name: cgroup
              mountPath: /hostfs/sys/fs/cgroup
              readOnly: true
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
        - name: config
          configMap:
            defaultMode: 0600
            name: metricbeat-daemonset-config
        - name: modules
          configMap:
            defaultMode: 0600
            name: metricbeat-daemonset-modules
        - name: data
          hostPath:
            path: /var/lib/metricbeat-data
            type: DirectoryOrCreate
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: true
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: back
                  - equals:
                      kubernetes.namespace: front
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"
      - add_kubernetes_metadata:
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_statefulset
        - state_pod
        - state_container
        - state_cronjob
        - state_resourcequota
        - state_service
        - state_persistentvolume
        - state_persistentvolumeclaim
        - state_storageclass
        # Uncomment this to get k8s events:
        #- event
      period: 10s
      #add_metadata: true
      host: ${NODE_NAME}
      hosts: ["kube-state-metrics:8080"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: metricbeat
          image: metricbeat:master-SNAPSHOT
          args: [
            "-c", "/etc/metricbeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: 192.168.10.145
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value:
            - name: ELASTICSEARCH_PASSWORD
              value:
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/metricbeat.yml
              readOnly: true
              subPath: metricbeat.yml
            - name: fields-config
              mountPath: /usr/share/metricbeat/fields.yml
              readOnly: true
              subPath: fields.yml
            - name: modules
              mountPath: /usr/share/metricbeat/modules.d
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: metricbeat-deployment-config
        - name: fields-config
          configMap:
            defaultMode: 0600
            name: metricbeat-fields-config
        - name: modules
          configMap:
            defaultMode: 0600
            name: metricbeat-deployment-modules
Due to a mapping problem with some fields, I am temporarily using the master tag.
About container.id: I can only see it in the Kibana Metrics UI, but not in Discover.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.