Autodiscover hints.enabled logs all pods

Hey!

I'm trying to set up Kubernetes logging to Elasticsearch with Filebeat.

What I want to achieve:

  1. Ship all the logs from pods in the default namespace.
  2. Configure templates per pod type (see the rough sketch just below).
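
By "templates per pod type" I mean something roughly like this (just a sketch of what I'm aiming for; the label value, path, and multiline settings are made-up examples):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      templates:
        # hypothetical: one template per app label, each with its own options
        - condition:
            equals:
              kubernetes.labels.app: spring-boot-app
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              # e.g. join Java stack traces into a single event
              multiline.pattern: '^\d{4}-\d{2}-\d{2}'
              multiline.negate: true
              multiline.match: after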

My first attempt worked (it shipped logs from all pods). Then I decided to use autodiscover templates to limit it to only the pods from the default namespace. Here is my config:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      #hints.enabled: true
      templates:
        - condition:
            equals:
              kubernetes.namespace: default
          config:
            type: container
            paths:
              - /var/log/container/*-${data.kubernetes.container.id}.log

processors:
  - add_cloud_metadata:
  - add_host_metadata:

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}

Unfortunately, it doesn't log anything at all. Even though I have a pod in the default namespace, nothing is shipped to ES. If I uncomment hints.enabled, it logs everything (both the logs from my desired pod and Filebeat's own logs - Filebeat runs in the kube-system namespace).

Is this enough information to determine what the problem might be?

What I've been able to find out is that the container logs in my cluster are located in a different folder than the one I configured.

For example:
/var/lib/docker/containers/64b4e054eb0007977ec7124a326dbbd3c257c9e75ceb3b8c0d611e7851f10a4d/64b4e054eb0007977ec7124a326dbbd3c257c9e75ceb3b8c0d611e7851f10a4d-json.log
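
So if the path is the problem, I assume the template would have to point at that location instead. Maybe something like this (an untested guess; the glob pattern is my own assumption):

      templates:
        - condition:
            equals:
              kubernetes.namespace: default
          config:
            - type: container
              paths:
                # guess: match the *-json.log files where Docker actually writes them
                - /var/lib/docker/containers/${data.kubernetes.container.id}/*-json.log

I assume the Filebeat DaemonSet would also need /var/lib/docker/containers mounted as a hostPath volume for that path to be visible inside the pod.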

I have a newly created AWS EKS cluster (Kubernetes version 1.14) and newly started Elasticsearch and Kibana (version 7.4) running outside of the cluster.

In the cluster I've deployed this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-deployment
  annotations:
    co.elastic.logs/enabled: "false"
    # co.elastic.logs/module: logstash
spec:
  selector:
    matchLabels:
      app: spring-boot-app
  replicas: 3
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
      - name: spring-boot-app
        image: myimage
        ports:
        - containerPort: 8080
          name: server
        - containerPort: 8081
          name: management

My current ConfigMap for Filebeat (v7.4.0, running in the cluster):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # logging.level: debug
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true
    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
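
One thing I noticed in the docs is hints.default_config; if I understand it correctly, disabling the default config should mean that only pods opting in via co.elastic.logs annotations get picked up. Something like this, though I haven't tried it yet:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      hints.enabled: true
      # assumption: with the default config disabled, only pods annotated with
      # co.elastic.logs/* should be collected
      hints.default_config.enabled: false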

Even though I annotated my deployment with co.elastic.logs/enabled: false, Filebeat still picks it up and ships its logs to my ES instance.
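
Could it be that the annotation has to go on the pod template rather than on the Deployment object itself? That's my next guess, something like this (untested):

spec:
  selector:
    matchLabels:
      app: spring-boot-app
  replicas: 3
  template:
    metadata:
      labels:
        app: spring-boot-app
      annotations:
        # guess: the hints provider reads pod annotations, and the value has to be a quoted string
        co.elastic.logs/enabled: "false"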
