Issues pushing Kubernetes ingress-nginx logs using Filebeat DaemonSet pods

Hello all,

We're facing issues pushing Kubernetes ingress-nginx logs to ELK using Filebeat DaemonSet pods, with Filebeat v6.2.4, 6.3.0, and 6.4.0.

Please let us know whether the config below is correct for pushing only the ingress-nginx pods' logs to ELK, rather than the pods of every namespace.

filebeat.yml:
filebeat.config:
  prospectors:
    # Mounted `filebeat-prospectors` configmap:
    path: ${path.config}/prospectors.d/*.yml
    # Reload prospectors configs as they change:
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: false
  filebeat.autodiscover:
    providers:
     - type: docker
       templates:
         - condition:
             equals:
               docker.container.labels.io.kubernetes.container.name: "nginx-ingress-controller"
          config:
            - module: nginx
              access:
                input:
                  type: docker
                  containers.stream: stdout
                  containers.ids:
                    - "${data.docker.container.id}"
              error:
                input:
                  type: docker
                  containers.stream: stderr
                  containers.ids:
                    - "${data.docker.container.id}"
  processors:
   - drop_event:
      when.not:
        or:
          - contains:
               docker.container.labels.io.kubernetes.container.name: "nginx-ingress-controller"

  processors:
    - add_cloud_metadata:

  cloud.id: ${ELASTIC_CLOUD_ID}
  cloud.auth: ${ELASTIC_CLOUD_AUTH}

  output:
    logstash:
      hosts: ["elk.lktest1.com:5044"]

prospectors file:

kubernetes.yml:
- type: docker
  containers:
    ids: "*"
    path: /var/lib/docker/containers
  fields:
    type: k8s_nginx_ingress_ctrls
  registry_file: "/var/lib/filebeat/registry"
  multiline:
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        namespace: ingress-nginx

We're using a DaemonSet to load the above two ConfigMap files.

The errors we're getting from the Filebeat pods:

Exiting: error loading config file: yaml: line 17: did not find expected '-' indicator

Hi!

It seems your configuration file cannot be parsed successfully. Could you try to fix the indentation in the config part of filebeat.autodiscover?

See https://codebeautify.org/yaml-validator/cb421e58
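
For reference, a minimal sketch of how the file could be laid out once the indentation is fixed, with filebeat.autodiscover, processors, and the output promoted to the top level instead of being nested under filebeat.config (the provider templates themselves stay as in your config):

filebeat.config:
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            equals:
              docker.container.labels.io.kubernetes.container.name: "nginx-ingress-controller"
          config:
            - module: nginx
              # access/error inputs exactly as in your original template

processors:
  - add_cloud_metadata:

cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}

output.logstash:
  hosts: ["elk.lktest1.com:5044"]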

Hi Chris,

Thanks for your reply!!
The indentation issues are resolved.
Could you please share the filebeat.autodiscover config to match only kubernetes.container.name: "nginx-ingress-controller",
so that only the ingress-nginx pods' logs are pushed to ELK for viewing and organizing?

I tried https://gist.github.com/tkuther/b432827283f360293a87b5be49594f91,
but it didn't help.

Thank you, Siva.

Hi,

What about something like the config below?

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.container.image: "nginx-ingress-controller"

https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html#_kubernetes
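
One thing to double-check: kubernetes.container.image usually carries the full registry path and tag, so an exact equals match on "nginx-ingress-controller" may never fire; a contains condition on the image, or an equals condition on kubernetes.container.name, is a safer variant. A sketch with the nginx module attached:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            contains:
              kubernetes.container.image: "nginx-ingress-controller"
          config:
            - module: nginx
              access:
                input:
                  type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"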

@ChrsMark,

The pods are running fine except one, and after some time we're unable to find the logs in ELK, which is weird!

I'll share a Zoom meeting link if you're willing to join, so we can wrap up this issue in a few minutes. If so, please drop a test mail to me at siva.krishna@lendingkart.com.

Hey,

let's keep this conversation in this thread for traceability and other community users' benefit.

In this regard, it would be helpful to provide details of your setup and any logs of the failing pod so as to isolate the problem.

Thank you!

Sure, thanks @ChrsMark!!

We suspect the issue is with filebeat-config-ingress; the pods are now in an error-syncing state.
Please check all our deployments below and let us know how to push only the nginx-ingress-controller pods' logs to our ELK.
The DaemonSet we're using is:

{
  "kind": "DaemonSet",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "filebeat-ingress",
    "namespace": "logging",
    "labels": {
      "k8s-app": "filebeat",
      "kubernetes.io/cluster-service": "true"
    },
    "finalizers": [
      "foregroundDeletion"
    ]
  },
  "spec": {
    "selector": {
      "matchLabels": {
        "k8s-app": "filebeat",
        "kubernetes.io/cluster-service": "true"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "k8s-app": "filebeat",
          "kubernetes.io/cluster-service": "true"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "config",
            "configMap": {
              "name": "filebeat-config-ingress",
              "defaultMode": 384
            }
          },
          {
            "name": "varlibdockercontainers",
            "hostPath": {
              "path": "/var/lib/docker/containers",
              "type": ""
            }
          },
          {
            "name": "logforwarderssl",
            "secret": {
              "secretName": "logforwarderssl",
              "defaultMode": 384
            }
          },
          {
            "name": "prospectors",
            "configMap": {
              "name": "filebeat-prospectors-ingress",
              "defaultMode": 384
            }
          },
          {
            "name": "data",
            "emptyDir": {}
          }
        ],
        "containers": [
          {
            "name": "filebeat",
            "image": "docker.elastic.co/beats/filebeat:6.3.0",
            "args": [
              "-c",
              "/etc/filebeat.yml",
              "-e"
            ],
            "env": [
              {
                "name": "ELASTICSEARCH_HOST",
                "value": "testelk.lk.com"
              },
              {
                "name": "ELASTIC_CLOUD_ID"
              },
              {
                "name": "ELASTIC_CLOUD_AUTH"
              },
              {
                "name": "POD_NAMESPACE",
                "valueFrom": {
                  "fieldRef": {
                    "apiVersion": "v1",
                    "fieldPath": "metadata.namespace"
                  }
                }
              }
            ],
            "resources": {
              "limits": {
                "memory": "200Mi"
              },
              "requests": {
                "cpu": "100m",
                "memory": "100Mi"
              }
            },
            "volumeMounts": [
              {
                "name": "config",
                "readOnly": true,
                "mountPath": "/etc/filebeat.yml",
                "subPath": "filebeat.yml"
              },
              {
                "name": "prospectors",
                "readOnly": true,
                "mountPath": "/usr/share/filebeat/prospectors.d"
              },
              {
                "name": "data",
                "mountPath": "/usr/share/filebeat/data"
              },
              {
                "name": "varlibdockercontainers",
                "readOnly": true,
                "mountPath": "/var/lib/docker/containers"
              },
              {
                "name": "logforwarderssl",
                "mountPath": "/certs/logforwarderssl"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
              "runAsUser": 0
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "serviceAccountName": "filebeat",
        "serviceAccount": "filebeat",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "updateStrategy": {
      "type": "OnDelete"
    },
    "templateGeneration": 6,
    "revisionHistoryLimit": 10
  }
}

The two ConfigMaps we're using are:
filebeat-config-ingress:

filebeat.config:
 prospectors:
   # Mounted `filebeat-prospectors` configmap:
   path: ${path.config}/prospectors.d/*.yml
   # Reload prospectors configs as they change:
   reload.enabled: false
 modules:
   path: ${path.config}/modules.d/*.yml
   # Reload module configs as they change:
   reload.enabled: false
filebeat.autodiscover:
 providers:
   - type: kubernetes
     templates:
      - condition:
         equals:
           kubernetes.container.name: "nginx-ingress-controller"
        config:
           - module: nginx
             access:
               input:
                 type: docker
                 containers.ids:
                   - "${data.kubernetes.container.id}"
processors:
   - add_cloud_metadata:
   - add_docker_metadata:
   - add_kubernetes_metadata:

cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}

output:
   logstash:
     hosts: ["testelk.lk.com:5044"]

filebeat-prospectors-ingress:

kubernetes.yml:
- type: docker
  containers:
    ids: "*"
    path: /var/lib/docker/containers
  fields:
    type: k8s_nginx_ingress_ctrls
  registry_file: "/var/lib/filebeat/registry"
  multiline:
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        namespace: ${POD_NAMESPACE}

The last error logs of the pod are given below:

2019-11-13T07:13:15.323Z	INFO	add_cloud_metadata/add_cloud_metadata.go:301	add_cloud_metadata: hosting provider type detected as ec2, metadata={"availability_zone":"ap-south-1b","instance_id":"i-0d5d5bba0be1e449e","machine_type":"r3.xlarge","provider":"ec2","region":"ap-south-1"}
2019-11-13T07:13:15.324Z	INFO	instance/beat.go:275	filebeat stopped.
2019-11-13T07:13:15.324Z	ERROR	instance/beat.go:691	Exiting: error initializing publisher: error initializing processors: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Exiting: error initializing publisher: error initializing processors: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Hi! Thank you for the feedback!

From the error you included, the problem seems to be the add_docker_metadata processor. Since you are running Filebeat inside a pod, I assume it cannot reach the Docker daemon. Could you try removing add_docker_metadata from Filebeat's config?
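
For instance, a trimmed processors section could look like this (just a sketch, keeping only the cloud and Kubernetes metadata processors):

processors:
  - add_cloud_metadata:
  - add_kubernetes_metadata: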

Thanks!

Hi Chris,

I appreciate your patience in replying. :slight_smile:
The pods are running without errors after removing add_docker_metadata from the ConfigMap.

Our main aim is to push only the nginx-ingress-controller container logs to ELK, but that is not happening. Could you please help us achieve that, so we can close the ticket?

We found some indexing issues in the Logstash log file; digging further.

[2019-11-13T10:38:10,910][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.11.13", :_type=>"doc", :_routing=>nil}, 2019-11-13T10:38:07.552Z {name=filebeat-ingress-nwg5s} 2019-11-13 10:38:07.552 INFO [pool-9-thread-1] [,,,,,,] com.abc.admin.abc.schedulers.TaskScheduler 

[2019-11-13T10:38:10,910][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.11.13", :_type=>"doc", :_routing=>nil}, 2019-11-13T10:38:07.553Z {name=filebeat-ingress-nwg5s} 2019-11-13 10:38:07.553 ERROR [pool-9-thread-1] [,,,,,,]

[2019-11-13T12:52:49,390][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.11.12", :_type=>"doc", :_routing=>nil}, 2019-11-12T12:16:03.089Z filebeat-ingress-t6z2z 2019-11-12 12:16:03.089  INFO [http-nio-9025-exec-205] [,,,,,,] org.crazycake.shiro.: Reading session with id : 3b973c3c-3a53-4681-a40b-527054297023 session: org.apache.shiro.session.mgt.SimpleSession,id=3b973-4681-a40b-527054297023], :response=>{"index"=>{"_index"=>"filebeat-2019.11.12", "_type"=>"doc", "_id"=>"AW5k0tZQvgXvqyg", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}
[2019-11-13T12:52:49,390][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.11.12", :_type=>"doc", :_routing=>nil}, 2019-11-12T12:16:03.089Z filebeat-ingress-t6z2z 2019-11-12 12:16:03.089  INFO [http-nio-9025-exec-188] [,,,,,,] org.craze.ro.: Reading session with id : 3b973c3c-3a53-4681-a297023 session: org.apache.shiro.session.mgt.SimpleSession,id=3b973c3c-3a53-4681-a40b-527054297023], :response=>{"index"=>{"_index"=>"filebeat-2019.11.12", "_type"=>"doc", "_id"=>"AW5k0tZQvpwvqyh", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

hi @Siva_Krishna,

Are you using add_host_metadata? It looks like it's clashing with some other system sending the host field.
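
If the clash is between events that carry host as a plain string and events that carry it as an object (Beats 6.3 started sending a host object, which would be consistent with mixing the Filebeat versions you listed, 6.2.4 through 6.4.0), one possible workaround on the Logstash side is to normalize the string form before indexing. A sketch, assuming your pipeline has a filter section:

filter {
  # Hypothetical normalization: only rename when host exists but is not already an object with a name key.
  if [host] and ![host][name] {
    mutate {
      rename => { "host" => "[host][name]" }
    }
  }
}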

Hi @exekias

Nope, we're using only the two processors shown below. Logs are not displayed at all when we use the Filebeat v6.3.0 image; with the v6.2.4 image we could see some ingress logs.
To make the issues we're facing clear:

  1. We want to push only the nginx-ingress-controller pods' logs, but our Filebeat ConfigMaps push the logs of all namespaces' pods through the grok pattern, which is unnecessary.
  2. Not all ingress-nginx pod logs are displayed in Kibana.
  3. We're facing some index-related issues; the error logs were shared earlier.

We're using the following ELK versions:

logstash 5.6.16

sudo /usr/share/kibana/bin/kibana --version
5.6.16

$ /usr/share/elasticsearch/bin/elasticsearch --version
Version: 5.6.16, Build: 3a740d1/2019-03-13T15:33:36.565Z, JVM: 1.8.0_73

processors:
   - add_cloud_metadata:
   - add_kubernetes_metadata:

Kindly help us.

@ChrsMark and Team,

Could you please suggest a fix here?

Hey!

It is not clear what the failing cause is. The last output you provided is from Logstash; however, could we focus on Filebeat's side first?

In this regard, could you please check and/or share the output of Filebeat's pod, so as to confirm whether Filebeat is able to parse the ingress-nginx logs and, if not, what the reason is?

Thanks!

hey @ChrsMark,

I've resolved this issue using the prospectors ConfigMap config shared below.

Earlier we were not adding these filters to its processors; we were adding them in the filebeat-config-ingress ConfigMap, which was wrong.

The correct ConfigMap should be as below:

kubernetes.yml:
- type: docker
  containers:
    ids: "*"
    path: /var/lib/docker/containers
  fields:
    type: k8s_nginx_ingress_ctrls
  registry_file: "/var/lib/filebeat/registry"
  multiline:
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        namespace: ${POD_NAMESPACE}
    - drop_event:
       when:
         not:
           regexp:
              kubernetes.container.name: "nginx-ingress-controller"
    - drop_event:
       when:
         not:
           regexp:
              kubernetes.container.name: "nginx-ingress-controller-internal"
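
A note on the two drop_event processors above: each uses when.not with a different container name, so an event from the plain nginx-ingress-controller container passes the first check but is dropped by the second one (its name does not match the -internal regexp). If logs from both controllers should be kept, a single drop_event with an or condition is a possible sketch (the first regexp already matches the -internal name as a substring, but both are listed for clarity):

  processors:
    - drop_event:
        when:
          not:
            or:
              - regexp:
                  kubernetes.container.name: "nginx-ingress-controller"
              - regexp:
                  kubernetes.container.name: "nginx-ingress-controller-internal"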

The index creation issue could be because of the higher Filebeat image version in the DaemonSet config. It was fixed when I used the standard v6.2.4 image.

Many thanks @ChrsMark & Team. :slight_smile:

Hi @ChrsMark and Team,
We're facing an issue with Logstash; the error log message is given below.

[2019-11-27T14:20:04,004][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.11.27", :_type=>"doc", :_routing=>nil}, 2019-11-27T14:00:51.695Z filebeat-ingress-eks-faafd %{message}], :response=>{"index"=>{"_index"=>"filebeat-2019.11.27", "_type"=>"doc", "_id"=>"fsdfa", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Could not dynamically add mapping for field [app.kubernetes.io/name]. Existing mapping for [kubernetes.labels.app] must be of type object but found [keyword]."}}}}

Could you please let us know how we can fix it?

We're using the same prospectors config shared in my previous reply.

Kindly also let us know the root cause behind the error below from the Filebeat pod logs; Filebeat pushes a few log lines of the ingress controller, but not all of them.

2019-11-27T14:30:39.589Z	ERROR	logstash/async.go:235	Failed to publish events caused by: write tcp 100.111.184.225:46478->elk-IP:5044: write: connection reset by peer

Hey!

Looking at the Logstash output provided, it seems there is a known mapping issue here related to dots in label names. To fix it, you should remove your old index/mapping first and then add labels.dedot: true to the add_kubernetes_metadata processor configuration.
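
A sketch of where the option sits, based on the prospector config you shared:

processors:
  - add_kubernetes_metadata:
      in_cluster: true
      labels.dedot: true
      namespace: ${POD_NAMESPACE}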

Thanks, Chris.
We've added the given config and will update you next week if there are any issues.

Hi @ChrsMark & Team, hope you're doing well!

We've deleted today's Filebeat index using the command below, but the issue still persists.
We added labels.dedot: true to the add_kubernetes_metadata processor configuration and restarted the Logstash service as well.

Could you please help us fix it?

]# curl -XDELETE localhost:9200/filebeat-2019.12.23
{"acknowledged":true}
[root@ip-host ~]# 

Below are the Logstash error messages for the logs.

35.154.200.180 - [35.154.200.180] - - [23/Dec/2019:07:38:10 +0000] "POST /java/path HTTP/1.0" 503 203 "-" "Apache-HttpClient/v4 (Java/-)" 549 0.001 [podname-80] [] - - - - b853b79bdcd6c2786911d6376b2b53c0], :response=>{"index"=>{"_index"=>"filebeat-2019.12.23", "_type"=>"doc", "_id"=>"-W3Pq7Jjc3C", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Could not dynamically add mapping for field [app.kubernetes.io/part-of]. Existing mapping for [kubernetes.labels.app] must be of type object but found [keyword]."}

[2019-12-23T08:51:35,991][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.12.23", :_type=>"doc", :_routing=>nil}, 2019-12-23T08:51:27.564Z filebeat-ingress-eks-k2n29 E1223 08:51:27.563972       6 leaderelection.go:359] Failed to update lock: configmaps "ingress-controller-leader-internal-nginx-internal" is forbidden: User "system:serviceaccount:ingress-internal:nginx-ingress-serviceaccount-internal" cannot update resource "configmaps" in API group "" in the namespace "ingress-internal"], :response=>{"index"=>{"_index"=>"filebeat-2019.12.23", "_type"=>"doc", "_id"=>"-W3Pq7JpZAo", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Could not dynamically add mapping for field [app.kubernetes.io/part-of]. Existing mapping for [kubernetes.labels.app] must be of type object but found [keyword]."}}}}

The main part of the prospectors file:

kubernetes.yml:
- type: docker
  containers:
    ids: "*"
    path: /var/lib/docker/containers
  fields:
    type: k8s_xxx
  registry_file: "/var/lib/filebeat/registry"
  multiline:
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        labels.dedot: true
        namespace: ${POD_NAMESPACE}
    - drop_event:
       when:
         not:
           regexp:
              kubernetes.container.name: "nginx-ingress-controller"

Hi!

Does this pod have any annotations? Could you try what is mentioned here? This is supposed to solve the problem.
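
For example, assuming the suggestion is to dedot the annotations the same way as the labels (the annotations.dedot option sits next to labels.dedot in add_kubernetes_metadata, though it may require a newer Filebeat than 6.2.4), a sketch:

    - add_kubernetes_metadata:
        in_cluster: true
        labels.dedot: true
        annotations.dedot: true
        namespace: ${POD_NAMESPACE}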

Let us know!