Filebeat + Kubernetes + ingress-nginx

Versions:
Filebeat 6.6.1
ELK 6.6.1
Kubernetes 1.11.1
ingress-nginx 0.23.1

Greetings! I am new to Filebeat and Kubernetes. My goal is to harvest the ingress-nginx access and error logs and ship them to Logstash with Filebeat.
Here is the problem I could not figure out.

First, the current setup:

  • I installed Filebeat as a DaemonSet
  • Configured autodiscover for the ingress pods
  • Filebeat does send data to Logstash, and each event contains some metadata and the ingress log path, but not the log line itself (not even part of it). A JSON example is below:
{
  "_index": "logstash-prod-filebeat-2019.04.18",
  "_type": "doc",
  "_id": "fEQJMWoBjex8WhY7U6aC",
  "_version": 1,
  "_score": null,
  "_source": {
    "source": "/var/log/containers/nginx-ingress-controller-96cf4b9bc-gqb5f_ingress-nginx_nginx-ingress-controller-d13191e498f76a8a9fba7b79951a837945acc6b839eea15d55a69404278e4260.log",
    "kubernetes": {
      "namespace": "ingress-nginx",
      "container": {
        "name": "nginx-ingress-controller"
      },
      "node": {
        "name": "kubmst-06"
      },
      "replicaset": {
        "name": "nginx-ingress-controller-96cf4b9bc"
      },
      "labels": {
        "pod-template-hash": "527906567",
        "app": "ingress-nginx"
      },
      "pod": {
        "name": "nginx-ingress-controller-96cf4b9bc-gqb5f",
        "uid": "1ce9c3bb-4fcb-11e9-9c8a-0050562e0160"
      }
    },
    "input": {
      "type": "log"
    },
    "log": {
      "file": {
        "path": "/var/log/containers/nginx-ingress-controller-96cf4b9bc-gqb5f_ingress-nginx_nginx-ingress-controller-d13191e498f76a8a9fba7b79951a837945acc6b839eea15d55a69404278e4260.log"
      }
    },
    "host": {
      "name": "filebeat-dynamic-f8ktd"
    },
    "offset": 6325657,
    "@version": "1",
    "stream": "stdout",
    "prospector": {
      "type": "log"
    },
    "@timestamp": "2019-04-18T15:21:05.751Z",
    "beat": {
      "version": "6.6.1",
      "name": "filebeat-dynamic-f8ktd",
      "hostname": "filebeat-dynamic-f8ktd"
    },
    "tags": [
      "beats_input_raw_event"
    ],
    "time": "2019-04-18T15:21:01.526131784Z"
  },
  "fields": {
    "@timestamp": [
      "2019-04-18T15:21:05.751Z"
    ],
    "time": [
      "2019-04-18T15:21:01.526Z"
    ]
  },
  "sort": [
    1555600865751
  ]
}

P.S. Log entries keep flowing into Kibana with every new row in the nginx log, so the files are being tailed.

P.P.S. I tried to set up filtering, including a common grok filter with GREEDYDATA; nothing helps, and the entries shown in Kibana still contain only metadata, no log line itself.

My sample configuration is below:

filebeat.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-dynamic-config
  namespace: kube-system
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat:
      config:
        prospectors:
          path: ${path.config}/prospectors.d/*.yml
          reload.enabled: false
        modules:
          path: ${path.config}/modules.d/*.yml
          reload.enabled: false

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true

    processors:
      - add_kubernetes_metadata:
          in_cluster: true

    output.logstash:
      hosts: ["ip-address:5001"]
      bulk_max_size: 2048
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat-dynamic
  namespace: kube-system
  labels:
    k8s-app: filebeat-dynamic
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat-dynamic
        kubernetes.io/cluster-service: "true"
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      serviceAccountName: filebeat-dynamic
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat-dynamic
        image: 10.40.94.134:80/beats/filebeat:6.6.1
        imagePullPolicy: Always
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-dynamic-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat-dynamic
subjects:
- kind: ServiceAccount
  name: filebeat-dynamic
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat-dynamic
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat-dynamic
  labels:
    k8s-app: filebeat-dynamic
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat-dynamic
  namespace: kube-system
  labels:
    k8s-app: filebeat-dynamic
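Since hints.enabled is true in the autodiscover provider above, parsing can also be steered per pod with the standard co.elastic.logs/* hint annotations. A sketch of how the ingress controller's pod template could be annotated (the module/fileset mapping is an assumption for illustration, not something verified in my setup):

```yaml
# Sketch: hint annotations on the ingress-nginx controller pod template so that
# Filebeat's hints-based autodiscover parses the container output with the
# nginx module. The stdout/stderr-to-fileset mapping below is an assumption.
spec:
  template:
    metadata:
      annotations:
        co.elastic.logs/module: nginx
        co.elastic.logs/fileset.stdout: access
        co.elastic.logs/fileset.stderr: error
```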

Logstash config:

input {
  beats {
    port => 5001
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "logstash-prod-filebeat-%{+YYYY.MM.dd}"
  }
}
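For reference, the GREEDYDATA filter I mentioned above was of roughly this shape (the nginx_log target field name is illustrative). Note that it matches against the message field, which is exactly the field that never arrives:

```
# Sketch of the catch-all grok filter tried earlier; "nginx_log" is an
# illustrative field name. It cannot help while "message" is absent.
filter {
  if [kubernetes][container][name] == "nginx-ingress-controller" {
    grok {
      match => { "message" => "%{GREEDYDATA:nginx_log}" }
    }
  }
}
```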

Hi @concretefairy

Can you configure the Filebeat ConfigMap to send events to the console only, temporarily?

output.console:
  pretty: true

And tell us whether the logs appear in the Filebeat pod that runs on the same node as nginx.
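To see those console events, you can tail the Filebeat DaemonSet pod on that node, for example (`<pod-name>` is a placeholder for the pod found by the first command):

```shell
# Find the Filebeat DaemonSet pod running on the ingress-nginx node,
# then tail its output; <pod-name> is a placeholder.
kubectl get pods -n kube-system -o wide | grep filebeat-dynamic
kubectl logs -n kube-system -f <pod-name>
```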

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.