Can't retrieve Nginx logs with Elastic Stack on ECK

Hello,

As a test, I want to retrieve access.log and error.log from an Nginx server with:

Filebeat ==> Logstash ==> Elasticsearch ==> Kibana

Here's the situation:

  • Windows 10 PRO with WSL1/Ubuntu18.04/Terminator and Docker Desktop

  • Everything works under ECK. With the following files, all pods are running 1/1.

  • To generate logs with Nginx, I just press F5 or Ctrl+F5 on the Welcome to nginx page.

  • The autodiscover configuration I propose in the filebeat.yaml file is not reliable, and I can't see where the problem comes from. Depending on the changes I make to filebeat.yaml, I retrieve, at best, data from the beats namespace (from Elasticsearch, Filebeat itself, and so on), but never access.log or error.log data from Nginx. With the following files, here is what I get when I check my Elasticsearch indices:

      $▶ curl -k https://localhost:9200/_cat/indices
      green open .kibana-event-log-7.8.0-000001 ScLrz5y5RTifotD2QtY3pQ 1 0  1 0   5.3kb   5.3kb
      green open .security-7                    WboPPeRSQ4ulU_UQH0PfCw 1 0 37 0 125.5kb 125.5kb
      green open .apm-custom-link               CYNj4646QjaBYmtha7Rtqw 1 0  0 0    208b    208b
      green open .kibana_task_manager_1         -gCCmjnJQHKaWfni1GjyNg 1 0  5 0    47kb    47kb
      green open .apm-agent-configuration       ndyJ7ivmTWG4WL-vBVL4eg 1 0  0 0    208b    208b
      green open .kibana_1                      c8WZce4IRWqn1WNKwwoBfA 1 0  4 0  31.4kb  31.4kb
    
  • Sometimes, data with the nginx_test tag is found in Kibana but never the error or access tags.

  • If it helps, here's what I get when I check the state of the Kubernetes objects after starting the stack:

      $▶ sh check_all.sh 
      ----- Statefulsets -----
      NAME                             READY   AGE
      elasticsearch-es-elasticsearch   1/1     5m27s
    
      ----- Deployments -----
      NAME        READY   UP-TO-DATE   AVAILABLE   AGE
      kibana-kb   1/1     1            1           5m28s
      my-nginx    1/1     1            1           5m29s
    
      ----- Config Map -----
      NAME                             DATA   AGE
      elasticsearch-es-scripts         3      5m30s
      elasticsearch-es-unicast-hosts   1      5m28s
      filebeat-config                  1      5m31s
      logstash-configmap               2      5m30s
    
      ----- Services -----
      NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
      elasticsearch-es-elasticsearch   ClusterIP      None             <none>        <none>               5m28s
      elasticsearch-es-http            ClusterIP      10.105.232.75    <none>        9200/TCP             5m31s
      elasticsearch-es-transport       ClusterIP      None             <none>        9300/TCP             5m31s
      kibana-kb-http                   LoadBalancer   10.105.225.226   localhost     5601:32721/TCP       5m30s
      logstash                         ClusterIP      10.101.229.220   <none>        25826/TCP,5044/TCP   5m31s
      my-nginx                         LoadBalancer   10.102.102.119   localhost     80:31618/TCP         5m30s
    
      ----- Daemon Set -----
      NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
      filebeat   1         1         1       1            1           <none>          5m32s
    
      ----- Pods -----
      NAME                               READY   STATUS    RESTARTS   AGE
      elasticsearch-es-elasticsearch-0   1/1     Running   0          5m28s
      filebeat-tmh7l                     1/1     Running   0          5m31s
      kibana-kb-f84d496df-kclsh          1/1     Running   0          5m29s
      logstash                           1/1     Running   0          5m31s
      my-nginx-ff88c49d-nbp72            1/1     Running   0          5m30s
    
      ----- Storage Class -----
      NAME                 PROVISIONER                    AGE
      es-data              kubernetes.io/no-provisioner   5m30s
      hostpath (default)   docker.io/hostpath             17d
      nginx-data           kubernetes.io/no-provisioner   5m30s
    
      ----- Volumes -----
      NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                       STORAGECLASS   REASON   AGE
      es-data-pv      5Gi        RWO            Retain           Bound    beats/elasticsearch-data-elasticsearch-es-elasticsearch-0   es-data                 5m30s
      nginx-data-pv   5Gi        RWO            Retain           Bound    beats/nginx-data-pvc                                                                5m30s
    
      ----- PVC -----
      NAME                                                  STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      elasticsearch-data-elasticsearch-es-elasticsearch-0   Bound    es-data-pv      5Gi        RWO            es-data        5m28s
      nginx-data-pvc                                        Bound    nginx-data-pv   5Gi        RWO                           5m30s
    
      ----- PW -----
      PW_FOR_USING_KIBANA
    

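As a side check on the indices above: since the Logstash pipeline writes Nginx events to nginx_test-* indices, a quick count of tagged documents tells you whether anything from Nginx ever reached Elasticsearch. A sketch for Kibana Dev Tools (the index pattern is taken from my logstash.conf below; adjust to your own naming):

```console
# Count documents carrying the "access" tag in the Logstash-created indices
POST nginx_test-*/_count
{
  "query": { "term": { "tags": "access" } }
}
```

If the count stays at 0 while documents with the nginx_test tag do appear, events are flowing through Logstash but the Nginx module inputs never produce anything.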
...

The rest of my message

Here are the files I use:

nginx.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: beats
  labels:
    app: my-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: my-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: beats
spec:
  selector:
    matchLabels:
      app: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          ports:
          - containerPort: 80
          volumeMounts:
            - mountPath: "/var/log/nginx"
              name: nginx-data
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-data-pvc

filebeat.yaml:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: beats
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-

    tags: ["nginx_test"]

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true

          templates:
            - conditions.and:
                - equals.kubernetes.pod.name: nginx
                - contains.kubernetes.namespace: beats
              config:
                - module: nginx
                  access:
                    enabled: true
                    var.paths: ["/c/PATH/TO/PERSISTENT/VOLUME/nginx-data/access.log"]
                    subPath: access.log
                    tags: ["access"]

                  error:
                    enabled: true
                    var.paths: ["/c/PATH/TO/PERSISTENT/VOLUME/nginx-data/error.log"]
                    subPath: error.log
                    tags: ["error"]

    processors:

      - add_cloud_metadata:
      - add_kubernetes_metadata:
      - add_host_metadata:
      - add_docker_metadata:

    output.logstash:
      hosts: ["logstash:5044"]

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: beats
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.8.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: elasticsearch-es-http
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: elastic
                  name: elasticsearch-es-elastic-user
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true

      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: beats
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: beats
  labels:
    k8s-app: filebeat
---
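A side note on the autodiscover template above: kubernetes.pod.name carries the full generated pod name (my-nginx-ff88c49d-nbp72 in the output earlier), so an exact equals match on nginx can never fire, and subPath is a Kubernetes volumeMount field, not a Filebeat module option. A condition on the container name is usually more robust. The sketch below is an assumption, not a tested fix: the /var/log/nginx-data paths are hypothetical mount points, since Filebeat resolves var.paths inside its own container, which would mean also mounting the nginx-data volume into the Filebeat DaemonSet rather than pointing at a Windows host path:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      templates:
        - condition:
            contains:
              kubernetes.container.name: my-nginx
          config:
            - module: nginx
              access:
                enabled: true
                # hypothetical mount point of the nginx-data volume inside the Filebeat pod
                var.paths: ["/var/log/nginx-data/access.log"]
                tags: ["access"]
              error:
                enabled: true
                var.paths: ["/var/log/nginx-data/error.log"]
                tags: ["error"]
```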

logstash.yaml:

---
apiVersion: v1
kind: Service
metadata:
  namespace: beats
  labels:
    app: logstash
  name: logstash
spec:
  ports:
    - name: "25826"
      port: 25826
      targetPort: 25826
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash
status:
  loadBalancer: {}

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: beats
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }

    filter {
    }

    output {
      if "nginx_test" in [tags] {
        elasticsearch {
          index => "nginx_test-%{[@metadata][beat]}-%{+YYYY.MM.dd-H.m}"
          hosts => [ "${ES_HOSTS}" ]
          user => "${ES_USER}"
          password => "${ES_PASSWORD}"
          cacert => '/etc/logstash/certificates/ca.crt'
        }
      }
    }

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: logstash
  name: logstash
  namespace: beats
spec:
  containers:
    - image: docker.elastic.co/logstash/logstash:7.8.0
      name: logstash
      ports:
        - containerPort: 25826
        - containerPort: 5044
      env:
        - name: ES_HOSTS
          value: "https://elasticsearch-es-http:9200"
        - name: ES_USER
          value: "elastic"
        - name: ES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-es-elastic-user
              key: elastic
      resources: {}
      volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        - name: cert-ca
          mountPath: "/etc/logstash/certificates"
          readOnly: true
  restartPolicy: OnFailure
  volumes:
    - name: config-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.yml
            path: logstash.yml
    - name: logstash-pipeline-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.conf
            path: logstash.conf
    - name: cert-ca
      secret:
        secretName: elasticsearch-es-http-certs-public
status: {}
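To check whether events from Filebeat reach Logstash at all, a temporary stdout output in the pipeline above is the quickest probe (debugging sketch only; remove it once done):

```conf
output {
  # temporary, for debugging: print every incoming event to the pod's stdout,
  # readable with `kubectl logs -n beats logstash -f`
  stdout { codec => rubydebug }
}
```

If nothing prints when the Nginx page is refreshed, the problem is upstream in Filebeat rather than in the Logstash-to-Elasticsearch leg.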

elasticsearch.yaml:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: beats
spec:
  version: 7.8.0

  nodeSets:
    - name: elasticsearch
      count: 1
      config:
        node.store.allow_mmap: false
        node.master: true
        node.data: true
        node.ingest: true
        xpack.security.authc:
          anonymous:
            username: anonymous
            roles: superuser
            authz_exception: false
      podTemplate:
        metadata:
          labels:
            app: elasticsearch
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          containers:
            - name: elasticsearch
              resources:
                requests:
                  memory: 4Gi
                  cpu: 0.5
                limits:
                  memory: 4Gi
                  cpu: 1
              env:
                - name: ES_JAVA_OPTS
                  value: "-Xms2g -Xmx2g"
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            storageClassName: es-data
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi

kibana.yaml:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: beats
spec:
  version: 7.8.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      spec:
        type: LoadBalancer

volume.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: es-data
  namespace: beats
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nginx-data
  namespace: beats
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv
  namespace: beats
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: es-data
  hostPath:
    path: /c/PATH/TO/es-data

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-data-pv
  namespace: beats
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  #storageClassName: nginx-data
  storageClassName: ""
  hostPath:
    path: /c/PATH/TO/nginx-data

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data-pvc
  namespace: beats
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: nginx-data-pv

Don't hesitate to ask me for more information and if you have any ideas to unblock me, thank you in advance!

Guillaume.

First of all, you should make sure that Filebeat picks up these logs and forwards them on correctly. Did you check this stage?

Hello Marcin and thank you for your answer,

What is certain is that the Nginx logs do not get through (I can't find the error and access tags in Kibana). For the rest, it varies... In fact, yesterday I was retrieving data from Elasticsearch, Filebeat, Kibana, and so on from the beats namespace, but nothing from Nginx. This morning, after updating Docker Desktop and rebooting the PC, I get nothing at all with the same code snippets (this is why I said my configuration is not reliable).

As a result, I tried with something like this instead of the autodiscover feature:

filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

In this case, I get a very large amount of data (800,000+ entries), but nothing from Nginx. The PC's fans also spin up hard, and for a long time.
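A narrower glob might tame that volume while staying with the container input — a sketch, where the my-nginx-* prefix assumes the pod names generated by the Deployment above (container log files are named after the pod):

```yaml
filebeat.inputs:
  - type: container
    paths:
      # only log files for pods whose generated name starts with my-nginx-
      - /var/log/containers/my-nginx-*.log
    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
```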

I'm a bit lost, so I'm trying a lot of things, and I don't remember everything I've tested.

I just ran one last test for tonight and, without having modified anything, I retrieved data again (as yesterday), but nothing about Nginx.
I don't know if this answers Marcin's question, but here is a series of screenshots of what I've just retrieved in Kibana:

kubernetes.container.image:

Capture_container_image

kubernetes.container.name:

Capture_container_name

kubernetes.namespace:

Capture_namespace

kubernetes.pod.name:

Capture_pod_name

log.file.path:

Capture_path_file

message:

Capture_message

service.type:

Capture_service

tags:

Capture_tag

Anyway, there's no trace of Nginx... :thinking:

Hi,

I'm coming back here to post a link to the filebeat.yaml and volume.yaml files with which I solved my problem retrieving Nginx's access.log and error.log data... in case it helps anyone.

Happy reading to you.

Guillaume.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.