Reading both container logs and host logs on K8s?


(Tim Ward) #1

This is probably a typo or something I haven't spotted despite staring at it for hours, but just in case ...

I'm running Filebeat in K8s as a DaemonSet, with the kubernetes autodiscover provider configured and working fine to collect container logs.

I now also want to pick up the host's syslog. So I've mounted that as a volume into the container, and verified by shelling into the container that it's present as expected.

Then I added filebeat.inputs configuration to pick up the syslog file. It doesn't appear to do so. I've enabled debug logging, with selectors set to "prospector" and "harvester", and the output shows lots of activity around reading the Kubernetes container logs but no mention at all of any attempt, successful or not, to also read the syslog file.

Is there a reason why I shouldn't expect this to work? Or must I spend more hours staring at a few lines of config and trying to guess what's wrong ...

(Yes I know that the default behaviour of all parts of the Elastic stack when you've got something, anything, wrong is to throw away events silently, so in theory the problem could be in my Logstash or Elasticsearch configuration or coding. But I don't think so, because with the full Filebeat debug output enabled it didn't show any of the syslog events being shipped.)


(Jaime Soriano) #2

Hi @TimWard,

Even if events are dropped, you should at least see a message about Filebeat starting to harvest the file. Could you share the config you have added to collect the syslog file?


(Tim Ward) #3
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: origami-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:

      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    filebeat.autodiscover:
      providers:

        - type: kubernetes

          [details snipped]

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/host/syslog
        fields_under_root: true
        fields:
          log_type: etp_log

    logging.level: debug
    logging.selectors: ["prospector","harvester"]

    processors:
      - add_cloud_metadata:

    output.logstash:

      #   Output goes to Logstash hosted in the same Kubernetes cluster.

      hosts: ['logstash-service.origami-system']
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: origami-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      #serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.2.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: host-syslog
          mountPath: /var/log/host/syslog
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: data
        emptyDir: {}
      - name: host-syslog
        hostPath:
          path: /var/log/syslog
          type: File

(Tim Ward) #4

Now, I wonder whether that was because I was reading the 6.4 documentation (filebeat.inputs) rather than the 6.2 documentation (filebeat.prospectors)? I'll try that when my development environment is back up again.

But if that is the cause, there weren't any "unknown configuration option" errors or warnings in the logs.


(Jaime Soriano) #5

Oh, yes, if you are using 6.2 you still need to use filebeat.prospectors. If it still doesn't work, you can try mounting the whole /var/log directory from the host instead of just the syslog file.
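With your existing config that would just mean renaming the key, keeping the same settings:

```yaml
# Filebeat 6.2.x uses "prospectors"; the key was renamed to
# "inputs" in later 6.x releases.
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/host/syslog
    fields_under_root: true
    fields:
      log_type: etp_log
```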

Also, instead of configuring an input/prospector directly, consider using the system module; it includes patterns to parse the information in the syslog file.
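Assuming you keep the /var/log/host mount from your DaemonSet, something along these lines should work:

```yaml
filebeat.modules:
  - module: system
    syslog:
      enabled: true
      # Override the default paths so the module reads the host file
      # mounted into the container rather than the container's own
      # /var/log/syslog.
      var.paths: ["/var/log/host/syslog"]
```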


(Tim Ward) #6

Yeah, I confused myself because I'm running a mixture of 6.2 and 6.4. Our K8s cluster is still broken, so I haven't been able to test yet.

It's still a pity that there was no "unknown configuration field 'inputs'" warning in the Filebeat logs though.


(Tim Ward) #7

OK, so going back to "prospectors" gives me

2018-10-31T16:18:28.702Z INFO log/harvester.go:216 Harvester started for file: /var/log/host/syslog

which is a bit more like it. It's still the case that nothing ends up in Kibana from syslog, of course, so now I've got to find the next place in the chain where the messages disappear ...

