Reading both container logs and host logs on K8s?

This is probably a typo or something I haven't spotted despite staring at it for hours, but just in case ...

I'm running Filebeat in K8s as a DaemonSet, with the Kubernetes autodiscover provider configured and working fine to collect container logs.

Now I also want to pick up the host's syslog. So I've mounted it as a volume into the container, and verified by shelling into the container that it's present as expected.

Then I added filebeat.inputs configuration to pick up the syslog file. It doesn't appear to do so. I've enabled debug logs, with selectors set to "prospector" and "harvester", and the output shows lots of stuff to do with reading Kubernetes logs but no mention at all of any attempt, whether successful or not, to also read the syslog file.

Is there a reason why I shouldn't expect this to work? Or must I spend more hours staring at a few lines of config and trying to guess what's wrong ...

(Yes I know that the default behaviour of all parts of the Elastic stack when you've got something, anything, wrong is to throw away events silently, so in theory the problem could be in my Logstash or Elasticsearch configuration or coding. But I don't think so, because with the full Filebeat debug output enabled it didn't show any of the syslog events being shipped.)

Hi @TimWard,

Even if events are dropped, you should see at least a message about Filebeat starting to harvest the files. Could you share the config you have added to collect the syslog files?

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: origami-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    filebeat.autodiscover:
      providers:
        - type: kubernetes

          [details snipped]

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/host/syslog
        fields_under_root: true
        fields:
          log_type: etp_log

    logging.level: debug
    logging.selectors: ["prospector","harvester"]

    processors:
      - add_cloud_metadata:

    output.logstash:
      # Output goes to Logstash hosted in the same Kubernetes cluster.
      hosts: ['logstash-service.origami-system']
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: origami-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      #serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        args: [
          "-c", "/etc/filebeat.yml",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: host-syslog
          mountPath: /var/log/host/syslog
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: data
        emptyDir: {}
      - name: host-syslog
        hostPath:
          path: /var/log/syslog
          type: File

Now, I wonder whether that was because I was reading the 6.4 documentation (filebeat.inputs) rather than the 6.2 documentation (filebeat.prospectors)? - I'll try that when my development environment is back up again.

But if that is the cause, there weren't any "unknown configuration options" errors or warnings in the logs.

Oh, yes, if you are using 6.2 you still need to use filebeat.prospectors. If it still doesn't work, you can try to mount the whole /var/log directory from the host instead of just the syslog file.
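For reference, a sketch of what the renamed section would look like on 6.2, reusing the paths and fields from the manifest posted above (adjust if your mount differs):

```yaml
# Filebeat 6.2: the section is named "prospectors" rather than "inputs"
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/host/syslog
    fields_under_root: true
    fields:
      log_type: etp_log
```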

Also, instead of configuring an input/prospector directly, consider using the system module; it includes patterns to parse the information in the syslog file.
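A minimal sketch of the system module alternative, assuming the same host mount at /var/log/host as in the manifest above; `var.paths` overrides the fileset's default syslog location:

```yaml
# Enable the system module's syslog fileset and point it at the mounted host file
filebeat.modules:
  - module: system
    syslog:
      enabled: true
      var.paths: ["/var/log/host/syslog"]
```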

Yeah, I confused myself because I'm running a mixture of 6.2 and 6.4. Our K8s cluster is still broken so I haven't been able to test yet.

It's still a pity that there was no "unknown configuration field 'inputs'" in the Filebeat logs though.

OK, so going back to "prospectors" gives me

2018-10-31T16:18:28.702Z INFO log/harvester.go:216 Harvester started for file: /var/log/host/syslog

which is a bit more like it. It is still the case that nothing ends up in Kibana from syslog, of course, so I've now got to find the next place in the chain where the messages disappear ...
