K8s pod -> stdout -> ?

I have a simple deployment on a k8s cluster: one pod writing info to stdout, and I'd like to pass it along to Elastic. (I can see the output in the pod's logs via kubectl.)
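
For reference, the deployment itself is nothing fancy; it's essentially something like this sketch (the names and image here are just placeholders, not my actual manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-logger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-logger
  template:
    metadata:
      labels:
        app: my-logger
    spec:
      containers:
      - name: my-logger
        image: busybox
        # Writes a line to stdout every few seconds; this is the output
        # I'd like to see end up in Elasticsearch.
        command: ["/bin/sh", "-c", "while true; do echo hello from my pod; sleep 5; done"]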

The k8s cluster in question has both Metricbeat and Filebeat DaemonSets running, and I'm getting data from both... except not from my new pod.

In the filebeat daemonset definition I have these values:

volumeMounts:
...
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers

and

volumes:
 ...
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers

Isn't that supposed to be where docker sends stdout? So what am I missing?
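
As far as I understand, with the default json-file logging driver docker writes each stdout line as a JSON record to /var/lib/docker/containers/<container-id>/<container-id>-json.log, something like this made-up example:

{"log":"hello from my pod\n","stream":"stdout","time":"2018-10-02T14:15:22.123456789Z"}

So I'd expect filebeat to pick the output up from there.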

Hi @ethrbunny,

It depends on the configuration you are using; if you are using the one we provide, it should in principle collect logs from all containers.

You mention that you get data from Filebeat. What data? Is it collecting logs from some pods but not from others?

Could you share the configuration you are using and check the logs in case you see something relevant?

It's getting logs from all pods, but I suspect my container isn't putting output in the right place. It's definitely hitting stdout but seems to vanish after that.

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted filebeat-inputs configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          templates:
            # - condition:
            #     regexp:
            #       kubernetes.labels.yourlabel: '.*'
            - config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  # Point 2, add custom fields to events:
                  fields:
                    yourdesiredfield: "${data.kubernetes.labels.yourlabel}"

    processors:
      - add_cloud_metadata:

    output.logstash:
      hosts: ["10.95.96.75:5044"]
---
apiVersion: v1
kind: ConfigMap 
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.4.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: msgs
          mountPath: /var/log/messages
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: msgs
        hostPath:
          path: /var/log/messages
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount 
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

Does this container have anything special about it? For example, is it started as a job and/or short-lived? Or is it in a different namespace or on a different node than other pods whose logs are being collected?

I have containers spread out across a number of namespaces, so that's not really different.

I can't think of anything that would really set it apart. Anything else I should look for? Something I might be filtering somewhere, or a Logstash setting?

To make this work I had to use this entry as one of the ConfigMaps:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: log
      paths:
        - /var/lib/docker/containers/*/*.log
      json.message_key: log
      json.keys_under_root: false
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            namespace: ${POD_NAMESPACE}
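
One note on the ${POD_NAMESPACE} reference above: it has to be available as an environment variable inside the filebeat container. If your DaemonSet doesn't already define it, a minimal sketch of exposing it via the downward API (added under the filebeat container in the DaemonSet spec) would be:

        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              # Resolves to the namespace the filebeat pod itself runs in
              fieldPath: metadata.namespace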

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.