Filebeat not starting as DaemonSet on Kubernetes

Hi,

I configured a DaemonSet on a Kubernetes cluster:

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: tracing-filebeat
    namespace: ""
  spec:
    template:
      metadata:
        labels:
          tracing: filebeat
          type: tracing
      spec:
        containers:
        - env:
          - name: LOGSTASH_HOST
            value: logstash:5044
          image: docker-registry.default.svc:5000/filebeat:1.4
          name: tracing-filebeat
          volumeMounts:
          - mountPath: /var/log/containers
            name: varlog
        serviceAccountName: filebeat
        volumes:
        - hostPath:
            path: /var/log/containers/
          name: varlog
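
(As a side note, the files under /var/log/containers are normally just symlinks into /var/lib/docker/containers, so a DaemonSet like this usually also needs that host path mounted before Filebeat can read the actual log data. A rough sketch of the extra mount, reusing the names from the manifest above except "varlibdockercontainers", which is only an illustrative name:)

          volumeMounts:
          - mountPath: /var/log/containers
            name: varlog
          # Extra mount so the symlinks in /var/log/containers can be followed:
          - mountPath: /var/lib/docker/containers
            name: varlibdockercontainers
            readOnly: true
        volumes:
        - hostPath:
            path: /var/log/containers/
          name: varlog
        - hostPath:
            path: /var/lib/docker/containers
          name: varlibdockercontainers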

My Dockerfile looks like this:
FROM docker.elastic.co/beats/filebeat:6.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chown filebeat /usr/share/filebeat/filebeat.yml
RUN chmod go-w /usr/share/filebeat/filebeat.yml
USER filebeat
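
(The image itself can be sanity-checked outside the cluster first, to confirm the filebeat binary is present and executable; a rough sketch, using the image reference from the manifest above:)

# Override the entrypoint so the container does not immediately try to run filebeat:
docker run --rm --entrypoint /bin/sh docker-registry.default.svc:5000/filebeat:1.4 \
  -c 'ls -l /usr/share/filebeat/filebeat && /usr/share/filebeat/filebeat version'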

The pod fails with the error "/usr/local/bin/docker-entrypoint: line 8: exec: filebeat: not found"
The issue seems to be related to the volume mount. If I remove the volumes the pods start immediately.
Do you have any idea what could be wrong?

The issue seems to occur with any volume mount; it doesn't matter what the path or the content is.
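
(One way to debug this is to temporarily override the container command so the pod stays up with the volumes mounted, and then inspect it interactively; a minimal sketch, where only the command line is new compared to the manifest above:)

      containers:
      - name: tracing-filebeat
        image: docker-registry.default.svc:5000/filebeat:1.4
        # Debugging override (not in the original manifest): keeps the container
        # alive so the mounts and the filebeat binary can be inspected, e.g. with
        # `oc rsh <pod-name>` or `kubectl exec -it <pod-name> -- sh`.
        command: ["sleep", "3600"]
        volumeMounts:
        - mountPath: /var/log/containers
          name: varlog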

Thank you,
Irina

That looks like a very old image. Why are you not using docker.elastic.co/beats/filebeat:6.2.1, which I believe is the latest official one?
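
(That would mean bumping the base image in the Dockerfile quoted above, e.g.:)

# Only the FROM line changes; the rest of the Dockerfile stays the same.
FROM docker.elastic.co/beats/filebeat:6.2.1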

Hi @irina_andronachi,

We provide reference manifests to deploy Filebeat as a DaemonSet; you probably want to have a look at them. They allow you to use ConfigMaps instead of building a new image for Filebeat: https://www.elastic.co/guide/en/beats/filebeat/6.2/running-on-kubernetes.html
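
(Roughly, the workflow with those manifests looks like this; "filebeat-kubernetes.yaml" is just whatever the manifest from that page is saved as locally, and the namespace and ConfigMap names are the defaults used there:)

# Apply the reference manifest, then adjust filebeat.yml through the ConfigMap
# instead of baking it into a custom image.
kubectl create -f filebeat-kubernetes.yaml
kubectl --namespace=kube-system get pods -l k8s-app=filebeat
# Edit the config and recreate the pods so they pick it up:
kubectl --namespace=kube-system edit configmap filebeat-config
kubectl --namespace=kube-system delete pod -l k8s-app=filebeat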


Hi,
With the reference manifest I get exactly the same error:
"/usr/local/bin/docker-entrypoint: line 8: exec: filebeat: not found"
I have only removed the "securityContext: runAsUser: 0" setting, because with it the pod failed with an error stating that the user has to be in a range.
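
(On OpenShift that "must be in a range" message comes from the default restricted SCC. Instead of removing runAsUser: 0, an alternative, assuming cluster-admin rights, is to let the filebeat service account use a more permissive SCC so the manifest can run unmodified; a hedged sketch:)

# Allow the filebeat service account in the edcm project to run as root
# and mount host paths (the "privileged" SCC is one option that covers both):
oc adm policy add-scc-to-user privileged -z filebeat -n edcm
# then restore "runAsUser: 0" in the DaemonSet and redeploy it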


apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: edcm
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted filebeat-prospectors configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: edcm
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true


---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: edcm
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.1.3
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: prospectors
          mountPath: /usr/share/filebeat/prospectors.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: prospectors
        configMap:
          defaultMode: 0600
          name: filebeat-prospectors
      - name: data
        emptyDir: {}


---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: edcm
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list

Thank you,
Irina

Here is what the pod looks like:
[ec2-user@ip-10-0-117-21 ~]$ oc describe pod filebeat-9s79q
Name:           filebeat-9s79q
Namespace:      edcm
Node:           ip-10-0-117-24.eu-west-1.compute.internal/10.0.117.24
Start Time:     Thu, 15 Feb 2018 15:39:36 +0100
Labels:         controller-revision-hash=4091261649
                k8s-app=filebeat
                kubernetes.io/cluster-service=true
                pod-template-generation=1
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"edcm","name":"filebeat","uid":"0b86758a-125e-11e8-a3a9-0a4d7e5abdea","apiVersion":...
                openshift.io/scc=hostaccess
Status:         Running
IP:             10.130.0.8
Created By:     DaemonSet/filebeat
Controlled By:  DaemonSet/filebeat
Containers:
  filebeat:
    Container ID:   docker://5b1865edfc72a6571196246296131d366231ccfdf4a8522edb9cba4c3ece1f4f
    Image:          docker.elastic.co/beats/filebeat:6.1.3
    Image ID:       docker-pullable://docker.elastic.co/beats/filebeat@sha256:ffb7a446d6f4931f5458045acded76d4a3a7a08f69832ecf1f496a3fcd4e39a4
    Port:
    Args:
      -c
      /etc/filebeat.yml
      -e
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    127
      Started:      Thu, 15 Feb 2018 15:45:31 +0100
      Finished:     Thu, 15 Feb 2018 15:45:31 +0100
    Ready:          False
    Restart Count:  6
    Limits:
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      ELASTICSEARCH_HOST:      elasticsearch
      ELASTICSEARCH_PORT:      9200
      ELASTICSEARCH_USERNAME:  elastic
      ELASTICSEARCH_PASSWORD:  changeme
      ELASTIC_CLOUD_ID:
      ELASTIC_CLOUD_AUTH:
    Mounts:
      /etc/filebeat.yml from config (ro)
      /usr/share/filebeat/data from data (rw)
      /usr/share/filebeat/prospectors.d from prospectors (ro)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from filebeat-token-mq4f2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      filebeat-config
    Optional:  false
  varlibdockercontainers:
    Type:  HostPath (bare host directory volume)
    Path:  /var/lib/docker/containers
  prospectors:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      filebeat-prospectors
    Optional:  false
  data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  filebeat-token-mq4f2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  filebeat-token-mq4f2
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute
                 node.alpha.kubernetes.io/unreachable:NoExecute
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message


7m 7m 1 kubelet, ip-10-0-117-24.eu-west-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "data"
7m 7m 1 kubelet, ip-10-0-117-24.eu-west-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "varlibdockercontainers"
7m 7m 1 kubelet, ip-10-0-117-24.eu-west-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "prospectors"
7m 7m 1 kubelet, ip-10-0-117-24.eu-west-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "config"
7m 7m 1 kubelet, ip-10-0-117-24.eu-west-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "filebeat-token-mq4f2"
7m 7m 1 kubelet, ip-10-0-117-24.eu-west-1.compute.internal spec.containers{filebeat} Normal Pulling pulling image "docker.elastic.co/beats/filebeat:6.1.3"
7m 7m 1 kubelet, ip-10-0-117-24.eu-west-1.compute.internal spec.containers{filebeat} Normal Pulled Successfully pulled image "docker.elastic.co/beats/filebeat:6.1.3"
7m 6m 3 kubelet, ip-10-0-117-24.eu-west-1.compute.internal spec.containers{filebeat} Normal Pulled Container image "docker.elastic.co/beats/filebeat:6.1.3" already present on machine
7m 6m 4 kubelet, ip-10-0-117-24.eu-west-1.compute.internal spec.containers{filebeat} Normal Created Created container
7m 6m 4 kubelet, ip-10-0-117-24.eu-west-1.compute.internal spec.containers{filebeat} Normal Started Started container
7m 2m 23 kubelet, ip-10-0-117-24.eu-west-1.compute.internal spec.containers{filebeat} Warning BackOff Back-off restarting failed container
[ec2-user@ip-10-0-117-21 ~]$ oc logs -f filebeat-7s9ct

Thank you for the details. It seems the logs are missing from the paste; could you include them?
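
For a container in CrashLoopBackOff the output of the previous attempt can usually still be retrieved, e.g.:

# --previous (or -p) shows the logs of the last terminated container instance
# (pod name taken from the describe output above):
oc logs filebeat-9s79q --previous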

Best regards

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.