Filebeat with Kubernetes on GCE


(Newbas) #1

Hi to everyone.

I am trying to send logs to Logstash from Kubernetes on GCE using Filebeat. I took the example Filebeat config and just replaced the Elasticsearch output with my Logstash output.

The DaemonSet starts and the logs look normal, but after 30-60 seconds all Filebeat pods are suddenly deleted, and the DaemonSet as well.

Here are the logs from one container:

2018-06-06T22:55:42.008Z INFO log/harvester.go:216 Harvester started for file: /var/lib/docker/containers/ba4a6c6d255bff2118b465cf664e6410b4e9c1095542c925d312c4de9a35b61c/ba4a6c6d255bff2118b465cf664e6410b4e9c1095542c925d312c4de9a35b61c-json.log
2018-06-06T22:56:11.906Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":80,"time":83},"total":{"ticks":520,"time":532,"value":520},"user":{"ticks":440,"time":449}},"info":{"ephemeral_id":"f83a86cc-7a9a-402c-acd9-b990840efb8a","uptime":{"ms":30026}},"memstats":{"gc_next":11144080,"memory_alloc":5635176,"memory_total":75723376,"rss":48230400}},"filebeat":{"events":{"added":6780,"done":6780},"harvester":{"open_files":46,"running":46,"started":46}},"libbeat":{"config":{"module":{"running":1,"starts":1},"reloads":2},"output":{"events":{"acked":6734,"batches":10,"total":6734},"read":{"bytes":60},"type":"logstash","write":{"bytes":516925}},"pipeline":{"clients":2,"events":{"active":0,"filtered":46,"published":6734,"retry":2048,"total":6780},"queue":{"acked":6734}}},"registrar":{"states":{"current":46,"update":6780},"writes":17},"system":{"cpu":{"cores":1},"load":{"1":0.43,"15":0.37,"5":0.29,"norm":{"1":0.43,"15":0.37,"5":0.29}}}}}}
2018-06-06T22:56:30.614Z INFO beater/filebeat.go:323 Stopping filebeat
2018-06-06T22:56:30.614Z INFO crawler/crawler.go:109 Stopping Crawler
2018-06-06T22:56:30.614Z INFO crawler/crawler.go:119 Stopping 0 prospectors
2018-06-06T22:56:30.614Z INFO cfgfile/reload.go:222 Dynamic config reloader stopped
2018-06-06T22:56:30.614Z INFO cfgfile/reload.go:222 Dynamic config reloader stopped
2018-06-06T22:56:30.614Z INFO crawler/crawler.go:135 Crawler stopped
2018-06-06T22:56:30.614Z INFO registrar/registrar.go:239 Stopping Registrar
2018-06-06T22:56:30.614Z INFO registrar/registrar.go:167 Ending Registrar
2018-06-06T22:56:30.617Z INFO instance/beat.go:308 filebeat stopped.
2018-06-06T22:56:30.619Z INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":90,"time":92},"total":{"ticks":560,"time":571,"value":560},"user":{"ticks":470,"time":479}},"info":{"ephemeral_id":"f83a86cc-7a9a-402c-acd9-b990840efb8a","uptime":{"ms":48741}},"memstats":{"gc_next":10841600,"memory_alloc":5782568,"memory_total":84827056,"rss":48230400}},"filebeat":{"events":{"added":6797,"done":6797},"harvester":{"open_files":46,"running":46,"started":46}},"libbeat":{"config":{"module":{"running":1,"starts":1},"reloads":2},"output":{"events":{"acked":6751,"batches":15,"total":6751},"read":{"bytes":90},"type":"logstash","write":{"bytes":521829}},"pipeline":{"clients":0,"events":{"active":0,"filtered":46,"published":6751,"retry":2048,"total":6797},"queue":{"acked":6751}}},"registrar":{"states":{"current":46,"update":6797},"writes":23},"system":{"cpu":{"cores":1},"load":{"1":0.3,"15":0.36,"5":0.27,"norm":{"1":0.3,"15":0.36,"5":0.27}}}}}}
2018-06-06T22:56:30.619Z INFO [monitoring] log/log.go:133 Uptime: 48.741991964s
2018-06-06T22:56:30.619Z INFO [monitoring] log/log.go:110 Stopping metrics logging.
rpc error: code = Unknown desc = Error: No such container:

And my config:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted filebeat-prospectors configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    processors:
      - add_cloud_metadata:

    output.logstash:
      hosts: ['logstash']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.2.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: prospectors
          mountPath: /usr/share/filebeat/prospectors.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: prospectors
        configMap:
          defaultMode: 0600
          name: filebeat-prospectors
      - name: data
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---

Thanks for any help


(Adrian Serrano) #2

Can you enable debug log output on filebeat so we get more information in the logs? Add the -e -d '*' arguments:

https://www.elastic.co/guide/en/beats/filebeat/current/enable-filebeat-debugging.html
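For reference, with the DaemonSet manifest from the first post, adding the debug flags would look roughly like this (a sketch of the container's args, not the full manifest):

```yaml
# Filebeat container args in the DaemonSet, with logging to stderr (-e)
# and debug output for all selectors (-d "*") enabled:
args: [
  "-c", "/etc/filebeat.yml",
  "-e",
  "-d", "*",
]
```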


(Newbas) #3

Thanks for your answer.

Tried with debugging, but the only things I see in the log now are the following. The first lines are repeated while the container is running.

2018-06-07T08:43:35.051Z	DEBUG	[kubernetes]	add_kubernetes_metadata/matchers.go:57	Incoming source value: /var/lib/docker/containers/c56aa53914b1287f21a211c97afc386dfebceb0416a3f57add36e2df6b4b8b60/c56aa53914b1287f21a211c97afc386dfebceb0416a3f57add36e2df6b4b8b60-json.log
2018-06-07T08:43:35.051Z	DEBUG	[kubernetes]	add_kubernetes_metadata/matchers.go:80	Using container id: c56aa53914b1287f21a211c97afc386dfebceb0416a3f57add36e2df6b4b8b60
2018-06-07T08:43:35.051Z	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-06-07T08:43:28.044Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.4"
  },
  "source": "/var/lib/docker/containers/c56aa53914b1287f21a211c97afc386dfebceb0416a3f57add36e2df6b4b8b60/c56aa53914b1287f21a211c97afc386dfebceb0416a3f57add36e2df6b4b8b60-json.log",
  "offset": 1096295,
  "stream": "stderr",
  "message": "    }",
  "prospector": {
    "type": "docker"
  },
  "kubernetes": {
    "labels": {
      "kubernetes.io/cluster-service": "true",
      "pod-template-generation": "1",
      "controller-revision-hash": "3543958425",
rpc error: code = Unknown desc = Error: No such container: c56aa53914b1287f21a211c97afc386dfebceb0416a3f57add36e2df6b4b8b60

(Newbas) #4

Previously I did not receive any logs in Logstash at all, so I thought there was a connection problem and that Filebeat died after a while. But now I changed the prospector config to the one I use with plain Docker, and all the logs arrive in Logstash. The only problem is that it still dies after 30-60 seconds, sometimes after only a few seconds.

- type: log
  paths:
    - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  processors:
    - add_kubernetes_metadata:
        in_cluster: true

(Adrian Serrano) #5

I still would like to see the full logs, because in the logs in your original message I can see this:

2018-06-06T22:56:30.614Z	INFO	beater/filebeat.go:323	Stopping filebeat

This means that Filebeat is shutting itself down. I wanted to confirm it by looking at the debug logs, and the only explanation I can find is that Filebeat is receiving a SIGINT or SIGTERM signal.

Is it possible that something is stopping the container?
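One way to check would be to look at the Kubernetes events and the pod's termination details; a sketch, assuming the manifest above (namespace kube-system, label k8s-app: filebeat):

```shell
# List recent events in the Filebeat namespace; look for Killing/Deleting
# entries around the time Filebeat logs "Stopping filebeat".
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp

# Describe one of the Filebeat pods to see its state and last
# termination reason (e.g. OOMKilled vs. a normal delete).
kubectl -n kube-system describe pod -l k8s-app=filebeat
```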


(Adrian Serrano) #6

Now I'm told you might be running into this problem, for which a fix is on the way:

Have a look at it; there is a suggestion to use kubectl create instead of kubectl apply.
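In practice, recreating the resources instead of applying them would look roughly like this (the filename filebeat-kubernetes.yaml is my assumption for wherever you saved the manifest above):

```shell
# Remove the resources that were created with `kubectl apply` ...
kubectl delete -f filebeat-kubernetes.yaml

# ... then recreate them with `kubectl create`, which sidesteps the
# apply-related issue mentioned above.
kubectl create -f filebeat-kubernetes.yaml
```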


(Newbas) #7

Thank God. I deployed it with kubectl apply and had this problem. I tried kubectl create instead, and everything works.

Many, many thanks.


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.