Hi everyone.
I am trying to send logs from GCE Kubernetes to Logstash using Filebeat. I took the example Filebeat config and only changed the Elasticsearch output to point at my Logstash.
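Concretely, the only change from the example manifest should be the output section of filebeat.yml (full config at the end of this post) — roughly this swap (the "before" is from memory of the example, which uses an env-substituted host:port):

# before (from the example, roughly):
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']

# after:
output.logstash:
  hosts: ['logstash']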
The DaemonSet starts and the logs look normal, but after 30-60 seconds all the Filebeat pods are deleted, and the DaemonSet along with them.
Here are the logs from one container:
2018-06-06T22:55:42.008Z INFO log/harvester.go:216 Harvester started for file: /var/lib/docker/containers/ba4a6c6d255bff2118b465cf664e6410b4e9c1095542c925d312c4de9a35b61c/ba4a6c6d255bff2118b465cf664e6410b4e9c1095542c925d312c4de9a35b61c-json.log
2018-06-06T22:56:11.906Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":80,"time":83},"total":{"ticks":520,"time":532,"value":520},"user":{"ticks":440,"time":449}},"info":{"ephemeral_id":"f83a86cc-7a9a-402c-acd9-b990840efb8a","uptime":{"ms":30026}},"memstats":{"gc_next":11144080,"memory_alloc":5635176,"memory_total":75723376,"rss":48230400}},"filebeat":{"events":{"added":6780,"done":6780},"harvester":{"open_files":46,"running":46,"started":46}},"libbeat":{"config":{"module":{"running":1,"starts":1},"reloads":2},"output":{"events":{"acked":6734,"batches":10,"total":6734},"read":{"bytes":60},"type":"logstash","write":{"bytes":516925}},"pipeline":{"clients":2,"events":{"active":0,"filtered":46,"published":6734,"retry":2048,"total":6780},"queue":{"acked":6734}}},"registrar":{"states":{"current":46,"update":6780},"writes":17},"system":{"cpu":{"cores":1},"load":{"1":0.43,"15":0.37,"5":0.29,"norm":{"1":0.43,"15":0.37,"5":0.29}}}}}}
2018-06-06T22:56:30.614Z INFO beater/filebeat.go:323 Stopping filebeat
2018-06-06T22:56:30.614Z INFO crawler/crawler.go:109 Stopping Crawler
2018-06-06T22:56:30.614Z INFO crawler/crawler.go:119 Stopping 0 prospectors
2018-06-06T22:56:30.614Z INFO cfgfile/reload.go:222 Dynamic config reloader stopped
2018-06-06T22:56:30.614Z INFO cfgfile/reload.go:222 Dynamic config reloader stopped
2018-06-06T22:56:30.614Z INFO crawler/crawler.go:135 Crawler stopped
2018-06-06T22:56:30.614Z INFO registrar/registrar.go:239 Stopping Registrar
2018-06-06T22:56:30.614Z INFO registrar/registrar.go:167 Ending Registrar
2018-06-06T22:56:30.617Z INFO instance/beat.go:308 filebeat stopped.
2018-06-06T22:56:30.619Z INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":90,"time":92},"total":{"ticks":560,"time":571,"value":560},"user":{"ticks":470,"time":479}},"info":{"ephemeral_id":"f83a86cc-7a9a-402c-acd9-b990840efb8a","uptime":{"ms":48741}},"memstats":{"gc_next":10841600,"memory_alloc":5782568,"memory_total":84827056,"rss":48230400}},"filebeat":{"events":{"added":6797,"done":6797},"harvester":{"open_files":46,"running":46,"started":46}},"libbeat":{"config":{"module":{"running":1,"starts":1},"reloads":2},"output":{"events":{"acked":6751,"batches":15,"total":6751},"read":{"bytes":90},"type":"logstash","write":{"bytes":521829}},"pipeline":{"clients":0,"events":{"active":0,"filtered":46,"published":6751,"retry":2048,"total":6797},"queue":{"acked":6751}}},"registrar":{"states":{"current":46,"update":6797},"writes":23},"system":{"cpu":{"cores":1},"load":{"1":0.3,"15":0.36,"5":0.27,"norm":{"1":0.3,"15":0.36,"5":0.27}}}}}}
2018-06-06T22:56:30.619Z INFO [monitoring] log/log.go:133 Uptime: 48.741991964s
2018-06-06T22:56:30.619Z INFO [monitoring] log/log.go:110 Stopping metrics logging.
rpc error: code = Unknown desc = Error: No such container:
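For reference, the deletions themselves can be watched with something like:

kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp
kubectl -n kube-system get pods -l k8s-app=filebeat -w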
And here is my config:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted filebeat-prospectors configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    processors:
      - add_cloud_metadata:

    output.logstash:
      hosts: ['logstash']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.2.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: prospectors
          mountPath: /usr/share/filebeat/prospectors.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: prospectors
        configMap:
          defaultMode: 0600
          name: filebeat-prospectors
      - name: data
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
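Not shown above: the ServiceAccount and ClusterRole that the binding references. They follow the official example, i.e. something like:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat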
Thanks for any help.