Kubernetes watch connections never close when input reload fails to create a runner

Hi guys,

Filebeat runs as a DaemonSet in our Kubernetes cluster with config reload enabled. When the config changes and the reload fails to create a runner, Filebeat does not close the Kubernetes watch connection it has opened to the kube-apiserver. The connections keep increasing, and in the end this puts memory pressure on the kube-apiserver. Is this an already known problem, or did I make a mistake somewhere? My config and the evidence are below, followed by a sketch of the pattern I suspect.

```yaml
filebeat.config:
  inputs:
    # Mounted `filebeat-inputs` configmap:
    path: ${path.config}/inputs.d/*.json
    # Reload inputs configs as they change:
    reload.enabled: true
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: true

processors:
  - add_cloud_metadata:

cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
```
The Filebeat log keeps repeating the reload error every ten seconds:

```
2019-05-08T07:08:17.261Z        ERROR   [reload]        cfgfile/list.go:104     Error creating runner from config: No paths were defined for input accessing config
2019-05-08T07:08:27.269Z        ERROR   [reload]        cfgfile/list.go:104     Error creating runner from config: No paths were defined for input accessing config
2019-05-08T07:08:37.279Z        ERROR   [reload]        cfgfile/list.go:104     Error creating runner from config: No paths were defined for input accessing config
2019-05-08T07:08:47.288Z        ERROR   [reload]        cfgfile/list.go:104     Error creating runner from config: No paths were defined for input accessing config
2019-05-08T07:08:57.297Z        ERROR   [reload]        cfgfile/list.go:104     Error creating runner from config: No paths were defined for input accessing config
2019-05-08T07:09:07.307Z        ERROR   [reload]        cfgfile/list.go:104     Error creating runner from config: No paths were defined for input accessing config
```
And the established connections from Filebeat to the kube-apiserver (port 8080) keep piling up:

```
tcp        0      0 192.168.1.86:47956      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:54902      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:47336      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:43552      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:37720      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:49188      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:53730      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:60706      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:56284      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:49542      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:56986      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:55332      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:55166      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:49522      192.168.1.86:8080       ESTABLISHED 22052/filebeat
tcp        0      0 192.168.1.86:49450      192.168.1.86:8080       ESTABLISHED 22052/filebeat
```

```
$ netstat -anp | grep 8080 | grep filebeat | wc -l
624
```
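
To make the pattern concrete, here is a minimal Go sketch of what I suspect is happening (all names here are hypothetical stand-ins, not the actual Beats code): a runner constructor starts a Kubernetes watch as a side effect, validation fails afterwards, and the error path returns without stopping the watch, so every reload tick leaks one more apiserver connection.

```go
package main

import (
	"errors"
	"fmt"
)

// watcher stands in for the Kubernetes watch that a processor such as
// add_kubernetes_metadata opens against the kube-apiserver.
type watcher struct{ stopped bool }

func startWatcher() *watcher {
	fmt.Println("watch started: one more TCP connection to the apiserver")
	return &watcher{}
}

func (w *watcher) Stop() { w.stopped = true }

// newRunner mimics the suspected construction order: the watch starts as
// a side effect before the input config is validated, and the error path
// returns without stopping it.
func newRunner(paths []string) (*watcher, error) {
	w := startWatcher() // side effect happens first
	if len(paths) == 0 {
		// Returning here leaks w: nothing will ever call w.Stop().
		return nil, errors.New("No paths were defined for input accessing config")
	}
	return w, nil
}

func main() {
	// Every reload tick retries the broken config and leaks one more watch.
	for i := 0; i < 3; i++ {
		if _, err := newRunner(nil); err != nil {
			fmt.Println("Error creating runner from config:", err)
		}
	}
}
```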

Hey @danielQ,

welcome and thanks for sharing that scenario.

I've been trying to replicate the issue, with no success. My setup is:

- Kubernetes 1.13, multi-master behind an HAProxy.
- Filebeat 7.0 deployed on all nodes; the configuration retrieves logs from only one pod in the whole cluster, with input reload enabled.

Then I messed up the filebeat-inputs configmap by adding an invalid input. Filebeat starts complaining in the log on every reload, just like yours.
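
For reference, this is the kind of broken input I mean (a sketch, not the exact file from my test; the assumption is that a `log` input with a per-input `add_kubernetes_metadata` processor but no `paths` fails runner creation with exactly that error):

```yaml
# Hypothetical test input: `paths` is intentionally missing, so runner
# creation should fail with "No paths were defined for input accessing
# config", while add_kubernetes_metadata would be the processor that
# opens a watch against the apiserver.
- type: log
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
```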

I exec'ed into the Filebeat container and my connections to the apiserver look good (2 established connections).
Checking the load balancer in front of the apiservers, there is no significant movement either.

Am I missing something to replicate your scenario?
What Filebeat version are you running?

Hi, thanks for the reply. Filebeat 6.4.0, running in a pod. Did you get the same error as me?

I followed the log error and checked the source code. It seems that when a log input is created with NewInput, the processors are started right at the beginning:

https://github.com/elastic/beats/blob/master/filebeat/input/log/input.go#L86
https://github.com/elastic/beats/blob/master/libbeat/processors/processor.go#L84

and add_kubernetes_metadata starts its watch as part of that processor construction.

So if any error happens later in the NewInput process, it leaves behind a watch that is never stopped.
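
If that reading is right, every error path after the processors have been started would also need to stop the watch. Here is a sketch of that kind of cleanup, reusing the hypothetical watcher/startWatcher names from the sketch in my first post (not the actual Beats code, just the shape of the fix I have in mind):

```go
// newRunnerFixed stops the watcher whenever construction fails, so a
// broken config no longer leaks an apiserver connection. Sketch only:
// watcher and startWatcher are the hypothetical stand-ins from the
// first post, not real Beats code.
func newRunnerFixed(paths []string) (w *watcher, err error) {
	w = startWatcher()
	defer func() {
		if err != nil {
			w.Stop() // release the watch on any failure
			w = nil
		}
	}()
	if len(paths) == 0 {
		return w, errors.New("No paths were defined for input accessing config")
	}
	return w, nil
}
```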

I'm new to Filebeat, so if I missed something, please tell me. Thanks!
