Limiting Filebeat Autodiscover to Namespace

I am hoping I can get some guidance on an issue I am having with my filebeat.yml and my Kubernetes environment. I am new to Elastic, so hopefully I am providing the relevant info. I am running a Filebeat DaemonSet, and from everything I have read, to pick up logs from newly created pods I need to use filebeat.autodiscover. Once I enable it I run into this error:

2020-05-28T18:25:05.334Z  INFO   kubernetes/util.go:86        kubernetes: Using pod name filebeat-qtwgk and namespace xxxxxxxx to discover kubernetes node
2020-05-28T18:25:05.341Z  INFO   kubernetes/util.go:93        kubernetes: Using node xxxxxx discovered by in cluster pod node query
2020-05-28T18:25:05.341Z  INFO   autodiscover/autodiscover.go:104  Starting autodiscover manager
2020-05-28T18:25:05.341Z  INFO   kubernetes/watcher.go:182    kubernetes: Performing a resource sync for *v1.PodList
2020-05-28T18:25:05.342Z  ERROR  kubernetes/watcher.go:185    kubernetes: Performing a resource sync err kubernetes api: Failure 403 pods is forbidden: User "system:serviceaccount:xxxxx:filebeat" cannot list resource "pods" in API group "" at the cluster scope for *v1.PodList
2020-05-28T18:25:05.342Z  ERROR  kubernetes/kubernetes.go:132 Error starting kubernetes autodiscover provider: kubernetes api: Failure 403 pods is forbidden: User "system:serviceaccount:xxxxxx:filebeat" cannot list resource "pods" in API group "" at the cluster scope

My permissions in this Kubernetes environment are strictly role-based. It is a multi-tenant cluster, and I do not have (and will not get) ClusterRole / cluster-scope permissions. I was able to get Filebeat working before I implemented autodiscover, but in that setup I was not getting logs for any newly created pods. Any guidance on how I can get all my pods shipping logs while keeping everything limited to my namespace would be appreciated.

Elastic - v6.8.1
Filebeat - v6.8.9


apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: xxxxxxx
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.registry_file: /usr/share/filebeat/data/registry/filebeat/data.json
    # filebeat.inputs:
    # - type: docker
    #   containers.ids: ''
    #   containers.path: /var/lib/docker/containers
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.namespace: xxxxxxx
              config:
                - type: container
                  containers.ids: '*'
                  containers.path: /var/lib/docker/containers
    processors:
      - add_kubernetes_metadata:
          default_indexers.enabled: true
          default_matchers.enabled: true
          namespace: xxxxxx
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    # filebeat.modules:
    # - module: elasticsearch
    #   # Server log
    #   server:
    #     enabled: true
    # - module: kibana
    #   # All logs
    #   log:
    #     enabled: true
    setup.template.settings:
      index:
        number_of_shards: 30
        codec: best_compression
    setup.template.name: "filebeat"
    setup.template.pattern: "filebeat-*"
    output.elasticsearch:
      hosts: ['http://xxxxxxxxx:9200']
      index: "filebeat-%{+yyyy.MM.dd}"
    setup.kibana:
      host: http://xxxxxx:5601


apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: xxxxxxx
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - pods
    verbs:
      - get
      - watch
      - list

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: xxxxxxxx
  labels:
    k8s-app: filebeat
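
For context, the DaemonSet itself (not pasted here) just ties these pieces together: it runs under the filebeat ServiceAccount and mounts the filebeat-config ConfigMap plus the Docker log directories. A minimal sketch of that wiring, assuming the names from the manifests above (the image tag and mount paths are illustrative, not my exact spec):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: xxxxxxxx
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat          # binds the pods to the namespaced Role via the RoleBinding above
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:6.8.9   # assumed tag matching Filebeat v6.8.9
          args: ["-c", "/etc/filebeat.yml", "-e"]
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log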

I actually got this to work finally and feel pretty silly that it took so long. All it needed was scoping the autodiscover provider to my namespace:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      namespace: xxxxxxx
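
Setting namespace on the kubernetes provider makes Filebeat list and watch pods only in that namespace, so the 403 at cluster scope goes away and the namespaced Role above is sufficient (no ClusterRole needed). For anyone landing here later, a sketch of how that option combines with the template/condition block from my original config (namespace values are placeholders):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      namespace: xxxxxxx              # scope the pod watch to this namespace only
      templates:
        - condition:
            equals:
              kubernetes.namespace: xxxxxxx
          config:
            - type: container
              containers.ids: '*'
              containers.path: /var/lib/docker/containers

With the provider scoped this way, the condition is arguably redundant, but it is harmless to keep as an extra guard.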
