Hi!
I have an issue with autodiscover in Kubernetes.
I have the following YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-logging
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.modules:
      - module: system
        auth:
          enabled: true
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                or:
                  - equals:
                      kubernetes.namespace: cis
                  - equals:
                      kubernetes.namespace: kube-logging
              config:
                - type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
                  processors:
                    - decode_json_fields:
                        fields: ["message"]
                        target: "json_message"
                        process_array: true
    processors:
      - drop_event:
          when.or:
            - and:
                - regexp:
                    message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: error
            - and:
                - not:
                    regexp:
                      message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: access
      - add_cloud_metadata:
      - add_kubernetes_metadata:
      - add_docker_metadata:
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    setup.kibana:
      host: '${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}'
    setup.dashboards.enabled: true
    setup.template.enabled: true
    setup.ilm:
      policy_file: /etc/indice-lifecycle.json
    setup.template.settings:
      index.number_of_shards: 1
      index.number_of_replicas: 2
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-logging
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "1d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-logging
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      # hostNetwork: true
      # dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.6.1
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: elasticsearch
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            # - name: ELASTICSEARCH_PASSWORD
            #   valueFrom:
            #     secretKeyRef:
            #       name: elasticsearch-pw-elastic
            #       key: password
            - name: KIBANA_HOST
              value: kibana
            - name: KIBANA_PORT
              value: "5601"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            privileged: true  # for openshift
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: filebeat-indice-lifecycle
              mountPath: /etc/indice-lifecycle.json
              readOnly: true
              subPath: indice-lifecycle.json
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: dockersock
              mountPath: /var/run/docker.sock
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: filebeat-indice-lifecycle
          configMap:
            defaultMode: 0600
            name: filebeat-indice-lifecycle
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
        - name: data
          emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  namespace: kube-logging
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: kube-logging
  name: filebeat
  labels:
    app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-logging
  name: filebeat
  labels:
    app: filebeat
---
It works, but Filebeat uses its own pod hostname to discover the Kubernetes node:
2020-05-29T12:09:04.394Z INFO kubernetes/util.go:94 kubernetes: Using pod name filebeat-5zkvv and namespace kube-logging to discover kubernetes node
2020-05-29T12:09:04.402Z INFO kubernetes/util.go:100 kubernetes: Using node ip-10-20-2-27.ec2.internal discovered by in cluster pod node query
This doesn't look good, because the Filebeat pod's hostname will then appear in Elasticsearch. It would be better to see the name of the node the Filebeat pod runs on, rather than the pod's name.
In the official example I found that these parameters should be used (they are commented out in my file above):
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
When I added those fields, Filebeat stopped discovering any Kubernetes entities: with host networking, the pod's hostname becomes the node's hostname, and there is no pod with that name:
2020-05-29T12:03:33.279Z INFO kubernetes/util.go:94 kubernetes: Using pod name ip-10-20-0-115.ec2.internal and namespace kube-logging to discover kubernetes node
2020-05-29T12:03:33.283Z ERROR kubernetes/util.go:97 kubernetes: Querying for pod failed with error: pods "ip-10-20-0-115.ec2.internal" not found
What do I want? I want Filebeat to work correctly with "hostNetwork: true".
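From what I can tell (a sketch on my side, not verified against 7.6.1): the autodiscover provider can be told explicitly which node it runs on instead of letting Filebeat derive it from its own hostname. The DaemonSet above already injects the node name via the downward API as `NODE_NAME`, so it could be passed both to the provider and to the `add_kubernetes_metadata` processor. Note the provider option has been spelled `host` in earlier 7.x releases and `node` in later ones, so check the docs for your exact version:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      # Assumption: scope discovery to the node from the downward API
      # instead of resolving it from the pod hostname. Spelled `host`
      # in early 7.x and `node` in later releases.
      host: ${NODE_NAME}
      # ... existing templates section unchanged ...
processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
```

If this works as I expect, discovery no longer depends on looking up a pod by the container's hostname, so it should behave the same with or without `hostNetwork: true`.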