@Tortoise Thanks for the recommendation. However, your suggestion is more of a workaround: it would probably require a custom container image to force permission changes, and I don't want to do that if I don't have to.
My question was more of a standards question for the Elastic employees. I can't find anything in the documentation that states specifically what permissions Metricbeat requires for deployment. Everything suggests you can run it with whatever permissions you want, but based on my configuration that does not seem to be true. I read in another post that the system module requires more permissions. My confusion (and frustration) is that I can't find any documentation that supports these claims.
Can someone from Elastic chime in here?
Here are my ConfigMaps for configuration and modules. Is there some setting in here that is causing Metricbeat to try and poke around in a root-owned file?
Is there any way around this that does not require me to manually force permission changes?
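For context, the kind of workaround I'm trying to avoid, whether it's baked into a custom image or bolted on at startup, boils down to something like the sketch below. This is not my manifest; the initContainer, image tag, UID, and paths are placeholders just to show what "manually forcing permission changes" would look like.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat                  # placeholder name, not my release
spec:
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      initContainers:
        # Hypothetical init container that chowns the data volume before the
        # beat starts - the "force permission changes" approach I'd rather avoid.
        - name: fix-permissions
          image: busybox:1.36
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/metricbeat/data"]
          volumeMounts:
            - name: data
              mountPath: /usr/share/metricbeat/data
      containers:
        - name: metricbeat
          image: docker.elastic.co/beats/metricbeat:8.12.0   # version assumed
          securityContext:
            runAsUser: 1000         # non-root UID the chown above prepares for
          volumeMounts:
            - name: data
              mountPath: /usr/share/metricbeat/data
      volumes:
        - name: data
          emptyDir: {}

Even if something like that works, it feels like papering over whatever Metricbeat is actually reaching for, which is why I'd rather understand the documented requirement first.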
Configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "metricbeat.fullname" . }}-config
labels:
{{- include "metricbeat.labels" . | nindent 4 }}
data:
metricbeat.yml: |-
logging:
level: info # debug #
metrics:
enabled: true # false
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
# Copied over from the non-k8s metricbeat.yml - these are default values anyway
setup.template.settings:
index:
number_of_shards: 1
codec: best_compression
#_source.enabled: false
# Copied over from the non-k8s metricbeat.yml
# TODO: is this best way to do setup in k8s env? Did this ever work before?
setup.kibana:
host: "${CL02_ENDPOINT}"
protocol: "https"
headers:
Authorization: "ApiKey ${CL02_APIKEY_ENCODED}"
metricbeat.autodiscover:
providers:
- type: kubernetes
scope: cluster
unique: true
templates:
- config:
- module: kubernetes
hosts: [{{ print .Release.Name "-kube-state-metrics:8080"}}]
period: 10s
add_metadata: true
metricsets:
- state_namespace
- state_node
- state_deployment
- state_daemonset
- state_replicaset
- state_pod
- state_container
- state_job
- state_cronjob
- state_resourcequota
- state_statefulset
- state_service
- state_persistentvolume
- state_persistentvolumeclaim
- state_storageclass
# If `https` is used to access `kube-state-metrics`, uncomment following settings:
# bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
# ssl.certificate_authorities:
# - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
- module: kubernetes
metricsets:
- apiserver
hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.certificate_authorities:
- /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
period: 30s
# Uncomment this to get k8s events:
- module: kubernetes
metricsets:
- event
# Need separate k8s provider definition for haproxy module
- type: kubernetes
scope: node
node: ${NODE_NAME}
unique: false
templates:
- condition:
equals:
kubernetes.labels.app.kubernetes.io/name: "haproxy"
config:
- module: haproxy
metricsets: ["info", "stat"]
hosts: ["tcp://${data.kubernetes.pod.ip}:15098"]
period: 10s
processors:
- add_cloud_metadata:
monitoring:
enabled: true
cluster_uuid: ${CL01_UUID}
elasticsearch:
output.elasticsearch:
hosts: '["${CL02_ENDPOINT}/search"]'
api_key: "${CL02_APIKEY}"
Modules
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "metricbeat.fullname" . }}-modules
labels:
{{- include "metricbeat.labels" . | nindent 4 }}
data:
logstash.yml: |-
- module: logstash
metricsets: ["node", "node_stats"]
hosts: ["${LOGSTASH_STATS_SVC}"]
period: 10s
xpack.enabled: true
system.yml: |-
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
#- core
#- diskio
#- socket
processes: ['.*']
process.include_top_n:
by_cpu: 5 # include top 5 processes by CPU
by_memory: 5 # include top 5 processes by memory
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
host: ${NODE_NAME}
hosts: ["https://${NODE_NAME}:10250"]
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: "none"
# If there is a CA bundle that contains the issuer of the certificate used in the Kubelet API,
# remove ssl.verification_mode entry and use the CA, for instance:
#ssl.certificate_authorities:
#- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
- module: kubernetes
metricsets:
- proxy
period: 10s
host: ${NODE_NAME}
hosts: ["localhost:10249"]