How does the filebeat elasticsearch module know where the elasticsearch server is?

How do I tell the filebeat elasticsearch module where my elasticsearch cluster is? I am running in kubernetes and elasticsearch is installed in a different namespace.

Hi @ceastman-ibm,

You need to configure the output section:
Configure filebeat to target elasticsearch

If you are using a service that resides in a different namespace, servicename.namespace will usually work:
Kubernetes Services - DNS
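
For example, assuming the Elasticsearch service is called elasticsearch-master and lives in a logging namespace (both names are just placeholders for this sketch), the output section of filebeat.yml could look like:

```yaml
# filebeat.yml — sketch only; "elasticsearch-master" and "logging" are placeholder
# service and namespace names for a cluster running in another namespace.
output.elasticsearch:
  hosts: ["http://elasticsearch-master.logging:9200"]
  # The fully qualified form also resolves: elasticsearch-master.logging.svc.cluster.local
```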

@pmercado the output is working fine. I thought the filebeat elasticsearch module would capture logs from my elasticsearch pods. Do I need to install filebeat inside my elasticsearch Kubernetes pods themselves?

I guess a better question would be: what's the purpose of installing filebeat into Kubernetes?

Since container logs are located at /var/log/containers, deploying a Filebeat agent as a DaemonSet makes sense.

@pmercado I don't see that the filebeat DaemonSet is mounting /var/log though. Do I need to do something manually so that it mounts it?

Ah, this might be due to kube versions. I believe that with later versions of kube that is no longer the correct directory. I am on kube version 1.14.

AFAIK the kubelet bind-mounts logs to /var/log/pods and /var/log/containers, but you can always use the mapping that works for your installation.

Yes please, you will need to:

  • define a volume that includes the path to the log files
  • mount that volume in the Filebeat container, usually read-only
  • point the logs_path option in Filebeat's container input configuration to the folder where the logs are found (see the sketch below)

You can base your manifests on ours:
https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml
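
For reference, a minimal sketch of just the volume wiring (names, namespace, and image tag are examples, not our official manifest — the file above is the source of truth):

```yaml
# Sketch of the relevant DaemonSet pieces only; names and image tag are examples.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.6.0  # example tag
          volumeMounts:
            # Host log directories mounted read-only so Filebeat can tail them.
            - name: varlogcontainers
              mountPath: /var/log/containers
              readOnly: true
            - name: varlogpods
              mountPath: /var/log/pods
              readOnly: true
      volumes:
        # hostPath volumes exposing the kubelet log directories to the pod.
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
```

and the matching container input in filebeat.yml, pointed at the mounted folder:

```yaml
# filebeat.yml — container input reading the mounted log folder.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
```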

Looks like /var/log/pods is a soft link to /var/data/kubeletlogs, which doesn't exist inside the filebeat container.

Maybe it's a ClusterRole permission thing and filebeat can't see /var/data/kubeletlogs. I checked the IBM fluentd pod and it has access to the logs in those directories.

Not a ClusterRole permissions issue; looks like /var/data has to be mounted.

That was it. I added the two missing volumes/volumeMounts to the DaemonSet and filebeat is parsing the kube logs now.
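
For anyone hitting the same thing, the shape of the change is a hostPath volume plus a mount for the symlink target, roughly like this (names are illustrative and your node layout may differ):

```yaml
# Fragment to merge into the Filebeat DaemonSet spec (illustrative names/paths).
# Mounting /var/data makes the /var/log/pods -> /var/data/kubeletlogs symlink
# resolvable inside the container.
volumeMounts:
  - name: vardata
    mountPath: /var/data
    readOnly: true
volumes:
  - name: vardata
    hostPath:
      path: /var/data
```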

I'll make a PR to the helm charts for filebeat: https://github.com/elastic/helm-charts/pull/294

@pmercado so back to the original question: how does this work: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-elasticsearch.html ? I logged into my filebeat kube pod and there is no /var/log/elasticsearch. Should this be changed to something like /var/log/containers/*elasticsearch*.log?
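
i.e. something along these lines in modules.d/elasticsearch.yml (just my guess at an override — not verified that the module's pipelines will parse container-formatted logs this way):

```yaml
# modules.d/elasticsearch.yml — sketch of overriding the module's default log paths.
# Whether the module's ingest pipelines handle the docker/container log wrapping
# at this path is not verified here.
- module: elasticsearch
  server:
    enabled: true
    var.paths:
      - /var/log/containers/*elasticsearch*.log
```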
