Hello,
I'm trying to deploy Filebeat using the "Official Elastic helm chart for Filebeat" from github.com/elastic/helm-charts... and it seems to only support outputting directly to Elasticsearch, but I need it to output to Logstash. (Although I might be able to get away with outputting to Kafka.)
I can't find any examples anywhere of what to pass via a custom values.yaml file to configure Filebeat to output to Logstash instead of Elasticsearch. I've tried several things, but to no avail.
Where I'm at currently is a custom values file that looks like this:
deployment:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
        - type: tcp
          max_message_size: 10MiB
          host: "localhost:9000"
      output.logstash:
        enabled: true
        hosts: ["<IP-ADDRESS-REDACTED>:<PORT-REDACTED>"]
daemonset:
  # Allows you to add any config files in /usr/share/filebeat
  # such as filebeat.yml for daemonset
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
        - type: docker
          containers.ids:
            - '*'
          processors:
            - add_kubernetes_metadata:
                in_cluster: true
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            include_pod_uid: true
            templates:
              - condition.regexp:
                  kubernetes.container.name: '.+'
                config:
                  - type: docker
                    containers:
                      path: "/var/log/pods/*${data.kubernetes.pod.uid}/"
                      ids:
                        - "${data.kubernetes.container.name}"
      output.logstash:
        enabled: true
        hosts: ["<IP-ADDRESS-REDACTED>:<PORT-REDACTED>"]
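For what it's worth, stripped down to just the output change, my understanding is that the chart simply templates whatever `filebeatConfig` contains into the Filebeat ConfigMap, so a minimal daemonset override might look like the sketch below. This is unverified on my end; the host and port are placeholders, and I'm assuming the chart's default `filebeatConfig` (which ships with `output.elasticsearch`) gets fully replaced by this value rather than merged with it:

```yaml
daemonset:
  filebeatConfig:
    filebeat.yml: |
      # Placeholder input; replace with whatever inputs you actually need
      filebeat.inputs:
        - type: container
          paths:
            - /var/log/containers/*.log
      # The key must be spelled exactly "output.logstash" -
      # a typo here makes Filebeat fall back to no configured output
      output.logstash:
        hosts: ["<LOGSTASH-HOST>:<PORT>"]
```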
Has anyone managed to get this working? Is there an alternative helm chart that I haven't found that can send to Logstash or Kafka instead of directly to Elasticsearch from Filebeat?
It's looking like I'll have to create a custom helm chart from what we already have running in our cluster... and I hate "reinventing the wheel" if I don't have to.
Thanks in advance for any help!