I am trying to use something that should be straightforward:
beat -> logstash -> elasticsearch -> kibana.
A few config parameters and I believed it would be fine... but here's the catch: I just can't get my beat to forward its data to anyone. To eliminate one unknown, I removed Logstash from the equation for now and am trying to send straight to ES... but I just can't get it working. I've tried config files from other people and I still get errors. Could you please help me out?
I am using the defaults for most things, so I would not expect issues. When I run the following YAML file, it complains with: "Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'filebeat.yml')". How can that be? I have no idea, since the file is so simple.
Also, if I use the output.file section below, it just doesn't work: I see the beat's output in /var/lib/filebeat, which is the default, but how come I cannot override it? (Again, that was purely for testing purposes, since my goal is to send to ES or Logstash.)
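From what I've read, a Beat only accepts a single output.* namespace at a time, so my understanding is that a config shaped like this (a minimal sketch, not my actual file, with a made-up hosts value) would produce exactly that "more than one namespace" error:

    # Two output namespaces enabled at once -- my understanding is
    # Filebeat refuses this at startup:
    output.file:
      path: "/usr/share/filebeat/data"
      filename: filebeat
    output.elasticsearch:
      hosts: ["localhost:9200"]

But in my file below only one output is uncommented, which is why I'm lost.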
**** command line used to install the Kubernetes pod *************
helm install --name filebeat stable/filebeat -f filebeat_values.yaml
**************** my filebeat_values.yaml file *******************
image:
  repository: docker.elastic.co/beats/filebeat-oss
  tag: 6.4.0
  pullPolicy: IfNotPresent

config:
  filebeat.config:
    prospectors:
      # Mounted filebeat-prospectors configmap:
      path: ${path.config}/prospectors.d/*.yml
      # Reload prospectors configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
  processors:
    - add_cloud_metadata:
  filebeat.prospectors:
    - type: log
      enabled: true
      paths:
        - /var/log/*.log
        - /var/log/messages
        - /var/log/syslog
    - type: docker
      containers.ids:
        - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
        - drop_event:
            when:
              equals:
                kubernetes.container.name: "filebeat"
  #output.file:
  #  path: "/usr/share/filebeat/data"
  #  filename: filebeat
  #  rotate_every_kb: 10000
  #  number_of_files: 5
  output.elasticsearch:
    # Array of hosts to connect to.
    hosts: ["elastic-elasticsearch-client:9200"]
    #hosts: ["http://10.32.0.23:9200"]
  # When a key contains a period, use this format for setting values on the command line:
  # --set config."http.enabled"=true
  http.enabled: false
  http.port: 5066

# Upload index template to Elasticsearch if Logstash output is enabled
# https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-template.html
# List of Elasticsearch hosts
indexTemplateLoad: []
#  - elasticsearch:9200

# List of beat plugins
plugins: []
#  - kinesis.so

# pass custom command. This is equivalent of Entrypoint in docker
command: []

# pass custom args. This is equivalent of Cmd in docker
args: []

# A list of additional environment variables
extraVars: []

# Add additional volumes and mounts, for example to read other log files on the host
extraVolumes: []
extraVolumeMounts: []
extraInitContainers: []

resources: {}

priorityClassName: ""

nodeSelector: {}

annotations: {}

tolerations: []
#  - operator: Exists

affinity: {}

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
Thanks, everyone!