I have never been more frustrated with a product than I am with the ELK stack. It's needlessly complicated; it almost feels intentional, as if to push people towards Fleet and Elastic Cloud.
I have given up on the Apache integration. Given up on Logstash.
I finally have a dedicated Linux install showing up in Kibana, with only the Filebeat service shipping to an external output.elasticsearch.
My Apache LogFormat is pretty common:
LogFormat "%h %l %u %t \"%r\" %q %>s %b" common
but each line is dumped into Elasticsearch as a single message field.
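As far as I understand, Filebeat on its own does not parse lines; splitting them into fields happens either in the Apache module's ingest pipeline or via a processor in filebeat.yml. A minimal sketch using the dissect processor, matching the LogFormat above (the field names are my own choice, not a standard, and an empty %q leaves a double space that dissect will not match, so this is only a sketch):

```yaml
# filebeat.yml (sketch) – split each access-log line into fields.
# Tokens follow the LogFormat "%h %l %u %t \"%r\" %q %>s %b".
processors:
  - dissect:
      tokenizer: '%{source_ip} %{ident} %{user} [%{time} %{tz}] "%{request}" %{query} %{status} %{bytes}'
      field: "message"
      target_prefix: "apache"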
I tried to run filebeat setup
to load the index template and dashboards, but I get an error saying my Kibana is at localhost, despite my config pointing to the Kibana host by IP, with no localhost anywhere in the config.
Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: fail to execute the HTTP GET request: Get http://localhost:5601/api/status: dial tcp [::1]:5601: connect: connection refused. Response: .
Exiting: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: fail to execute the HTTP GET request: Get http://localhost:5601/api/status:
I get around this by declaring the host:
filebeat setup -e -E setup.kibana.host=10.0.0.5:5601
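To avoid passing -E on every run, the same host can presumably be set permanently in filebeat.yml (assuming 10.0.0.5:5601 is the Kibana address from the command above):

```yaml
# filebeat.yml – tell `filebeat setup` where Kibana lives,
# so it stops defaulting to localhost:5601.
setup.kibana:
  host: "10.0.0.5:5601"
```

If the error persists after this, the running Filebeat may be reading a different config file than the one being edited; the -e flag prints which config it loaded.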
Loaded index template
Loaded machine learning job configurations
Still, the ssl_access_log lines show up only as a single message field.
I tried enabling the Apache module, but it added nothing.
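One possible reason the module adds nothing: it only parses files it is actually reading, and ssl_access_log may sit outside the module's default paths. A sketch of the module config, assuming a Debian-style layout (the paths are assumptions; point them at the real logs):

```yaml
# /etc/filebeat/modules.d/apache.yml
# Enabled via: filebeat modules enable apache
- module: apache
  access:
    enabled: true
    # var.paths overrides the module's default glob; adjust to taste.
    var.paths:
      - /var/log/apache2/access.log*
      - /var/log/apache2/ssl_access_log*
  error:
    enabled: true
```

Note also that the same files should not be listed under a plain filebeat.inputs log input at the same time, or the module will not pick them up.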
I have an index called Weblog that has the patterns brought over from another setup.
I want to change Filebeat to point to that index.
#Array of hosts to connect to.
#TRYING TO SET A CUSTOM INDEX
ERROR instance/beat.go:906 Exiting: can not convert 'object' into 'string' accessing 'output.elasticsearch.index' (source:'/etc/filebeat/filebeat.yml')
Exiting: can not convert 'object' into 'string' accessing 'output.elasticsearch.index' (source:'/etc/filebeat/filebeat.yml')
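That error usually means output.elasticsearch.index was given a YAML mapping (indented sub-keys) where Filebeat expects a plain string. Overriding the index also requires overriding the template name and pattern, and (on 7.x) disabling ILM, or Filebeat keeps writing to its ILM alias. A sketch, assuming the Elasticsearch host and noting that Elasticsearch index names must be lowercase, so "Weblog" would need to become "weblog":

```yaml
# filebeat.yml – send events to a custom index.
output.elasticsearch:
  hosts: ["10.0.0.5:9200"]   # assumption: same host as Kibana
  # Must be a single string, not a nested object:
  index: "weblog-%{[agent.version]}-%{+yyyy.MM.dd}"

# Required whenever output.elasticsearch.index is overridden:
setup.template.name: "weblog"
setup.template.pattern: "weblog-*"
setup.ilm.enabled: false
```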
How do I point Filebeat to a specific index?