I've been trying to set up logging for the Stack Monitoring page, but it just won't seem to work:
I enabled the elasticsearch module in filebeat with this config:
# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.10/filebeat-module-elasticsearch.html
- module: elasticsearch
  # Server log
  server:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/*.log          # Plain text logs
      - /var/log/elasticsearch/*_server.json  # JSON logs

  gc:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/gc.log.[0-9]*
      - /var/log/elasticsearch/gc.log

  audit:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/*_access.log  # Plain text logs
      - /var/log/elasticsearch/*_audit.json  # JSON logs

  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/*_index_search_slowlog.log    # Plain text logs
      - /var/log/elasticsearch/*_index_indexing_slowlog.log  # Plain text logs
      - /var/log/elasticsearch/*_index_search_slowlog.json   # JSON logs
      - /var/log/elasticsearch/*_index_indexing_slowlog.json # JSON logs

  deprecation:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/*_deprecation.log   # Plain text logs
      - /var/log/elasticsearch/*_deprecation.json  # JSON logs
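(For anyone following along: enabling the module itself is the standard one-liner below; the config shown above then lives in modules.d/elasticsearch.yml.)

filebeat modules enable elasticsearch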
I created an index called "filebeat-000001" (with an ILM policy and rollover alias), and I can see that Filebeat sends the log files present in the /var/log/elasticsearch/ folder to it, which I can verify with:
GET filebeat-000001/_search
{
  "query": {
    "match_all": {}
  }
}
I wasn't sure if I needed an index template, but I created an empty template that matches filebeat-0*, with everything else left blank/default and dynamic mapping on.
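In Dev Tools terms, that "empty" template amounts to roughly this (the template name here is just a placeholder):

PUT _template/filebeat-empty
{
  "index_patterns": ["filebeat-0*"]
}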
What could be wrong? The linked document doesn't really give me any more clues.
Hi,
I just tried running the command, and I get this:
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://REDACTED:9200: 504 Gateway Timeout
This is not the machine with Kibana on it. Is that the issue? That one doesn't have Filebeat, though.
The setup command should be run on the same host, with the same module config, that Filebeat will run with from then on. That error message sounds like it's unable to reach Elasticsearch through a reverse proxy.
If you configured the setup.kibana.host setting as well, then it'll also try to reach the Kibana host to install dashboards, but that's optional (see steps 5 and 6 in the docs).
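A quick way to narrow down where it fails is to run Filebeat's built-in self-tests on that same host; the first checks the config, the second tries to connect to the configured Elasticsearch output:

filebeat test config
filebeat test output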
Small update: it turns out "REDACTED-HOST-1:5601" was decommissioned and not up anymore. After changing that to the current Kibana host I now see this error:
I still get the other error running filebeat setup -e
Thanks for providing the config file, it looks fine so far.
The HTTP response code 504 suggests that you're connecting to Elasticsearch via a proxy. Can you verify that there's no unexpected filtering or routing happening on that level? If not, then the requests sent by setup should leave traces in the Elasticsearch log (e.g. that it's creating index templates and pipelines). Is that the case?
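Something along these lines in Dev Tools should also show whether anything from setup actually made it into the cluster (Filebeat 7.x loads a legacy index template, hence _cat/templates):

GET _cat/templates/filebeat*?v
GET _ingest/pipeline/filebeat-*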
I'll have to verify whether it's using a proxy or not, but it's very possible. But should it really be going through one if it's just trying to connect to itself? Or is it actually trying to connect to Kibana over a possibly closed port?
I think I at least managed to get it to detect that Filebeat is sending logs to Kibana after that earlier change, since I now have a new error. But how should I go about troubleshooting the missing JSON logs? I don't see any in the default folder.
@fus80677 ES stores all its logs in the location specified by the path.logs property in its configuration (elasticsearch.yml). You should mount this volume into your Filebeat so that Filebeat can read those logs.
Since you are using the elasticsearch module in Filebeat, I'm assuming the module configuration file (also named elasticsearch.yml) is stored in the ${filebeat_home}/config/modules.d directory.
From your last post, it seems the ES logs are stored in different locations; you need to specify the correct paths for each log type to ensure the logs are parsed correctly.
PS: Filebeat doesn't connect to Kibana to send logs; logs are always ingested into Elasticsearch, and Kibana loads them from Elasticsearch. The only point where Beats interact with Kibana is to set up dashboards and the like.
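For example, if elasticsearch.yml on the ES host points path.logs somewhere custom, the module paths have to point at that same directory (the path below is made up purely for illustration):

# elasticsearch.yml on the Elasticsearch host
path.logs: /data/elasticsearch/logs

# matching section in modules.d/elasticsearch.yml on the Filebeat host
- module: elasticsearch
  server:
    enabled: true
    var.paths:
      - /data/elasticsearch/logs/*.log          # Plain text logs
      - /data/elasticsearch/logs/*_server.json  # JSON logs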
Ah, now I see that it has a custom log path in elasticsearch.yml!
But when I correct these paths, I'm back at the first error until I change them back to the default paths? I can see log files in those folders that it should find.
Perhaps the issue is the template/pipeline then? I have these pipelines at least:
And the filebeat log says it found and writes to "filebeat-7.10.1", which is an alias I set up for the actual index names (filebeat-000001 and up) so I can roll them over.
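For reference, this is roughly how the alias wiring and rollover state can be checked in Dev Tools, in case someone wants to rule that part out:

GET _alias/filebeat-7.10.1
GET filebeat-*/_ilm/explain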
I let it sit for a few days to see if a new index would help, but nope...
Are there logs for this specific component somewhere?
I also noticed that the documentation page links to this. I don't have any of these fields in the indices or the template; could that be the issue?
EDIT: I ran filebeat export template > filebeat.template.json and then copied its contents into the JSON field in the template editor in Kibana. I'll check what happens when it rolls over next time and (hopefully) applies it.
And that was it! After applying the (default?) settings and mappings from that file to the new template, the logging feature finally started working once it rolled over to a new index with them!
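To sum it up for anyone landing here later: the empty template seems to have been the problem, since it lacked Filebeat's default settings and mappings. The rough fix, given my custom filebeat-0* index naming:

# on the Filebeat host: dump the default template that ships with this Filebeat version
filebeat export template > filebeat.template.json

# then copy the settings and mappings from that file into the template that matches
# your indices (filebeat-0* in my case) and wait for the next rollover to pick them up

If you keep the default filebeat-7.10.1-* index naming instead, I believe filebeat setup --index-management loads the same template (and the ILM policy) for you.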