I am running Elasticsearch (ECK) on Oracle Kubernetes Engine. The version in use is 7.14.1. The stack consists of: Elasticsearch, Kibana, Filebeat+Heartbeat+Metricbeat.
Each of these services runs correctly and was configured by following Elastic Cloud on Kubernetes [2.10] | Elastic.
Currently, we can search and inspect logs in Kibana and access the dashboard without issue. However, when opening Stack Monitoring we get:
When inspecting the deployment we are getting healthy (green) pods:
Looking for similar issues online, I came across the following material, which did not turn out to be useful in our case:
Do you have any other suggestions? The log indices seem to be written correctly and we can inspect them easily in Kibana Dev Tools. How can I connect them to Stack Monitoring? Thanks!
Is filebeat running somewhere as well (the screenshot only shows heartbeat and metricbeat)? You'll need that in order to ingest the ES logs into the expected filebeat-* index pattern.
Checking which indices are available (for example via _cat/indices) could be a useful next step, as well as checking the Filebeat process config and logs for any errors.
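For example, from Kibana Dev Tools (narrowing to filebeat-* is just an assumption to keep the output short):

```
GET _cat/indices/filebeat-*?v
```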
Thanks @papers_hive ! I'm most familiar with using the elasticsearch filebeat module directly and I don't have much experience with hinted k8s configuration.
Do you know if the containers in question are hinted in a way that would activate that module for the elasticsearch container logs?
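For reference, with hints-based autodiscover that would usually mean an annotation on the Elasticsearch pods along these lines. This is only a sketch: it assumes hints.enabled is set in your Filebeat autodiscover config, and the exact podTemplate location depends on your ECK manifest.

```yaml
# Hypothetical excerpt from the Elasticsearch resource's podTemplate.
# The hint tells Filebeat to apply its elasticsearch module to these container logs.
podTemplate:
  metadata:
    annotations:
      co.elastic.logs/module: "elasticsearch"
```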
Do the docs that you found also contain an elasticsearch.cluster.uuid field?
It seems like those are the only two fields required by that getLogs function to show logs in the UI.
By rerunning and inspecting the previous query, I saw that every record has the cluster.uuid in the form:
I do not know exactly how the containers ship the logs to Elasticsearch. Since both Heartbeat and Metricbeat worked automatically, I assumed the same would hold for Filebeat. I can see that it generates log files and that it can see the Elasticsearch cluster and its uuid, yet somehow these logs are not shipped to the right place for Kibana to show them.
It's hard to tell from just a screenshot of the doc, but I checked the query on an 8.3 deployment and it looks like it's using the same fields I mentioned above.
If this works, then the UI should be able to show you the same logs as long as you're viewing the same cluster/timerange as the documents.
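If it helps, you can approximate that check in Dev Tools with something like the following. This is only a sketch: the UI's real query has more clauses, and the uuid value is a placeholder you'd replace with your own.

```
GET filebeat-*/_search
{
  "size": 1,
  "query": {
    "bool": {
      "filter": [
        { "term": { "elasticsearch.cluster.uuid": "<your-cluster-uuid>" } },
        { "range": { "@timestamp": { "gte": "now-1h" } } }
      ]
    }
  }
}
```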
No problem and thanks for the response! That sure looks like it should match the query.
And you're definitely viewing the same cluster uuid shown in elasticsearch.cluster.uuid, right?
Have you checked your Kibana logs or browser console for any errors?
I wonder if there might be some permissions problem causing trouble. Technically the query run by the UI is probably POST *:filebeat-*,filebeat-*/_search, so maybe try that to see if there might be some CCS execution problem as well.
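In Dev Tools that could look something like this (a minimal sketch; if CCS is the problem, the remote *:filebeat-* pattern is the part that would fail):

```
POST *:filebeat-*,filebeat-*/_search
{
  "size": 1,
  "query": { "match_all": {} }
}
```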
Yeah, that looks reasonable too. If you can upgrade to at least 7.15, you could try setting monitoring.ui.debug_mode: true to see what queries the UI is executing exactly.
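That is a kibana.yml setting, i.e. roughly:

```yaml
# kibana.yml (the setting is only available from 7.15 onwards)
monitoring.ui.debug_mode: true
```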