GET /_cat/indices/.monitoring*?v returns no values


(vamshi) #1

The monitoring tab in Kibana throws a "No monitoring data found" error. The monitoring index is also not found: GET /_cat/indices/.monitoring*?v returns an empty table with no values. Can you please help me resolve this?


(kulkarni) #2

Hi,

I have a couple of follow-up questions. Is your monitoring data in the same cluster that Kibana is connected to?

If it is, that means the cluster configured as elasticsearch.url in kibana.yml is the cluster with the monitoring data. But, as you say in the title, the .monitoring-* indices return no values.
If your monitoring data is in a different cluster, give the address of the dedicated monitoring cluster as xpack.monitoring.elasticsearch.url in kibana.yml. Monitoring data ends up in a dedicated monitoring cluster when the ES nodes are configured to export it there:
https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-settings.html#http-exporter-settings
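For the second case, the Kibana side would look something like this — a minimal sketch of kibana.yml, where the host and port are placeholders for your own monitoring cluster:

```yaml
# kibana.yml -- point the monitoring UI at the dedicated monitoring cluster.
# "monitoring-cluster" and 9200 are placeholders for your own host and port.
xpack.monitoring.elasticsearch.url: "http://monitoring-cluster:9200"
```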

I'm also wondering whether the user you are logging in with has the correct permissions to access the data.

Hope this helps,

Thanks
Rashmi


(vamshi) #3

Hi,

The monitoring data is in the same cluster (3 master nodes, 5 data nodes and a Kibana node) that Kibana is connected to. I gave the Elasticsearch node IP and port number for xpack.monitoring.elasticsearch.url. Yes, the user I'm using has permissions to access the data.


(kulkarni) #4

Can you check your Elasticsearch logs to see if there is anything pertaining to this? I would also double-check that your settings aren't giving false positives:

  1. Maybe the xpack.monitoring.elasticsearch.url is just a misconfiguration? Would need more info.

  2. Check some APIs just to make sure there are no exporters:

     GET _cluster/settings?include_defaults
     GET _nodes/stats

  3. Maybe there is a setting in ES that disables monitoring.
     Even if there were, though, you'd probably still see .monitoring-kibana-* indices, unless you have turned off Kibana collection in kibana.yml.

These settings can turn off Kibana stats collection for monitoring:

xpack.monitoring.kibana.collection.enabled
xpack.monitoring.kibana.collection.interval
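For reference, these live in kibana.yml. A sketch with what I believe are the defaults (10000 ms is the default interval, so you would only set these explicitly to change or disable collection):

```yaml
xpack.monitoring.kibana.collection.enabled: true   # set to false to stop Kibana self-reporting
xpack.monitoring.kibana.collection.interval: 10000 # collection interval in milliseconds
```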

More info would help

Thanks
Rashmi


(vamshi) #5

Hi Rashmi,

Thank you for your help. I am new to Elasticsearch.

  1. xpack.monitoring.elasticsearch.url has the value of my Elasticsearch IP and its port number.
  2. How can I determine if there are no exporters?
    GET _nodes/stats returns the following for all 3 master nodes:

    {
      "_nodes": {
        "total": 8,
        "successful": 8,
        "failed": 0
      },
      "indices": {
        "docs": {
          "count": 0,
          "deleted": 0
        },
        "store": {
          "size_in_bytes": 0
        },
        "indexing": {
          "index_total": 0,
          "index_time_in_millis": 0,
          "index_current": 0,
          "index_failed": 0,
          "delete_total": 0,
          "delete_time_in_millis": 0,
          "delete_current": 0,
          "noop_update_total": 0,
          "is_throttled": false,
          "throttle_time_in_millis": 0
        },

For the data nodes, it returns the following:

"indices": {
  "docs": {
    "count": 809126519,
    "deleted": 111653837
  },
  "store": {
    "size_in_bytes": 1279537932975
  },
  "indexing": {
    "index_total": 34,
    "index_time_in_millis": 142,
    "index_current": 0,
    "index_failed": 0,
    "delete_total": 2,
    "delete_time_in_millis": 3,
    "delete_current": 0,
    "noop_update_total": 0,
    "is_throttled": false,
    "throttle_time_in_millis": 0
  },

  3. In my kibana.yml file, I have only the following configuration; the rest is commented out:
    server.name: " "
    elasticsearch.url: " "
    xpack.monitoring.elasticsearch.url: " "
    xpack.security.enabled: false
    xpack.graph.enabled: false
    xpack.watcher.enabled: false
    xpack.monitoring.enabled: true
    logging.dest: /var/log/kibana/kibana.log

(Chris Earle) #7

Hi @vamshi,

server.name: " "
elasticsearch.url: " "
xpack.monitoring.elasticsearch.url: " "

I assume that you took out the actual values from there? If the cluster that elasticsearch.url talks to is the same as the one in xpack.monitoring.elasticsearch.url, then you do not need to set the second value (but it doesn't hurt anything to set it).

How can I determine if there are no exporters?

You can set the exporter(s) anywhere that you can set Elasticsearch settings: elasticsearch.yml, command line arguments, and the cluster's _cluster/settings. By default, we use the local exporter, which keeps the data in the same cluster.

To find it, you only need to check two places:

GET /_cluster/settings?pretty

If it's not there (the cluster settings take priority over the next place), then you need to check:

GET /_nodes/settings?pretty

This will show you non-secure settings that were set via the command line or elasticsearch.yml.
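If scanning the full settings output by eye is tedious, a short script can flatten the nested settings and pick out any exporter keys. A minimal sketch in Python — the file name nodes_settings.json and the helper find_exporters are my own, not part of any Elastic tooling; it assumes you saved the response of GET /_nodes/settings to that file:

```python
def flatten(d, prefix=""):
    """Recursively flatten a nested settings dict into dotted keys."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

def find_exporters(nodes_settings):
    """Return any xpack.monitoring.exporters.* settings, keyed by node ID."""
    hits = {}
    for node_id, node in nodes_settings.get("nodes", {}).items():
        flat = flatten(node.get("settings", {}))
        found = {k: v for k, v in flat.items()
                 if k.startswith("xpack.monitoring.exporters")}
        if found:
            hits[node_id] = found
    return hits

# Usage (against a saved GET /_nodes/settings response):
# import json
# with open("nodes_settings.json") as f:
#     print(find_exporters(json.load(f)))
```

An empty result means no exporter was configured per node, so the default local exporter applies.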

After checking for exporters, it's probably worth double-checking that xpack.monitoring.enabled is either set to true or not set at all within the Elasticsearch cluster (also visible in the second command above). Given that you cannot find the data at all, I am wondering if X-Pack monitoring is simply disabled on the Elasticsearch side (or X-Pack isn't even installed there).

Hope that helps,
Chris


(vamshi) #8

Hi Chris,

Thank you for taking time to resolve my issue.

elasticsearch.url and xpack.monitoring.elasticsearch.url both have the same value, which is my Elasticsearch URL and its port number.

`GET /_cluster/settings?pretty`

returns:
{
  "persistent": {
    "cluster": {
      "routing": {
        "allocation": {
          "enable": "all"
        }
      }
    }
  },
  "transient": {}
}

GET /_nodes/settings?pretty

returns the node configurations. I don't see any exporters there. My kibana.yml and elasticsearch.yml also have no exporter settings configured.

xpack.monitoring.enabled

is set to true in both .yml files. X-Pack is installed on both the Elasticsearch master nodes and Kibana.
I see that #action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml* is commented out. Should it be uncommented for monitoring? We don't need security, watches, triggered_watches or watcher-history. Please let me know.

Thank you,
Vamshi


(Chris Earle) #9

I see that #action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml* is commented out.

As long as there is not another place that this is set (e.g., further up/down in the file), then this is fine. If you are using action.auto_create_index somewhere and .monitoring* is excluded, then that would explain this issue.
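If you do restrict index auto-creation at some point, the monitoring pattern has to stay on the allow list. A sketch for elasticsearch.yml, reusing the patterns from the commented default (drop the entries for features you don't use, but keep .monitoring*):

```yaml
# elasticsearch.yml -- .monitoring* must be included, or monitoring indices
# cannot be created automatically
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*
```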

Have you checked the logs for Elasticsearch to see if there are any errors being logged by monitoring?

Let me know,
Chris


(vamshi) #10

I'm not using action.auto_create_index anywhere else. It is just commented out in my elasticsearch.yml file.

Have you checked the logs for Elasticsearch to see if there are any errors being logged by monitoring?

Can you please let me know how I can look for errors in the Elasticsearch logs? All I see is several zipped log files in the /var/log/elasticsearch directory.

Thank you,
Vamshi


(Chris Earle) #11

The currently active log shouldn't be zipped. Look for something named after your cluster that ends with .log.

Hope that helps,
Chris


(vamshi) #12

Sure! Thanks! Can you please let me know what errors look like in that log file?

Thank you,
Vamshi


(Chris Earle) #13

If you search for "monitoring", then you should hopefully find the right logs regardless of ES version. Check your elected master node's logs. You can easily determine which node is the elected master by running:

GET /_cat/master?v

Let me know,
Chris


(vamshi) #14

This is how my log file looks.

GET /_cat/master?v

shows that my master02 node is the elected master. But I got these logs by SSHing into my master01 node (we have a cluster of 3 master and 5 data nodes).


(Chris Earle) #15

Buried in that stack trace, it looks like you have disabled node.ingest on all of your nodes, which is not a supported configuration as of X-Pack monitoring 6.0 (although the setting to get around it technically still exists, we do not suggest using it).

Enable node.ingest: true on at least one of your nodes and this should resolve the issue.
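Concretely, that is a one-line change in elasticsearch.yml on whichever node you pick, followed by a restart of that node:

```yaml
# elasticsearch.yml on at least one node -- X-Pack monitoring 6.0 needs an ingest node
node.ingest: true
```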

Hope that helps,
Chris


(vamshi) #16

Thank you, Chris. It worked finally!


(Chris Earle) #17

Excellent! Glad we could help. For what it's worth, we do recommend running an external monitoring cluster and using the http exporter. That way you isolate as much of the monitoring workload as possible on a separate cluster (it's bad practice for a cluster to monitor itself: if it has problems, the monitoring data has them too).
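On the production cluster's side, shipping the data out is done with an http exporter in elasticsearch.yml — a sketch where the exporter name remote_monitor and the host are placeholders:

```yaml
# elasticsearch.yml -- send monitoring data to a dedicated monitoring cluster
xpack.monitoring.exporters:
  remote_monitor:
    type: http
    host: ["http://monitoring-cluster:9200"]
```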

Good luck,
Chris


(vamshi) #18

Will do that Chris. Thank you for the help!


(system) #19

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.