Stack monitoring cannot see Indices

Hi,

I am on ES 7.5 and using Metricbeat for monitoring. Under my cluster > Elasticsearch > Indices I can see the number of indices, memory etc., but the index list itself is empty. I have changed the date range to several days and still see nothing; if I flip to view system indices, I do see those.

Any ideas?

Thanks
Phil

Hi @probson,

Are you using the elasticsearch or elasticsearch-xpack Metricbeat module? Either way, can you share that config (found in the modules.d folder, like modules.d/elasticsearch-xpack.yml)?

I am just using the default settings:

- module: elasticsearch
  metricsets:
    - ccr
    - cluster_stats
    - enrich
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
  period: 10s

Thanks
Phil

Hmm

Let's see what the monitoring data shows.

Can you run the following query against the monitoring cluster and return the results?

POST .monitoring-es-*/_search
{
  "size": 0, 
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "type": "index_stats"
          }
        }
      ]
    }
  },
  "aggs": {
    "clusters": {
      "terms": {
        "field": "cluster_uuid",
        "size": 20
      },
      "aggs": {
        "indices": {
          "terms": {
            "field": "index_stats.index",
            "size": 500
          }
        }
      }
    }
  }
}

Hi,

Sorry for the delay, below as requested:

{
"took" : 216,
"timed_out" : false,
"_shards" : {
"total" : 7,
"successful" : 7,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 10000,
"relation" : "gte"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"clusters" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "D56cfJfWScKYkVmhg9yRPQ",
"doc_count" : 1447661,
"indices" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : ".monitoring-alerts-7",
"doc_count" : 57131
},
{
"key" : ".monitoring-es-7-mb-2020.01.11",
"doc_count" : 57130
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.11",
"doc_count" : 57130
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.11",
"doc_count" : 57130
},
{
"key" : ".monitoring-es-7-mb-2020.01.10",
"doc_count" : 52200
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.10",
"doc_count" : 52200
},
{
"key" : ".monitoring-logstash-7-2020.01.10",
"doc_count" : 52200
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.10",
"doc_count" : 52200
},
{
"key" : ".monitoring-es-7-mb-2020.01.12",
"doc_count" : 48490
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.12",
"doc_count" : 48490
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.12",
"doc_count" : 48490
},
{
"key" : ".monitoring-es-7-mb-2020.01.09",
"doc_count" : 43560
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.09",
"doc_count" : 43560
},
{
"key" : ".monitoring-logstash-7-2020.01.09",
"doc_count" : 43560
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.09",
"doc_count" : 43560
},
{
"key" : ".monitoring-es-7-mb-2020.01.13",
"doc_count" : 39850
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.13",
"doc_count" : 39850
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.13",
"doc_count" : 39850
},
{
"key" : ".monitoring-es-7-mb-2020.01.08",
"doc_count" : 34920
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.08",
"doc_count" : 34920
},
{
"key" : ".monitoring-logstash-7-2020.01.08",
"doc_count" : 34920
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.08",
"doc_count" : 34920
},
{
"key" : ".monitoring-es-7-mb-2020.01.14",
"doc_count" : 31210
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.14",
"doc_count" : 31210
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.14",
"doc_count" : 31210
},
{
"key" : ".monitoring-es-7-mb-2020.01.07",
"doc_count" : 26280
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.07",
"doc_count" : 26280
},
{
"key" : ".monitoring-logstash-7-2020.01.06",
"doc_count" : 26280
},
{
"key" : ".monitoring-logstash-7-2020.01.07",
"doc_count" : 26280
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.07",
"doc_count" : 26280
},
{
"key" : ".monitoring-es-7-mb-2020.01.15",
"doc_count" : 22570
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.15",
"doc_count" : 22570
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.15",
"doc_count" : 22570
},
{
"key" : ".monitoring-es-7-mb-2020.01.06",
"doc_count" : 17640
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.06",
"doc_count" : 17640
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.06",
"doc_count" : 17640
},
{
"key" : ".monitoring-es-7-mb-2020.01.16",
"doc_count" : 13930
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.16",
"doc_count" : 13930
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.16",
"doc_count" : 13930
},
{
"key" : ".monitoring-es-7-mb-2020.01.05",
"doc_count" : 9000
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.05",
"doc_count" : 9000
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.05",
"doc_count" : 9000
},
{
"key" : ".monitoring-es-7-mb-2020.01.17",
"doc_count" : 5290
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.17",
"doc_count" : 5290
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.17",
"doc_count" : 5290
},
{
"key" : ".monitoring-es-7-mb-2020.01.04",
"doc_count" : 360
},
{
"key" : ".monitoring-kibana-7-mb-2020.01.04",
"doc_count" : 360
},
{
"key" : ".monitoring-logstash-7-mb-2020.01.04",
"doc_count" : 360
}
]
}
}
]
}
}
}

Something I just thought about: the X-Pack monitoring is using an account with the role `remote_monitoring_agent`.

Would that have any effect on this?

That shouldn't matter.

D56cfJfWScKYkVmhg9yRPQ - do you know if that is the cluster you're looking at in the monitoring UI?
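If you're not sure, you can map the cluster_uuid to a cluster name with a query like this (this assumes the cluster_stats documents in .monitoring-es-* carry a cluster_name field, which they should on 7.x):

POST .monitoring-es-*/_search
{
  "size": 1,
  "_source": [ "cluster_uuid", "cluster_name" ],
  "query": {
    "bool": {
      "filter": [
        { "term": { "type": "cluster_stats" } },
        { "term": { "cluster_uuid": "D56cfJfWScKYkVmhg9yRPQ" } }
      ]
    }
  }
}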

Can you possibly share a screenshot as well so I can make sure I'm on the right page?

Hi,

Hopefully this is what you needed. The cluster name is soc-cluster; there are no other clusters.

Thanks
Phil

Are you using a single ES cluster, or do you have multiple?

The results from your query indicate you only have system indices on that ES cluster.

Hi,

I only have 1 cluster.

Kind regards
Phil

Okay great.

If you run GET _cat/indices, what do you see?

Hi,

With that I can see all of my indices, not just the .monitoring ones.

Thanks
Phil

If you run GET _stats/docs,fielddata,indexing,merge,search,segments,store,refresh,query_cache,request_cache, do you see your indices in that list?

Hi, I do indeed.

Kind regards
Phil

Very strange.

I'm assuming you've already checked this, but are there any errors in the Metricbeat logs? Maybe share the startup log?

Hi Chris,

This morning I added the role remote_monitoring_collector to the account the X-Pack monitoring is running as. I just checked now (your post reminded me to check) and the indices are appearing. I believe this is an extract from the guide I originally followed:

https://www.elastic.co/guide/en/elasticsearch/reference/current/esms.html

Create a user on the production cluster that has the remote_monitoring_collector built-in role. Alternatively, use the remote_monitoring_user built-in user.
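For anyone else who hits this, such a user can be created through the security API with something like the following (the username and password here are placeholders, not the ones I used):

POST /_security/user/metricbeat_monitoring
{
  "password" : "changeme",
  "roles" : [ "remote_monitoring_collector", "remote_monitoring_agent" ]
}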

Sorry that I did not try this earlier on.

Thanks
Phil

I'm glad you were able to resolve it!

Is this the same account you configured in your Metricbeat stack modules (e.g. elasticsearch-xpack.yml)?

Hi,

It is indeed.

Hmm interesting. If it were a permission issue, I'd expect to see something in the metricbeat server log file about it, but nothing was there?

Hi,

The Metricbeat logs have very little in them, no errors.
This is the only error I can see in the service log:

Error fetching data for metricset elasticsearch.enrich: HTTP error 403 in : 403 Forbidden

Thanks
Phil