Setup mode is not available You do not have the necessary permissions to do this

I have installed an Elasticsearch cluster of 3 master and 2 data nodes, plus a single-node Elasticsearch monitoring cluster. I am shipping the Elasticsearch metrics with Metricbeat, and I can see the metrics when I check the Metricbeat index, but I can't see my cluster in the Stack Monitoring section. When I try to enter setup mode, it throws an error:

"Setup mode is not available You do not have the necessary permissions to do this."

I have disabled X-Pack security, but I have had no luck with Stack Monitoring. Any steps to debug the issue? I don't see anything in the logs except a warning that I have to enable X-Pack security.

Hi @Yogesh_Kumar1,

What version of the stack are you using?

For your user, can you share the results of:

GET _security/user/elastic

Then, based on the response above, can you share each role definition associated with that user?

For example:

GET _security/role/superuser
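
If you are not using Kibana Dev Tools, the equivalent curl commands would be (a sketch, assuming security is enabled and Elasticsearch is reachable on localhost:9200):

curl -u elastic -XGET 'http://localhost:9200/_security/user/elastic?pretty'
curl -u elastic -XGET 'http://localhost:9200/_security/role/superuser?pretty'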

Thanks!

I am not using authentication anywhere. Elasticsearch, Kibana, and Metricbeat are all version 7.8.1.

Elasticsearch cluster (master) node config:

cluster.name: es78-cluster
node.name: es-78-master-2
node.data: false
node.master: true
node.ingest: true
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
network.host: X.X.X.X2
http.port: 9200
transport.profiles.default.port: 9300
discovery.seed_hosts: ["X.X.X.X1", "X.X.X.X2", "X.X.X.X3"]
cluster.initial_master_nodes: ["1X.X.X.X1:9300", "X.X.X.X2:9300"]
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*
indices.breaker.fielddata.limit: 60%
thread_pool:
  write:
    queue_size: 200
xpack.monitoring.collection.enabled: true
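
Whether the collection setting took effect can be verified via the cluster settings API (a quick check, assuming the node is reachable on this host):

curl -XGET 'http://X.X.X.X2:9200/_cluster/settings?include_defaults=true&filter_path=**.monitoring.collection.enabled&pretty'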

Metricbeat config on the same node:

$ cat /etc/metricbeat/metricbeat.yml

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
output.elasticsearch:
  hosts: ["http://Y.Y.Y.Y:9200"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

$ cat /etc/metricbeat/modules.d/elasticsearch-xpack.yml

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://X.X.X.X2:9200"]
  metricsets:
    - ccr
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
    - enrich
    - pending_tasks
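
(As a sanity check, the module configuration can also be validated with Metricbeat's own test commands; a sketch, assuming a package install with the default config path:)

metricbeat test config
metricbeat test modules elasticsearch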

Configuration of monitoring node

$ cat /etc/elasticsearch/elasticsearch.yml
node.name: es7-monitering-node
node.data: true
node.master: true
node.ingest: true
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
network.host: Y.Y.Y.Y
http.port: 9200
transport.profiles.default.port: 9300
cluster.initial_master_nodes: ["es7-monitering-node"]
indices.breaker.fielddata.limit: 60%
thread_pool:
  write:
    queue_size: 200
xpack.security.enabled: false
xpack.monitoring.collection.enabled: false

$ cat /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://Y.Y.Y.Y:9200"]
kibana.index: ".kibana"
xpack.monitoring.kibana.collection.enabled: false

Thanks @Yogesh_Kumar1

It sounds like this bug to me: https://github.com/elastic/kibana/issues/73740

Are you able to try upgrading to 7.9 to see if it resolves your issue?

Thanks

Let me try.

I upgraded the monitoring cluster's Elasticsearch and Kibana to 7.9 while the data nodes are still on 7.8.1. Now, after the upgrade, the error is gone, but Stack Monitoring is trying to fetch data from the monitoring cluster instead of the production cluster. Is there anything I am missing?

Can you share your kibana.yml?

cat /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://Y.Y.Y.Y:9200"]
kibana.index: ".kibana"

The Elasticsearch host mentioned here is the monitoring cluster.

This issue has also been reported by another user, but there is no resolution on it yet.

On http://Y.Y.Y.Y:9200, are you seeing .monitoring-* indices? When using the Stack Monitoring Metricbeat modules with xpack.enabled: true, Metricbeat will not index into metricbeat-*, but rather into .monitoring-* indices.
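
A quick way to check is the cat indices API (assuming the monitoring cluster is reachable on that host):

curl -XGET 'http://Y.Y.Y.Y:9200/_cat/indices/.monitoring-*?v'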

This config looks right in that you are pointing Kibana to the Y.Y.Y.Y cluster, which is where Metricbeat is outputting.

What gives you this impression?

Yup, I can see the .monitoring-* indices in the monitoring cluster as well as in the data cluster.

$ curl -XGET http://Y.Y.Y.Y:9200/_cat/indices
green open .kibana-event-log-7.8.0-000001 QB4omqsFQoq9pqzB0KAuxw 1 0 2 0 10.4kb 10.4kb
green open .apm-custom-link uzd1aWEzSI23nsXfeo2rEA 1 0 0 0 208b 208b
green open .kibana_task_manager_1 bz9-bPwnQ2CUwvcljtUsNA 1 0 5 0 20.1kb 20.1kb
green open .apm-agent-configuration 3wQJnZPvSFat8AzEroCEeA 1 0 0 0 208b 208b
yellow open metricbeat-7.8.1-2020.08.24-000001 1jnSIRDOTkeF-fN6XSi0Sg 1 1 15522 0 4.2mb 4.2mb
green open .kibana_1 LGXM0Y1hSZSyt2d2R84vIg 1 0 20 2 230.7kb 230.7kb
green open .monitoring-es-7-mb-2020.08.24 _jvZTgr7QHaiDmqlekOALw 1 0 252 0 470.4kb 470.4kb

In the logs I saw the cluster ID of the monitoring cluster instead of the data cluster whenever I tried to set up monitoring in Stack Monitoring.

Okay, let's verify that really quick.

On the monitoring cluster, can you run these two queries and return the results?

POST .monitoring-es-*/_search
{
  "size": 0,
  "aggs": {
    "types": {
      "terms": {
        "field": "cluster_uuid",
        "size": 10
      },
      "aggs": {
        "types": {
          "terms": {
            "field": "type",
            "size": 20
          }
        }
      }
    }
  }
}

and

POST .monitoring-es-*/_search
{
  "size": 5,
  "query": {
    "term": {
      "type": {
        "value": "cluster_stats"
      }
    }
  }, 
  "collapse": {
    "field": "cluster_uuid"
  }
}

curl -XPOST http://Y.Y.Y.Y:9200/.monitoring-es-*/_search?pretty -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "types": {
      "terms": {
        "field": "cluster_uuid",
        "size": 10
      },
      "aggs": {
        "types": {
          "terms": {
            "field": "type",
            "size": 20
          }
        }
      }
    }
  }
}'
{
  "took" : 680,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 120,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "types" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "jNLcalI1SdSRpFJws2PqEw",
          "doc_count" : 120,
          "types" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "node_stats",
                "doc_count" : 120
              }
            ]
          }
        }
      ]
    }
  }
}

curl -XPOST http://Y.Y.Y.Y:9200/.monitoring-es-*/_search?pretty -H 'Content-Type: application/json' -d'
{
  "size": 5,
  "query": {
    "term": {
      "type": {
        "value": "cluster_stats"
      }
    }
  },
  "collapse": {
    "field": "cluster_uuid"
  }
}'
{
  "took" : 385,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  }
}

data cluster UUID : jNLcalI1SdSRpFJws2PqEw
monitoring cluster UUID : pQ4V93CZR2ar6-qyk6fJyQ
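
These UUIDs can be read from the root endpoint of each cluster, e.g.:

curl -XGET 'http://X.X.X.X2:9200/'   # data cluster
curl -XGET 'http://Y.Y.Y.Y:9200/'    # monitoring cluster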

Here are the logs too:

Aug 24 19:44:24 perf-es7-monitering-node kibana: {"type":"response","@timestamp":"2020-08-24T14:14:24Z","tags":[],"pid":21543,"method":"post","statusCode":404,"req":{"url":"/api/monitoring/v1/clusters/pQ4V93CZR2ar6-qyk6fJyQ/elasticsearch/nodes","method":"post","headers":{"host":"Y.Y.Y.Y:5601","connection":"keep-alive","content-length":"158","accept":"application/json, text/plain, */*","dnt":"1","kbn-version":"7.9.0","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36","content-type":"application/json;charset=UTF-8","origin":"http://Y.Y.Y.Y:5601","referer":"http://Y.Y.Y.Y::5601/app/monitoring","accept-encoding":"gzip, deflate","accept-language":"en-GB,en;q=0.9,ru-RU;q=0.8,ru;q=0.7,en-US;q=0.6"},"remoteAddress":"10.143.1.72","userAgent":"10.143.1.72","referer":"http://Y.Y.Y.Y::5601/app/monitoring"},"res":{"statusCode":404,"responseTime":53,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/pQ4V93CZR2ar6-qyk6fJyQ/elasticsearch/nodes 404 53ms - 9.0B"}

Aug 24 19:44:37 perf-es7-monitering-node kibana: {"type":"log","@timestamp":"2020-08-24T14:14:37Z","tags":["error","plugins","monitoring","monitoring"],"pid":21543,"message":"{ Error: Unable to find the cluster in the selected time range. UUID: pQ4V93CZR2ar6-qyk6fJyQ\n at then.clusters (/usr/share/kibana/x-pack/plugins/monitoring/server/lib/cluster/get_cluster_stats.js:32:13)\n at process._tickCallback (internal/process/next_tick.js:68:7)\n data: null,\n isBoom: true,\n isServer: false,\n output:\n { statusCode: 404,\n payload:\n { statusCode: 404,\n error: 'Not Found',\n message:\n 'Unable to find the cluster in the selected time range. UUID: pQ4V93CZR2ar6-qyk6fJyQ' },\n headers: {} },\n reformat: [Function],\n typeof: [Function: notFound] }"}

The problem I am not able to understand: if there were some issue, it should either work or not work. But here I can see the index and metrics of the data nodes, yet I cannot see the same in Stack Monitoring, and there is no error either.

Okay, so it seems there is only one type of Elasticsearch monitoring document (node_stats) when there should be more; Stack Monitoring needs cluster_stats documents in particular to recognize a cluster.

Let's take a look at the Metricbeat output.

Are you able to see a message in the logs like:

INFO	[monitoring]	log/log.go:145	Non-zero metrics in the last 30s	{"monitoring":

If so, post the most recent occurrence of this log line and we can verify Metricbeat isn't trying to send the other metricsets.

If not, try running in debug mode (-d "*") and see if the message appears.
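
For example (a sketch; -e logs to the console and -d "*" enables all debug selectors):

metricbeat -e -d "*"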

Aug 24 20:27:14 perf-es-78-data-2-65-98 metricbeat: 2020-08-24T20:27:14.397+0530#011INFO#011[monitoring]#011log/log.go:145#011Non-zero metrics in the last 30s#011{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1190450,"time":{"ms":1279}},"total":{"ticks":2538890,"time":{"ms":2722},"value":2538890},"user":{"ticks":1348440,"time":{"ms":1443}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":18},"info":{"ephemeral_id":"7dbab0bc-b324-4519-bab6-d13322f069a1","uptime":{"ms":28470118}},"memstats":{"gc_next":28905952,"memory_alloc":23222680,"memory_total":168711029672,"rss":81920},"runtime":{"goroutines":115}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":67,"batches":6,"total":67}},"pipeline":{"clients":19,"events":{"active":0,"published":67,"total":67},"queue":{"acked":67}}},"metricbeat":{"elasticsearch":{"node_stats":{"events":3,"success":3}},"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":4,"success":4},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":21,"success":21},"process":{"events":23,"success":23},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3}}},"system":{"load":{"1":0,"15":0.05,"5":0.02,"norm":{"1":0,"15":0.0008,"5":0.0003}}}}}}
Aug 24 20:27:44 perf-es-78-data-2-65-98 metricbeat: 2020-08-24T20:27:44.399+0530#011INFO#011[monitoring]#011log/log.go:145#011Non-zero metrics in the last 30s#011{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1191730,"time":{"ms":1288}},"total":{"ticks":2541620,"time":{"ms":2759},"value":2541620},"user":{"ticks":1349890,"time":{"ms":1471}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":18},"info":{"ephemeral_id":"7dbab0bc-b324-4519-bab6-d13322f069a1","uptime":{"ms":28500122}},"memstats":{"gc_next":26511680,"memory_alloc":22524136,"memory_total":168889949424},"runtime":{"goroutines":115}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":61,"batches":6,"total":61}},"pipeline":{"clients":19,"events":{"active":0,"published":61,"total":61},"queue":{"acked":61}}},"metricbeat":{"elasticsearch":{"node_stats":{"events":3,"success":3}},"system":{"cpu":{"events":3,"success":3},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":21,"success":21},"process":{"events":22,"success":22},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3}}},"system":{"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.0008,"5":0.0002}}}}}}

What is the output of metricbeat modules list?

metricbeat modules list
Enabled:
elasticsearch-xpack
system

Disabled:
activemq
aerospike
apache
appsearch
aws
azure
beat
beat-xpack
ceph
ceph-mgr
cloudfoundry
cockroachdb
consul
coredns
couchbase
couchdb
docker
dropwizard
elasticsearch
envoyproxy
etcd
golang
googlecloud
graphite
haproxy
http
ibmmq
iis
istio
jolokia
kafka
kibana
kibana-xpack
kubernetes
kvm
linux
logstash
logstash-xpack
memcached
mongodb
mssql
munin
mysql
nats
nginx
openmetrics
oracle
php_fpm
postgresql
prometheus
rabbitmq
redis
redisenterprise
sql
stan
statsd
tomcat
traefik
uwsgi
vsphere
windows
zookeeper

https://github.com/elastic/beats/pull/17609 has been available since 7.8.0, so you shouldn't need to explicitly define the metricsets for the elasticsearch-xpack module when xpack.enabled: true.

Let's try removing all the defined metricsets in your elasticsearch-xpack.yml so it will look like:

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://X.X.X.X2:9200"]

OK, I removed the metricsets, but the issue is still there.

Aug 24 21:07:15 perf-es-78-data-2-65-98 metricbeat: 2020-08-24T21:07:15.037+0530#011INFO#011[monitoring]#011log/log.go:145#011Non-zero metrics in the last 30s#011{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":41250,"time":{"ms":1265}},"total":{"ticks":83350,"time":{"ms":2612},"value":83350},"user":{"ticks":42100,"time":{"ms":1347}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":18},"info":{"ephemeral_id":"3a4b1826-eda8-4374-ab45-51c150ff7b08","uptime":{"ms":930117}},"memstats":{"gc_next":22875120,"memory_alloc":17646552,"memory_total":5592516072,"rss":585728},"runtime":{"goroutines":115}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":69,"batches":6,"total":69}},"pipeline":{"clients":19,"events":{"active":0,"published":69,"total":69},"queue":{"acked":69}}},"metricbeat":{"elasticsearch":{"node_stats":{"events":3,"success":3}},"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":4,"success":4},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":21,"success":21},"process":{"events":24,"success":24},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3},"uptime":{"events":1,"success":1}}},"system":{"load":{"1":0,"15":0.06,"5":0.03,"norm":{"1":0,"15":0.0009,"5":0.0005}}}}}}

I can see stats like total indices and heap usage with this:
curl -XGET 'http://Y.Y.Y.Y:9200/.monitoring-es-*/_search?pretty'

System metrics are there in the metricbeat index:
curl -XGET 'http://Y.Y.Y.Y:9200/metricbeat*/_search?pretty'
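
A count query can also confirm whether any cluster_stats documents ever arrive on the monitoring cluster (same host assumption as above):

curl -XGET 'http://Y.Y.Y.Y:9200/.monitoring-es-*/_count?q=type:cluster_stats&pretty'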