Stack Monitoring shows "No monitoring data found"

Hi all,

Stack Monitoring on our single-node Elastic Stack had been working all along, but it suddenly stopped.

We've taken a look at https://github.com/elastic/kibana/issues/33760, but didn't find much useful information there either.

We tried restarting the Kibana node, but the problem persists.
Would appreciate it if anyone could shed some light on where we should focus.

Below is the error shown in Kibana:

[illegal_argument_exception] unknown type for collapse field `cluster_uuid`, only keywords and numbers are accepted (repeated several times) ...
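This error suggests that `cluster_uuid` is not mapped as a keyword in at least one monitoring index (field collapsing only works on keyword and numeric fields). As a diagnostic sketch, assuming the default monitoring index names, the mapping of just that field can be checked with:

GET .monitoring-es-*/_mapping/field/cluster_uuid

If any index reports the field as text (or missing), that index was likely created without the monitoring template.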

Result for GET _cat/indices

green open .siem-signals-default-000001         HcWN_3esQmGYxJE7XdAuDg 1 0        0      0     208b     208b
green open .monitoring-logstash-7-2020.11.06    UGAL8nGKStebXu4z6h6nCw 1 0    13740      0    9.7mb    9.7mb
green open .monitoring-logstash-7-2020.11.05    j7rEBNp9R3ulzz20MyQJ-Q 1 0    51720      0     35mb     35mb
green open .kibana-event-log-7.9.2-000001       WIbxD6eTT9O_0llIfWfokA 1 0    44083      0    3.1mb    3.1mb
green open .monitoring-logstash-7-2020.11.02    smeXE9SDRc6cEGml1hZU2A 1 0    51726      0     34mb     34mb
green open .monitoring-logstash-7-2020.11.01    abS-XGJxTiWHoIDF3htaBg 1 0    51720      0   33.9mb   33.9mb
green open .monitoring-logstash-7-2020.11.04    pbYfHGWvQHe3EjduYmA3Eg 1 0    51726      0   34.2mb   34.2mb
green open .monitoring-logstash-7-2020.11.03    p71_FfxpQjiPOHVmsHN7aQ 1 0    51714      0   34.2mb   34.2mb
green open .monitoring-logstash-7-2020.10.31    RIl1cjzQRXWTHHyizCrtag 1 0    51726      0   34.2mb   34.2mb
green open .items-default-000001                _Qwb5qh-R26LXC4ghFiQMg 1 0        0      0     208b     208b
green open .monitoring-es-7-2020.11.01          HkVIKRHhSyyP507IMwS0bw 1 0   398372 140280    325mb    325mb
green open .monitoring-es-7-2020.10.31          SiLInfVhQUGQRWXOQY9MhA 1 0   398495 140532  331.7mb  331.7mb
green open .monitoring-es-7-2020.11.05          OQGSEII3Tm-06w6tSEogIA 1 0   398496 139650  322.9mb  322.9mb
green open .apm-custom-link                     YLkiepw8RoC8g0QTNRLRhg 1 0        0      0     208b     208b
green open .kibana_task_manager_1               g_79plDOTbm-YxvHqxgZtg 1 0        7 137118   16.1mb   16.1mb
green open .monitoring-es-7-2020.11.04          pge9QJkxSwW7BnH0OTgCEg 1 0   398370 139692  322.4mb  322.4mb
green open logs-index_pattern_placeholder       _bTO8w8GRom1OKTzBgrKYQ 1 0        0      0     208b     208b
green open .monitoring-es-7-2020.11.03          GWRcE2cTRBiGO3zrsgYvPg 1 0   398409 140028    320mb    320mb
green open .monitoring-kibana-7-2020.10.31      Nkvyb0T0RCuYLO9tfbRbXg 1 0    17278      0    3.9mb    3.9mb
green open .monitoring-es-7-2020.11.02          eOINgLQUSwyADiZZvznXgA 1 0   398416 140112  326.6mb  326.6mb
green open auditbeat-7.9.2-2020.10.27-000001    bHTgHFXVSluqaOef_Q_Bng 1 0   108055      0   37.1mb   37.1mb
green open .monitoring-kibana-7-2020.11.01      2MzMUzCiT_GnsEKJGdiE-Q 1 0    17278      0    3.9mb    3.9mb
green open .monitoring-kibana-7-2020.11.03      PxGrdbUiQW65n_bVgu7zyw 1 0    17278      0    3.9mb    3.9mb
green open .monitoring-kibana-7-2020.11.02      2GBevlPjQL-SUvrv7GYGxQ 1 0    17280      0    3.9mb    3.9mb
green open .monitoring-kibana-7-2020.11.05      rP0QkGeKQDSvb99kxQnL2Q 1 0    17278      0    3.9mb    3.9mb
green open .monitoring-kibana-7-2020.11.04      IDoz-NoeTNqW7yH3Q4h34w 1 0    17278      0    3.9mb    3.9mb
green open .monitoring-kibana-7-2020.11.06      tIlpRY4xRPmLRlk5jpRXVA 1 0     2998      0 1012.2kb 1012.2kb
green open logstash-syslog514-2020.10.21-000001 k_TRZ967SDaLuLiSRjT-Lw 1 0    98433      0   24.9mb   24.9mb
green open winlogbeat-7.9.2-2020.10.21-000001   xxxsI4uvRvuBDK3ZNVlKGA 1 0  3225143      0    2.2gb    2.2gb
green open .lists-default-000001                AlXUwpAaTIqo8ukeklOePw 1 0        0      0     208b     208b
green open .apm-agent-configuration             eksa4e3RRymRda59sVIEkw 1 0        0      0     208b     208b
green open .monitoring-es-7-2020.11.06          7NLbhAI6TIGJWwIFJ8Dvsg 1 0   106815  97293   96.4mb   96.4mb
green open .kibana_1                            TH2lZEhWTl2DpUWlKzq-0A 1 0     4129    788     13mb     13mb
green open metricbeat-7.9.2-2020.10.27-000001   VgQCGL9iSX-_8jSyVHjlKg 1 0 28997158      0    6.7gb    6.7gb
green open filebeat-7.9.2-2020.10.30-000001     gyEIxGZrQDGvKmKHbaEk7g 1 0  1655445      0  808.7mb  808.7mb
green open .tasks                               ARq5zMAtR8CizKryw4LOQA 1 0        1      0    6.7kb    6.7kb
green open .security-7                          x4nWoQR2QSqwUuv9epzx_w 1 0       56      1  132.1kb  132.1kb
green open .reporting-2020-10-25                nxUA4sJjRC21qgWRzHgNMQ 1 0        1      0   81.8kb   81.8kb
green open metrics-index_pattern_placeholder    snyJCDYySnSrR9B-KLyJXA 1 0        0      0     208b     208b
green open kibana_sample_data_logs              ZWVZ2dGRQbClkQ9KUuME2g 1 0    14074      0   11.5mb   11.5mb
green open .async-search                        zjr6hM1jSBCyI7yB5qDkPQ 1 0        0      0    1.9mb    1.9mb

Result for GET _cluster/health

{
  "cluster_name" : "elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 42,
  "active_shards" : 42,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


Thanks!!

Result for GET .monitoring-es-*/_mapping

Result for GET _cat/templates/*monitoring*?v

{
  "took" : 858,
  "timed_out" : false,
  "_shards" : {
    "total" : 41,
    "successful" : 41,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : ".async-search",
        "_type" : "_doc",
        "_id" : "whKO4w79T3qF-nzPNffCBQ",
        "_score" : 1.0,
        "_source" : {
          "result" : "This result is very very long",
          "headers" : {
            "Authorization" : "Basic cG9jYWRtaW46cG9jYWRtaW4=",
            "_xpack_audit_request_id" : "kbR-BHDUTfyA6B26PgPuRg",
            "_xpack_security_authentication" : "++CwAwAIcG9jYWRtaW4BCXN1cGVydXNlcgoAAQRQT0MgAQABAApkd2FzZy1lczAxDmRlZmF1bHRfbmF0aXZlBm5hdGl2ZQAACgA="
          },
          "expiration_time" : 1604646915168,
          "response_headers" : { }
        }
      }
    ]
  }
}

Below are the Elasticsearch logs:

[2020-11-06T00:37:59,826][INFO ][o.e.x.m.MlDailyMaintenanceService] [censored-es01] triggering scheduled [ML] maintenance tasks
[2020-11-06T00:37:59,826][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [censored-es01] Deleting expired data
[2020-11-06T00:37:59,841][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [censored-es01] Completed deletion of expired ML data
[2020-11-06T00:37:59,841][INFO ][o.e.x.m.MlDailyMaintenanceService] [censored-es01] Successfully completed [ML] maintenance tasks
[2020-11-06T08:00:07,967][INFO ][o.e.c.m.MetadataCreateIndexService] [censored-es01] [.monitoring-es-7-2020.11.06] creating index, cause [auto(bulk api)], templates [0_replica_for_all], shards [1]/[0]
[2020-11-06T08:00:08,133][INFO ][o.e.c.r.a.AllocationService] [censored-es01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-7-2020.11.06][0]]]).
[2020-11-06T08:00:08,164][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-es-7-2020.11.06/7NLbhAI6TIGJWwIFJ8Dvsg] create_mapping [_doc]
[2020-11-06T08:00:08,211][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-es-7-2020.11.06/7NLbhAI6TIGJWwIFJ8Dvsg] update_mapping [_doc]
[2020-11-06T08:00:08,258][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-es-7-2020.11.06/7NLbhAI6TIGJWwIFJ8Dvsg] update_mapping [_doc]
[2020-11-06T08:00:08,336][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-es-7-2020.11.06/7NLbhAI6TIGJWwIFJ8Dvsg] update_mapping [_doc]
[2020-11-06T08:00:08,383][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-es-7-2020.11.06/7NLbhAI6TIGJWwIFJ8Dvsg] update_mapping [_doc]
[2020-11-06T08:00:08,461][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-es-7-2020.11.06/7NLbhAI6TIGJWwIFJ8Dvsg] update_mapping [_doc]
[2020-11-06T08:00:08,540][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-es-7-2020.11.06/7NLbhAI6TIGJWwIFJ8Dvsg] update_mapping [_doc]
[2020-11-06T08:00:08,586][INFO ][o.e.c.m.MetadataCreateIndexService] [censored-es01] [.monitoring-logstash-7-2020.11.06] creating index, cause [auto(bulk api)], templates [0_replica_for_all], shards [1]/[0]
[2020-11-06T08:00:08,679][INFO ][o.e.c.r.a.AllocationService] [censored-es01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-logstash-7-2020.11.06][0]]]).
[2020-11-06T08:00:08,726][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-logstash-7-2020.11.06/UGAL8nGKStebXu4z6h6nCw] create_mapping [_doc]
[2020-11-06T08:00:08,773][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-logstash-7-2020.11.06/UGAL8nGKStebXu4z6h6nCw] update_mapping [_doc]
[2020-11-06T08:00:09,570][INFO ][o.e.c.m.MetadataCreateIndexService] [censored-es01] [.monitoring-kibana-7-2020.11.06] creating index, cause [auto(bulk api)], templates [0_replica_for_all], shards [1]/[0]
[2020-11-06T08:00:09,653][INFO ][o.e.c.r.a.AllocationService] [censored-es01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-kibana-7-2020.11.06][0]]]).
[2020-11-06T08:00:09,700][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-kibana-7-2020.11.06/tIlpRY4xRPmLRlk5jpRXVA] create_mapping [_doc]
[2020-11-06T08:00:09,747][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-kibana-7-2020.11.06/tIlpRY4xRPmLRlk5jpRXVA] update_mapping [_doc]
[2020-11-06T08:59:59,863][INFO ][o.e.x.m.e.l.LocalExporter] [censored-es01] cleaning up [3] old indices
[2020-11-06T08:59:59,863][INFO ][o.e.c.m.MetadataDeleteIndexService] [censored-es01] [.monitoring-es-7-2020.10.30/bnnGe6E4STWPA2tGv1VV2w] deleting index
[2020-11-06T08:59:59,863][INFO ][o.e.c.m.MetadataDeleteIndexService] [censored-es01] [.monitoring-kibana-7-2020.10.30/WJwqARb3Qki33FJ3ehNmtA] deleting index
[2020-11-06T08:59:59,863][INFO ][o.e.c.m.MetadataDeleteIndexService] [censored-es01] [.monitoring-logstash-7-2020.10.30/3FN2IHdzRGOwCSP7VafHuQ] deleting index
[2020-11-06T09:00:00,010][INFO ][o.e.x.m.e.l.LocalExporter] [censored-es01] cleaning up [3] old indices
[2020-11-06T09:00:00,119][ERROR][o.e.x.m.e.l.LocalExporter] [censored-es01] failed to delete indices
org.elasticsearch.index.IndexNotFoundException: no such index [.monitoring-es-7-2020.10.30]
	at org.elasticsearch.cluster.metadata.Metadata.getIndexSafe(Metadata.java:725) ~[elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.metadata.MetadataDeleteIndexService.deleteIndices(MetadataDeleteIndexService.java:98) ~[elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.metadata.MetadataDeleteIndexService$1.execute(MetadataDeleteIndexService.java:85) ~[elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47) ~[elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) [elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.9.2.jar:7.9.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.9.2.jar:7.9.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
	at java.lang.Thread.run(Thread.java:832) [?:?]
[2020-11-06T09:29:59,831][INFO ][o.e.x.s.SnapshotRetentionTask] [censored-es01] starting SLM retention snapshot cleanup task
[2020-11-06T09:29:59,831][INFO ][o.e.x.s.SnapshotRetentionTask] [censored-es01] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2020-11-06T09:30:00,018][INFO ][o.e.x.s.SnapshotRetentionTask] [censored-es01] starting SLM retention snapshot cleanup task
[2020-11-06T09:30:00,018][INFO ][o.e.x.s.SnapshotRetentionTask] [censored-es01] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2020-11-06T14:08:52,668][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-kibana-7-2020.11.06/tIlpRY4xRPmLRlk5jpRXVA] update_mapping [_doc]
[2020-11-06T15:23:35,531][WARN ][o.e.c.m.MetadataIndexTemplateService] [censored-es01] legacy template [.management-beats] has index patterns [.management-beats] matching patterns from existing composable templates [0_replica_for_all] with patterns (0_replica_for_all => [.*]); this template [.management-beats] may be ignored in favor of a composable template at index creation time
[2020-11-06T15:23:35,531][INFO ][o.e.c.m.MetadataIndexTemplateService] [censored-es01] adding template [.management-beats] for index patterns [.management-beats]
[2020-11-06T15:23:44,439][INFO ][o.e.c.m.MetadataMappingService] [censored-es01] [.monitoring-kibana-7-2020.11.06/tIlpRY4xRPmLRlk5jpRXVA] update_mapping [_doc]

Could anyone shed some light on where we should look to troubleshoot this issue?

Thanks in advance!

Hi all,

Sorry to keep bumping this ticket, but we are still having the same problem and would like to seek help from the community.

Thanks!

I'm surprised by the output of:

GET /_cat/templates/monitoring?v

Could you check it again?

My guess is that the index template is not there for whatever reason, and the index has been created with default settings.

First make sure the template is there, then remove the monitoring indices that are wrong (possibly all of them). Then it should work.
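The steps above can be sketched as follows (template and index names assume the 7.x defaults; adjust to what your cluster actually shows):

# 1. Check whether the monitoring templates exist
GET _cat/templates/.monitoring*?v

# 2. If the templates are there but an index was created without them,
#    delete the bad monitoring index by name, e.g.:
DELETE .monitoring-es-7-2020.11.06

A deleted .monitoring-* index is recreated automatically at the next collection interval, but any monitoring history it held is lost.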

As @dadoonet pointed out, the output of GET _cat/templates/*monitoring*?v looks quite different from what is expected. Usually it should be something like this:

name                   index_patterns             order version
.monitoring-logstash-2 [.monitoring-logstash-2-*] 0     
.monitoring-data-2     [.monitoring-data-2]       0     
.monitoring-kibana-2   [.monitoring-kibana-2-*]   0     
.monitoring-es-2       [.monitoring-es-2-*]       0
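Also worth checking: the Elasticsearch logs above contain a WARN saying the composable template [0_replica_for_all] has the index pattern [.*], which matches every index (including .monitoring-*) and can take precedence over the legacy monitoring templates at index creation time. If that happened, the monitoring indices would have been created with dynamic mappings, where cluster_uuid can end up as text rather than keyword, which would match the collapse error. As a sketch (the template name is taken from your log line), it can be inspected with:

GET _index_template/0_replica_for_all

If that template is only meant to set replica counts, narrowing its index pattern away from .* should let the monitoring templates apply again.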

Hi @dadoonet and @sandeepkanabar ,

Thanks for pointing me in the right direction. I will look into it during working hours when I have access!!

Thanks a lot!