ES Cannot Index Monitoring Indices

I am trying to turn on Stack Monitoring, but Metricbeat keeps logging warnings that start like this: WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event and contain a .monitoring-* index with an index_not_found_exception. I am not sure why, as I have followed all the steps in Set up | Kibana Guide [7.10] | Elastic.

metricbeat.yml:

---
metricbeat.config.modules:
  path: "${path.config}/modules.d/*.yml"
metricbeat.config.monitors:
  reload.enabled: false
  reload.period: 10s
name: hostname
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
output.elasticsearch:
  hosts: [
    "http://Y.Y.Y.Y:9200",
    "http://Y.Y.Y.Y:9200",
    "http://Y.Y.Y.Y:9200",
    "http://Y.Y.Y.Y:9200",
    "http://Y.Y.Y.Y:9200",
    "http://Y.Y.Y.Y:9200",
    "http://Y.Y.Y.Y:9200",
    "http://Y.Y.Y.Y:9200"
  ]
setup.kibana:
  host: http://Y.Y.Y.Y:5601
processors:
- add_host_metadata:
- add_cloud_metadata:
- add_docker_metadata:
- add_kubernetes_metadata:

Note: output.elasticsearch.hosts points at my main ES cluster, as I am not running a separate monitoring cluster. I can see that no monitoring indices have been created, even though I have enabled the necessary xpack configurations/modules. Any help much appreciated!
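
The elasticsearch module I enabled looks roughly like this (a minimal sketch of modules.d/elasticsearch-xpack.yml with a placeholder host, not my exact file):

- module: elasticsearch
  # ship the stack monitoring (xpack) datasets to the .monitoring-* indices
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:9200"]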

Can you share more of the Metricbeat logs? Is there anything in your Elasticsearch logs at that time?

Two entries from metricbeat logs:

WARN        [elasticsearch]        elasticsearch/client.go:408        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbffe28555e8c4c37, ext:239540179763274, loc:(*time.Location)(0x84da4e0)}, Meta:{"index":".monitoring-es-7-mb"}, Fields:{"agent":{"ephemeral_id":"5a901ff7-b50e-4883-900b-420ddb7132be","hostname":"XXXX","id":"83751c1c-2a68-4259-b333-23591712e6fe","name":"XXXX","type":"metricbeat","version":"7.10.2"},"cluster_uuid":"n5hlnwBgRKuOp2AuA_UfjQ","ecs":{"version":"1.6.0"},"event":{"dataset":"elasticsearch.node.stats","duration":48551338,"module":"elasticsearch"},"host":{"architecture":"x86_64","containerized":false,"hostname":"XXXX","id":"c46fa9d920524599a49eb4ffafadffa9","ip":["10.124.0.110","fe80::250:56ff:feab:138a"],"mac":["00:50:56:ab:13:8a"],"name":"XXXX","os":{"codename":"Maipo","family":"redhat","kernel":"3.10.0-1160.11.1.el7.x86_64","name":"Red Hat Enterprise Linux Server","platform":"rhel","version":"7.9 (Maipo)"}},"interval_ms":10000,"metricset":{"name":"node_stats","period":10000},"node_stats":{"fs":{"io_stats":{"total":{"operations":9960603,"read_kilobytes":21861760,"read_operations":361647,"write_kilobytes":148490225,"write_operations":9598956}},"total":{"available_in_bytes":149525110784,"free_in_bytes":149525110784,"total_in_bytes":536604577792}},"indices":{"docs":{"count":1691663927},"fielddata":{"evictions":0,"memory_size_in_bytes":0},"indexing":{"index_time_in_millis":3006499,"index_total":1698632,"throttle_time_in_millis":0},"query_cache":{"evictions":79,"hit_count":82599,"memory_size_in_bytes":688823,"miss_count":13467911},"request_cache":{"evictions":0,"hit_count":19733,"memory_size_in_bytes":45123,"miss_count":3985},"search":{"query_time_in_millis":5224585,"query_total":1051198},"segments":{"count":1360,"doc_values_memory_in_bytes":15079718,"fixed_bit_set_memory_in_bytes":45873008,"index_writer_memory_in_bytes":0,"memory_in_bytes":53676046,"norms_memory_in_bytes":1671552,"points_memory_in_bytes":0,"stored_fields_memory_in_bytes":17431536,"term_vectors_memory_in_bytes":111288,"terms_memory_in_bytes":19381952,"version_map_memory_in_bytes":0},"store":{"size_in_bytes":386963264267}},"jvm":{"gc":{"collectors":{"old":{"collection_count":292732,"collection_time_in_millis":1324214},"young":{"collection_count":292732,"collection_time_in_millis":1324214}}},"mem":{"heap_max_in_bytes":1073741824,"heap_used_in_bytes":767314928,"heap_used_percent":71}},"mlockall":true,"node_id":"lk1dAJEfSVC59LI0mRm9Lw","node_master":false,"os":{"cgroup":{"cpu":{"cfs_period_micros":100000,"cfs_quota_micros":-1,"control_group":"/","stat":{"number_of_elapsed_periods":0,"number_of_times_throttled":0,"time_throttled_nanos":0}},"cpuacct":{"control_group":"/","usage_nanos":1018232685299376},"memory":{"control_group":"/system.slice/elasticsearch.service","limit_in_bytes":"9223372036854771712","usage_in_bytes":"40693600256"}},"cpu":{"load_average":{"15m":0.870000,"1m":1.100000,"5m":1.070000}}},"process":{"cpu":{"percent":0},"max_file_descriptors":65535,"open_file_descriptors":1733},"thread_pool":{"generic":{"queue":0,"rejected":0,"threads":23},"get":{"queue":0,"rejected":0,"threads":16},"management":{"queue":0,"rejected":0,"threads":5},"search":{"queue":0,"rejected":0,"threads":25},"watcher":{"queue":0,"rejected":0,"threads":0},"write":{"queue":0,"rejected":0,"threads":16}}},"service":{"address":"http://localhost:9200","type":"elasticsearch"},"source_node":{"name":"XXXX","transport_address":"10.124.0.110:9300","uuid":"lk1dAJEfSVC59LI0mRm9Lw"},"timestamp":"2021-02-01T16:1
8:29.561Z","type":"node_stats"}, Private:interface {}(nil), TimeSeries:true}, Flags:0x0, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=404): {"type":"index_not_found_exception","reason":"no such index [.monitoring-es-7-mb-2021.02.01] and [action.auto_create_index] ([.watches,.triggered_watches,.watcher-history-*]) doesn't match","index_uuid":"_na_","index":".monitoring-es-7-mb-2021.02.01"}

INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":-217088}}}},"cpu":{"system":{"ticks":3190940,"time":{"ms":506}},"total":{"ticks":6480920,"time":{"ms":1038},"value":6480920},"user":{"ticks":3289980,"time":{"ms":532}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":31},"info":{"ephemeral_id":"5a901ff7-b50e-4883-900b-420ddb7132be","uptime":{"ms":239550115}},"memstats":{"gc_next":19330656,"memory_alloc":11541656,"memory_total":851284761408},"runtime":{"goroutines":160}},"libbeat":{"config":{"module":{"running":5}},"output":{"events":{"acked":62,"batches":6,"dropped":3,"total":65},"read":{"bytes":3001},"write":{"bytes":125319}},"pipeline":{"clients":23,"events":{"active":0,"published":65,"total":65},"queue":{"acked":65}}},"metricbeat":{"elasticsearch":{"node":{"events":3,"success":3},"node_stats":{"events":6,"success":6}},"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":8,"success":8},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":9,"success":9},"process":{"events":23,"success":23},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3}}},"system":{"load":{"1":1.08,"15":0.87,"5":1.07,"norm":{"1":0.0675,"15":0.0544,"5":0.0669}}}}}}

Single entry from the Elasticsearch logs:

unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: [.monitoring-kibana-7-2021.02.01] IndexNotFoundException[no such index [.monitoring-kibana-7-2021.02.01] and [action.auto_create_index] ([.watches,.triggered_watches,.watcher-history-*]) doesn't match]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:125) ~[x-pack-monitoring-7.10.2.jar:7.10.2]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
        at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
        at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?]
        at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
        at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:126) [x-pack-monitoring-7.10.2.jar:7.10.2]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:108) [x-pack-monitoring-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:89) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:83) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:533) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:679) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.bulk.TransportBulkAction.doInternalExecute(TransportBulkAction.java:264) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.bulk.TransportBulkAction.lambda$processBulkIndexIngestRequest$5(TransportBulkAction.java:747) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.ingest.IngestService.lambda$executePipelines$3(IngestService.java:570) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.ingest.IngestService.innerExecute(IngestService.java:642) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.ingest.IngestService.executePipelines(IngestService.java:533) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.ingest.IngestService.access$000(IngestService.java:83) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.ingest.IngestService$3.doRun(IngestService.java:504) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:743) [elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: org.elasticsearch.index.IndexNotFoundException: no such index [.monitoring-kibana-7-2021.02.01] and [action.auto_create_index] ([.watches,.triggered_watches,.watcher-history-*]) doesn't match
        at org.elasticsearch.action.support.AutoCreateIndex.shouldAutoCreate(AutoCreateIndex.java:109) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.bulk.TransportBulkAction.shouldAutoCreate(TransportBulkAction.java:393) ~[elasticsearch-7.10.2.jar:7.10.2]
        at org.elasticsearch.action.bulk.TransportBulkAction.doInternalExecute(TransportBulkAction.java:252) ~[elasticsearch-7.10.2.jar:7.10.2]
        ... 11 more

Not sure why the ES logs are showing .monitoring-kibana-* while the Metricbeat logs show .monitoring-es-7-*.

I think you should check the action.auto_create_index setting as a first step.
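
For example, one quick way to see what it is currently set to (from the Kibana Dev Tools console, or curl against any node):

GET _cluster/settings?include_defaults=true&flat_settings=true

and look for action.auto_create_index in the response. Based on your error it is currently limited to .watches,.triggered_watches,.watcher-history-*, which does not match the .monitoring-* indices Metricbeat is trying to create.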

Ahh that was it - I just had to add .monitoring* to that setting. Thank you @warkholm.
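
For anyone hitting the same thing, the change is along these lines (either in elasticsearch.yml or via the cluster settings API; the value just extends the list already shown in the error message):

action.auto_create_index: .watches,.triggered_watches,.watcher-history-*,.monitoring*

or

PUT _cluster/settings
{
  "persistent": {
    "action.auto_create_index": ".watches,.triggered_watches,.watcher-history-*,.monitoring*"
  }
}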

