Metricbeat - Can't see linked indices

Hi everyone,

After an issue with Metricbeat I had to re-configure everything again.

I deployed metricbeat-7.17.8 on my Kubernetes cluster. In Kibana I can't see any linked indices, but the logs don't show me any error:

2023-11-28T16:30:08.546Z	INFO	[index-management]	idxmgmt/std.go:261	Auto ILM enable success.

2023-11-28T16:30:08.693Z	INFO	[index-management.ilm]	ilm/std.go:170	ILM policy metricbeat exists already.

2023-11-28T16:30:08.693Z	INFO	[index-management]	idxmgmt/std.go:397	Set setup.template.name to '{metricbeat-7.17.8 {now/d}-000001}' as ILM is enabled.

2023-11-28T16:30:08.693Z	INFO	[index-management]	idxmgmt/std.go:402	Set setup.template.pattern to 'metricbeat-7.17.8-*' as ILM is enabled.

2023-11-28T16:30:08.693Z	INFO	[index-management]	idxmgmt/std.go:436	Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.17.8 {now/d}-000001} as ILM is enabled.

2023-11-28T16:30:08.693Z	INFO	[index-management]	idxmgmt/std.go:440	Set settings.index.lifecycle.name in template to {metricbeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.

2023-11-28T16:30:08.746Z	INFO	template/load.go:110	Template "metricbeat-7.17.8" already exists and will not be overwritten.

2023-11-28T16:30:08.746Z	INFO	[index-management]	idxmgmt/std.go:297	Loaded index template.

2023-11-28T16:30:28.881Z	ERROR	metrics/metrics.go:376	error getting cgroup stats: error fetching stats for controller io: error fetching IO stats: error getting io.stats for path /hostfs/sys/fs/cgroup: error scanning file: /hostfs/sys/fs/cgroup/io.stat: input does not match format

2023-11-28T16:30:28.881Z	INFO	[monitoring]	log/log.go:184	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3240,"time":{"ms":15}},"total":{"ticks":12730,"time":{"ms":206},"value":12730},"user":{"ticks":9490,"time":{"ms":191}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"e5e8a507-3c2c-457e-921b-a41ed84d2bb9","uptime":{"ms":1320085},"version":"7.17.8"},"memstats":{"gc_next":150273304,"memory_alloc":76376400,"memory_total":1534151128,"rss":274178048},"runtime":{"goroutines":2553}},"libbeat":{"config":{"module":{"running":4}},"output":{"events":{"active":0},"read":{"bytes":6161},"write":{"bytes":4862}},"pipeline":{"clients":14,"events":{"active":4129,"retry":50}}},"system":{"load":{"1":0.14,"15":0.35,"5":0.22,"norm":{"1":0.035,"15":0.0875,"5":0.055}}}}}}

2023-11-28T16:30:58.880Z	ERROR	metrics/metrics.go:376	error getting cgroup stats: error fetching stats for controller io: error fetching IO stats: error getting io.stats for path /hostfs/sys/fs/cgroup: error scanning file: /hostfs/sys/fs/cgroup/io.stat: input does not match format

2023-11-28T16:30:58.881Z	INFO	[monitoring]	log/log.go:184	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3250,"time":{"ms":15}},"total":{"ticks":12770,"time":{"ms":46},"value":12770},"user":{"ticks":9520,"time":{"ms":31}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"e5e8a507-3c2c-457e-921b-a41ed84d2bb9","uptime":{"ms":1350083},"version":"7.17.8"},"memstats":{"gc_next":150273304,"memory_alloc":77736200,"memory_total":1535510928,"rss":274448384},"runtime":{"goroutines":2553}},"libbeat":{"config":{"module":{"running":4}},"output":{"events":{"active":0}},"pipeline":{"clients":14,"events":{"active":4129,"filtered":1,"total":1}}},"metricbeat":{"system":{"filesystem":{"events":1,"success":1}}},"system":{"load":{"1":0.14,"15":0.35,"5":0.22,"norm":{"1":0.035,"15":0.0875,"5":0.055}}}}}}

2023-11-28T16:31:05.796Z	ERROR	[publisher_pipeline_output]	pipeline/output.go:154	Failed to connect to backoff(elasticsearch(http://elasticsearchxxxxxxx:80)): Connection marked as failed because the onConnect callback failed: resource 'metricbeat-7.17.8' exists, but it is not an alias

2023-11-28T16:31:05.796Z	INFO	[publisher_pipeline_output]	pipeline/output.go:145	Attempting to reconnect to backoff(elasticsearch(http://elasticsearch.xxxxxxx:80)) with 34 reconnect attempt(s)

2023-11-28T16:31:05.796Z	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2023-11-28T16:31:05.796Z	INFO	[publisher]	pipeline/retry.go:223	  done

2023-11-28T16:32:58.881Z	INFO	[monitoring]	log/log.go:184	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3370,"time":{"ms":45}},"total":{"ticks":13220,"time":{"ms":165},"value":13220},"user":{"ticks":9850,"time":{"ms":120}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"e5e8a507-3c2c-457e-921b-a41ed84d2bb9","uptime":{"ms":1470084},"version":"7.17.8"},"memstats":{"gc_next":142801880,"memory_alloc":79917256,"memory_total":1555460904,"rss":234602496},"runtime":{"goroutines":2553}},"libbeat":{"config":{"module":{"running":4}},"output":{"events":{"active":0},"read":{"bytes":6160},"write":{"bytes":4862}},"pipeline":{"clients":14,"events":{"active":4129,"filtered":1,"retry":50,"total":1}}},"metricbeat":{"system":{"filesystem":{"events":1,"success":1}}},"system":{"load":{"1":1.48,"15":0.46,"5":0.58,"norm":{"1":0.37,"15":0.115,"5":0.145}}}}}}

That's the error, so no data is getting written.

Can you share your metricbeat.yml please?

I used this deployment YAML file and this guide page:

(Run Metricbeat on Kubernetes | Metricbeat Reference [8.11] | Elastic) - Version 7.17.8

(https://raw.githubusercontent.com/elastic/beats/7.17/deploy/kubernetes/metricbeat-kubernetes.yaml) - Version 7.17.8

I changed only the image and the connection details:

image: docker.elastic.co/beats/metricbeat:7.17.8

env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch.xxxxxxxx.com
        - name: ELASTICSEARCH_PORT
          value: "80"
        - name: ELASTICSEARCH_USERNAME
          value: xxxxxxx
        - name: ELASTICSEARCH_PASSWORD
          value: xxxxxxxx
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

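The output section of the ConfigMap is untouched from the stock manifest, so (if I read the 7.17 manifest correctly) those env vars are plugged into the Elasticsearch output roughly like this:

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
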
It worked well before, but now it doesn't.

So what usually has happened here is that the alias metricbeat-7.17.8 was removed / deleted...

So now Metricbeat is trying to write to an alias which no longer exists... instead it is writing to a "concrete / actual" index with that name, which is not correct.

1st, run...

GET _cat/aliases/metricbeat-7.17.8?v

It should look like this, but I suspect yours will be missing the entry:

alias             index                               filter routing.index routing.search is_write_index
metricbeat-7.17.8 metricbeat-7.17.8-2023.11.29-000001 -      -             -              true
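If Kibana Dev Tools isn't handy, the same check can be done with curl (host, port and credentials below are placeholders for your values):

curl -s -u 'USER:PASSWORD' 'http://elasticsearch.xxxxxxxx.com:80/_cat/aliases/metricbeat-7.17.8?v'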

2nd run...

GET metricbeat-7.17.8

It SHOULD look like this, but you will be missing the aliases section:


{
  "metricbeat-7.17.8-2023.11.29-000001" : {
    "aliases" : {
      "metricbeat-7.17.8" : { <!--- You will be missing this section... 
        "is_write_index" : true
      }
    },

Generally this can be fixed by stopping all the Metricbeat instances, then DELETE the current metricbeat index, which in your case is the concrete index named metricbeat-7.17.8... and then restart the beats.

But show the results of the commands first.
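For reference, once the missing alias is confirmed, the cleanup would look roughly like this (assuming the stock manifest, which deploys a DaemonSet named metricbeat in the kube-system namespace; adjust names to your deployment):

# 1) Stop all Metricbeat pods so nothing keeps writing to the bad index
kubectl -n kube-system delete daemonset metricbeat

# 2) In Kibana Dev Tools: delete the concrete index that is occupying the alias name
DELETE metricbeat-7.17.8

# 3) Redeploy; on startup Metricbeat recreates the alias and a fresh backing index
kubectl apply -f metricbeat-kubernetes.yaml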
