How to find which node, running an old version of Metricbeat, the data is coming from?

We upgraded to Elastic v7.5 a few months ago and upgraded Metricbeat on most of the known servers to the same version. However, it looks like a few servers might still be running Metricbeat v6.6, because we constantly see this error message in the master Elasticsearch node's logs. Is there an easier way to identify where the source data is coming from based on these logs, instead of logging into all 100+ nodes to validate? Appreciate your time.

failed to put template [metricbeat-6.6.0]
org.elasticsearch.index.mapper.MapperParsingException: Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters:  [doc : {_meta={version=6.6.0}, dynamic_templates=[{traefik.health.response.status_code={path_match=traefik.health.response.status_code.*, mapping={type=long}, match_mapping_type=long}}, {vsphere.virtualmachine.custom_fields={path_match=vsphere.virtualmachine.custom_fields.*, mapping={type=keyword}, match_mapping_type=string}}, {system.process.env={path_match=system.process.env.*, mapping={type=keyword}, match_mapping_type=string}}, {system.process.cgroup.cpuacct.percpu={path_match=system.process.cgroup.cpuacct.percpu.*, mapping={type=long}, match_mapping_type=long}}, {docker.cpu.core.*.pct={path_match=docker.cpu.core.*.pct, mapping={scaling_factor=1000, type=scaled_float}, match_mapping_type=*}}, {docker.cpu.core.*.ticks={path_match=docker.cpu.core.*.ticks, mapping={type=long}, match_mapping_type=long}}, {docker.image.labels={path_match=docker.image.labels.*, mapping={type=keyword}, match_mapping_type=string}}, {kubernetes.apiserver.request.latency.bucket={path_match=kubernetes.apiserver.request.latency.bucket.*, mapping={type=long}, match_mapping_type=long}}, {fields={path_match=fields.*, mapping={type=keyword}, match_mapping_type=string}}, {docker.container.labels={path_match=docker.container.labels.*, mapping={type=keyword}, match_mapping_type=string}}, {strings_as_keyword={mapping={ignore_above=1024, type=keyword}, match_mapping_type=string}}], date_detection=false, properties={container={properties={image={properties={name={path=docker.container.image, type=alias}}}, name={path=docker.container.name, type=alias}, id={path=docker.container.id, type=alias}}}, kubernetes={properties={container={properties={image={ignore_above=1024, type=keyword}, start_time={type=date}, memory={properties={request={properties={bytes={type=long}}}, rss={properties={bytes={type=long}}}, usage={properties={node={properties={pct={scaling_factor=1000, type=scaled_float}}}, bytes={type=long}, limit={properties={pct={scaling_factor=1000, type=scaled_float}}}}}, majorpagefaults={type=long}, limit={properties={bytes={type=long}}}, available={properties={bytes={type=long}}}, workingset={properties={bytes={type=long}}}, pagefaults={type=long}}}, rootfs={properties={inodes={properties={used={type=long}}}, available={properties={bytes={type=long}}}, used={properties={bytes={type=long}}}, capacity={properties={bytes={type=long}}}}}, name={ignore_above=1024, type=keyword}, cpu={properties={request={properties={cores={type=long}, nanocores={type=long}}}, usage={properties={core={properties={ns={type=long}}}, node={properties={pct={scaling_factor=1000, type=scaled_float}}}, nanocores={type=long}, limit={properties={pct={scaling_factor=1000, type=scaled_float}}}}}, limit={properties={cores={type=long}, nanocores={type=long}}}}}, id={ignore_above=1024, type=keyword}, logs={properties={inodes={properties={count={type=long}, used={type=long}, free={type=long}}}, available={properties={bytes={type=long}}}, used={properties={bytes={type=long}}}, capacity={properties={bytes={type=long}}}}}, status={properties={phase={ignore_above=1024, type=keyword}, reason={ignore_above=1024, type=keyword}, ready={type=boolean}, restarts={type=long}}}}}, pod=
etc.....

Do you have Monitoring enabled?

Yes, we do. I did check the Kibana Monitoring page (Beats section), but I only see v7.5 Beats showing up there. I'm guessing that's because the data coming from the 6.6 version never makes it into Elasticsearch, so there's no reference to where it's coming from.
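One quick way to sanity-check that guess is to see whether any metricbeat-6.* indices exist at all. A minimal sketch using the _cat/indices API (this assumes Python with the requests library and a cluster reachable at http://localhost:9200 without auth; adjust the URL and add credentials as needed for your setup):

```python
import requests

# List any Metricbeat 6.x indices. An empty result suggests the 6.6 events
# are never being indexed, not just missing from the Monitoring UI.
resp = requests.get(
    "http://localhost:9200/_cat/indices/metricbeat-6.*",
    params={"format": "json"},
)
resp.raise_for_status()
for idx in resp.json():
    print(idx["index"], idx["docs.count"])
```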

Ok, do you have deployment automation at all? Puppet/chef/ansible/whatever.

Yes, we do have automation at the Elastic level using Ansible, but the Metricbeat installation/configuration is limited to subsets of our servers.

That should be able to do reporting on installed packages then?

I verified all the servers to which Metricbeat was rolled out through Ansible, and they all have v7.5 running. So the one with v6.6 must be one of those miscellaneous servers that did not get updated through Ansible, and that's what I'm trying to find without having to log into every unknown server to check whether it has v6 installed.

In Discover can you pull up the Monitoring indices and then filter down based on version?
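Something like the following sketch shows the idea, querying the internal monitoring indices directly and grouping the reported hosts by Beat version. The field names (beats_stats.beat.version, beats_stats.beat.host) and the .monitoring-beats-* index pattern are assumptions based on the standard 7.x internal-collection monitoring documents; adjust the URL and auth for your cluster:

```python
import json
import requests

# Group monitored Beats by version and list the hosts reporting each version.
query = {
    "size": 0,
    "aggs": {
        "versions": {
            "terms": {"field": "beats_stats.beat.version", "size": 50},
            "aggs": {
                "hosts": {"terms": {"field": "beats_stats.beat.host", "size": 500}}
            },
        }
    },
}
resp = requests.get(
    "http://localhost:9200/.monitoring-beats-*/_search",
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["versions"]["buckets"]:
    hosts = [h["key"] for h in bucket["hosts"]["buckets"]]
    print(bucket["key"], hosts)
```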

Or, even simpler, just go into Discover, look at metricbeat-*, open a Metricbeat document and filter out the agent version(s) you are not looking for.
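As a sketch, the equivalent query against the event indices themselves, grouping hostnames by version. This assumes the 6.6 events are actually being indexed, and that the default field names are in place: 7.x Metricbeat writes the ECS agent.version / agent.hostname fields, while 6.x writes beat.version / beat.hostname (if the 6.6 template never loaded, those indices may have dynamic mappings and you may need beat.version.keyword instead):

```python
import json
import requests

# For each Metricbeat version seen in metricbeat-*, list the reporting hostnames.
def hosts_by_version(version_field: str, host_field: str) -> dict:
    query = {
        "size": 0,
        "aggs": {
            "versions": {
                "terms": {"field": version_field, "size": 20},
                "aggs": {"hosts": {"terms": {"field": host_field, "size": 1000}}},
            }
        },
    }
    resp = requests.get(
        "http://localhost:9200/metricbeat-*/_search",
        headers={"Content-Type": "application/json"},
        data=json.dumps(query),
    )
    resp.raise_for_status()
    buckets = resp.json()["aggregations"]["versions"]["buckets"]
    return {b["key"]: [h["key"] for h in b["hosts"]["buckets"]] for b in buckets}

print(hosts_by_version("agent.version", "agent.hostname"))  # 7.x-style documents
print(hosts_by_version("beat.version", "beat.hostname"))    # 6.x-style documents
```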
