We upgraded to Elastic v7.5 a few months ago and upgraded Metricbeat on most of the known servers to the same version. However, it looks like a few servers may still be running Metricbeat v6.6, so we see this error thrown constantly in the master Elasticsearch node's logs. Is there an easier way to identify where the source data is coming from than logging into all 100+ nodes and checking each one? Appreciate your time.
failed to put template [metricbeat-6.6.0]
org.elasticsearch.index.mapper.MapperParsingException: Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters: [doc : {_meta={version=6.6.0}, dynamic_templates=[{traefik.health.response.status_code={path_match=traefik.health.response.status_code.*, mapping={type=long}, match_mapping_type=long}}, {vsphere.virtualmachine.custom_fields={path_match=vsphere.virtualmachine.custom_fields.*, mapping={type=keyword}, match_mapping_type=string}}, {system.process.env={path_match=system.process.env.*, mapping={type=keyword}, match_mapping_type=string}}, {system.process.cgroup.cpuacct.percpu={path_match=system.process.cgroup.cpuacct.percpu.*, mapping={type=long}, match_mapping_type=long}}, {docker.cpu.core.*.pct={path_match=docker.cpu.core.*.pct, mapping={scaling_factor=1000, type=scaled_float}, match_mapping_type=*}}, {docker.cpu.core.*.ticks={path_match=docker.cpu.core.*.ticks, mapping={type=long}, match_mapping_type=long}}, {docker.image.labels={path_match=docker.image.labels.*, mapping={type=keyword}, match_mapping_type=string}}, {kubernetes.apiserver.request.latency.bucket={path_match=kubernetes.apiserver.request.latency.bucket.*, mapping={type=long}, match_mapping_type=long}}, {fields={path_match=fields.*, mapping={type=keyword}, match_mapping_type=string}}, {docker.container.labels={path_match=docker.container.labels.*, mapping={type=keyword}, match_mapping_type=string}}, {strings_as_keyword={mapping={ignore_above=1024, type=keyword}, match_mapping_type=string}}], date_detection=false, properties={container={properties={image={properties={name={path=docker.container.image, type=alias}}}, name={path=docker.container.name, type=alias}, id={path=docker.container.id, type=alias}}}, kubernetes={properties={container={properties={image={ignore_above=1024, type=keyword}, start_time={type=date}, memory={properties={request={properties={bytes={type=long}}}, rss={properties={bytes={type=long}}}, 
usage={properties={node={properties={pct={scaling_factor=1000, type=scaled_float}}}, bytes={type=long}, limit={properties={pct={scaling_factor=1000, type=scaled_float}}}}}, majorpagefaults={type=long}, limit={properties={bytes={type=long}}}, available={properties={bytes={type=long}}}, workingset={properties={bytes={type=long}}}, pagefaults={type=long}}}, rootfs={properties={inodes={properties={used={type=long}}}, available={properties={bytes={type=long}}}, used={properties={bytes={type=long}}}, capacity={properties={bytes={type=long}}}}}, name={ignore_above=1024, type=keyword}, cpu={properties={request={properties={cores={type=long}, nanocores={type=long}}}, usage={properties={core={properties={ns={type=long}}}, node={properties={pct={scaling_factor=1000, type=scaled_float}}}, nanocores={type=long}, limit={properties={pct={scaling_factor=1000, type=scaled_float}}}}}, limit={properties={cores={type=long}, nanocores={type=long}}}}}, id={ignore_above=1024, type=keyword}, logs={properties={inodes={properties={count={type=long}, used={type=long}, free={type=long}}}, available={properties={bytes={type=long}}}, used={properties={bytes={type=long}}}, capacity={properties={bytes={type=long}}}}}, status={properties={phase={ignore_above=1024, type=keyword}, reason={ignore_above=1024, type=keyword}, ready={type=boolean}, restarts={type=long}}}}}, pod=
etc.....
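In case it helps frame an answer: one approach I was considering is querying the existing metricbeat indices for documents that still report the old agent version, since Metricbeat 6.x documents carry `beat.version` and `beat.hostname` fields. A minimal sketch of such a query (the index pattern, field names, and bucket size are assumptions based on the default 6.x template, not something I've verified against our cluster):

```json
POST /metricbeat-*/_search
{
  "size": 0,
  "query": {
    "term": { "beat.version": "6.6.0" }
  },
  "aggs": {
    "old_beat_hosts": {
      "terms": { "field": "beat.hostname", "size": 500 }
    }
  }
}
```

If this works, the `old_beat_hosts` buckets should list the hostnames still shipping 6.6.0 data, without logging into each node. The caveat is that it only helps if the 6.6 agents are still managing to index documents despite the failed template put.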