Problems with metricbeat 7.3.0 when using filebeat to ship metricbeat probe logs

Hi,

I was using metricbeat 6.5.4 in my dev system before I upgraded to 7.3.0.
Metricbeat is not pushing directly to elasticsearch; instead it writes the probes it takes to the filesystem, so the filesystem is my first buffer layer for metricbeat events. These log files are then picked up by filebeat, shipped to redis, fetched from redis by logstash, and finally pushed to elasticsearch.
Filebeat simply adds a field "logType" and uses this logType as the redis key.
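
For reference, the filebeat side is configured roughly like this (a minimal sketch, not my exact config; the log path is the one that shows up in the error further down, host and port come from the environment):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/metricbeat/probes/metricbeat_probes.log
    # put the routing field at the top level of the event
    fields:
      logType: metricbeat
    fields_under_root: true

output.redis:
  hosts: ["${REDIS_HOST}:${REDIS_PORT}"]
  # use the logType field as the redis list key (list is the default data type)
  key: "%{[logType]}"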

The logstash configuration is quite trivial:

input
{
	redis
	{
		data_type => "list"
		db 				=> "${REDIS_DB}"
		host 			=> "${REDIS_HOST}"
		port			=> "${REDIS_PORT}"
		key 			=> "metricbeat"
	}
}

filter
{

	json
	{
		id => "json"
		source => "message"
	}
	
	# delete message if no _jsonparsefailure
	if ("_jsonparsefailure" not in [tags])
	{
		mutate
		{
			remove_field => ['message']
		}
	}
}

###
# this filter just sets the index name and the index rotation interval
###
filter
{
  mutate
  {
    # as suffix you can use following options: SUFFIX_WEEKLY, SUFFIX_DAILY, SUFFIX_MONTHLY
    # !!! NOT USING PREFIX plx_ FOR METRICBEAT !!!!
    add_field => { "[@metadata][indexName]" => "%{[logType]}-${SUFFIX_WEEKLY}" }
  }
}


output
{
	elasticsearch
	{
		hosts 		=> ["${ES_HOST}:${ES_PORT}"]
		ssl 			=> "${USE_ES_SSL}"
		cacert		=> "${ES_CA_CERT_PATH}"

		# credentials are fetched from environment or logstash-keystore

		user			=> "${LOGSTASH_USER}"
		password	=> "${LOGSTASH_PASSWORD}"

		index			=> "%{[@metadata][indexName]}"
	}
}

After upgrading, I exported the metricbeat 7.3.0 index template and added it to elasticsearch via Kibana's Dev Tools.
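
Roughly these steps (a sketch; the exported template body is omitted here):

# on a host with metricbeat 7.3.0 installed
metricbeat export template > metricbeat-7.3.0-template.json

# then in Kibana Dev Tools, with the exported JSON as the request body
PUT _template/metricbeat-7.3.0
{ ... contents of metricbeat-7.3.0-template.json ... }

# list the installed templates
GET _cat/templates?v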

I now have these templates (columns: name, index pattern, order, version):

.ml-meta                    [.ml-meta]                    0          7030099
.monitoring-alerts-7        [.monitoring-alerts-7]        0          7000199
.ml-config                  [.ml-config]                  0          7030099
.watches                    [.watches*]                   2147483647 
.ml-notifications           [.ml-notifications]           0          7030099
.data-frame-notifications-1 [.data-frame-notifications-*] 0          7030099
.watch-history-10           [.watcher-history-10*]        2147483647 
.ml-anomalies-              [.ml-anomalies-*]             0          7030099
metricbeat-7.3.0            [metricbeat-*]                1          
.watch-history-9            [.watcher-history-9*]         2147483647 
.logstash-management        [.logstash]                   0          
.kibana_task_manager        [.kibana_task_manager]        0          7030099
.management-beats           [.management-beats]           0          70000
.ml-state                   [.ml-state*]                  0          7030099
.monitoring-kibana          [.monitoring-kibana-7-*]      0          7000199
plx                         [plx*]                        0          
.monitoring-beats           [.monitoring-beats-7-*]       0          7000199
.monitoring-es              [.monitoring-es-7-*]          0          7000199
logstash                    [logstash-*]                  0          60001
.monitoring-logstash        [.monitoring-logstash-7-*]    0          7000199
.triggered_watches          [.triggered_watches*]         2147483647 
.data-frame-internal-1      [.data-frame-internal-1]      0          7030099

I also deleted all metricbeat indices, so there shouldn't be any type mismatch. All metricbeat instances are updated to 7.3.0, so there is no cross-version traffic.

Metricbeat data is getting stuck in logstash:

[2019-08-12T12:36:33,468][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-2019.w33", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x6f1e9013>], :response=>{"index"=>{"_index"=>"metricbeat-2019.w33", "_type"=>"_doc", "_id"=>"xIfThWwBNHp2V5M6a9Pb", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [source] tried to parse field [source] as object, but found a concrete value"}}}}

Any idea what is causing this trouble?

Thanks, Andreas

I played around a bit and changed my index name to now include the metricbeat version number, but with no success.
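
The index-name filter now looks roughly like this (the exact field reference for the version is from memory and may differ):

filter
{
  mutate
  {
    # resolves to something like metricbeat-7.3.0-2019.w33 (cf. the error below)
    add_field => { "[@metadata][indexName]" => "%{[logType]}-%{[agent][version]}-${SUFFIX_WEEKLY}" }
  }
}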

Elasticsearch is giving this error:

{"type": "server", "timestamp": "2019-08-12T13:45:16,692+0000", "level": "DEBUG", "component": "o.e.a.b.TransportShardBulkAction", "cluster.name": "poc", "node.name": "poc-es-master-1", "cluster.uuid": "f-MnBq0BQbWRh5--yqFfgA", "node.id": "0Qh_dVaVTs6eWR1kWxP51g",  "message": "[metricbeat-7.3.0-2019.w33][0] failed to execute bulk item (index) index {[metricbeat-7.3.0-2019.w33][_doc][0DMUhmwBthAGD_OhkTgM], source[{\"offset\":8312360,\"logType\":\"metricbeat\",\"host\":{\"name\":\"myserver\"},\"agent\":{\"type\":\"metricbeat\",\"version\":\"7.3.0\",\"id\":\"bc3d0915-22b9-4b94-83db-b642c7a413ad\",\"ephemeral_id\":\"9f40dbb6-44b6-4290-8412-3972eba8c9e3\",\"hostname\":\"myserver\"},\"input\":{\"type\":\"log\"},\"prospector\":{\"type\":\"log\"},\"logstash\":{\"processing\":{\"filterEnd\":\"2019-08-12T13:45:16.680Z\",\"filterStart\":\"2019-08-12T13:45:16.679Z\",\"filterTime\":1}},\"metricset\":{\"name\":\"diskio\"},\"@timestamp\":\"2019-08-12T13:46:27.061Z\",\"beat\":{\"version\":\"6.5.4\",\"name\":\"myserver\",\"hostname\":\"myserver\"},\"@version\":\"1\",\"ecs\":{\"version\":\"1.0.1\"},\"service\":{\"type\":\"system\"},\"source\":\"/var/log/metricbeat/probes/metricbeat_probes.log\",\"system\":{\"diskio\":{\"io\":{\"time\":272453925},\"iostat\":{\"await\":0.6972624798711755,\"busy\":0.05842120856788753,\"write\":{\"request\":{\"merges_per_sec\":16.59162323328006,\"per_sec\":41.462366309323606},\"await\":0.6972624798711755,\"per_sec\":{\"bytes\":627307.3080758975}},\"queue\":{\"avg_size\":0.02887676880641298},\"service_time\":0.014090177133655395,\"read\":{\"await\":0,\"request\":{\"merges_per_sec\":0,\"per_sec\":0},\"per_sec\":{\"bytes\":0}},\"request\":{\"avg_size\":15129.558776167472}},\"read\":{\"count\":8010082,\"time\":144351032,\"bytes\":184426139136},\"write\":{\"count\":1613094680,\"time\":490286839,\"bytes\":61412385108480},\"name\":\"sda\"}},\"event\":{\"dataset\":\"system.diskio\",\"module\":\"system\",\"duration\":1207338}}]}" ,
"stacktrace": ["org.elasticsearch.index.mapper.MapperParsingException: object mapping for [source] tried to parse field [source] as object, but found a concrete value",
"at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:377) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:485) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:614) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:427) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:395) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:112) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:71) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:267) ~[elasticsearch-7.3.0.jar:7.3.0]",
"at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:772) ~[elasticsearch-7.3.0.jar:7.3.0]",

Great, after a lot of searching I found the root cause.

I don't know for which module, but metricbeat declares source as an object in its mapping.
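
One way to see it is to look at the source field of the installed template in Dev Tools, e.g. (request sketch, the template name taken from the listing above):

GET _template/metricbeat-7.3.0?filter_path=*.mappings.properties.source

# the response should show "source" mapped as an object with sub-fields
# (ECS defines source.ip, source.address, ...), not as a plain string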

But I am using filebeat to ship the saved metricbeat logfile, and filebeat itself creates a field source as a string (the path of the file it read). That filebeat field conflicts with the source object in the metricbeat mapping.

So the issue should only occur if I use filebeat to ship metricbeat probe logs!

So I came to the following solution:
For metrics there is no real benefit in knowing which logfile was read, so I simply get rid of that field during the json parsing.

I changed my filter for parsing the json message to the following:

filter
{

	# filebeat exports a field "source" as a string, while the metricbeat mapping declares "source" as an object. These collide.
	# We don't need the source file name for metrics, so we delete the filebeat field BEFORE parsing the json message.

	mutate
	{
		remove_field => [ "source" ]
	}

	json
	{
		id => "json"
		source => "message"
	}


	# delete message if no _jsonparsefailure
	if ("_jsonparsefailure" not in [tags])
	{
		mutate
		{
			remove_field => ['message']
		}
	}
}
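
If you do want to keep the file name, an alternative (untested sketch) would be to rename the filebeat field to the ECS field log.file.path instead of deleting it, assuming that field is mapped as a plain keyword in your template. That is, replace the remove_field mutate above with something like:

	mutate
	{
		# keep the originating file name under an ECS-style field so it no longer
		# collides with metricbeat's "source" object mapping
		rename => { "source" => "[log][file][path]" }
	}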

-> Maybe it helps someone else :wink:

PS: renamed the title to be closer to the root cause.
PPS: before the upgrade I forgot to add the metricbeat mapping templates, so I think that is why I did not face this issue on the older version.
