WARN elasticsearch/client.go:520 Cannot index event publisher.Event

Hi,

I have been seeing the following issue in the APM logs for the last 2 days:

2018-11-06T09:34:13.816Z WARN elasticsearch/client.go:520 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x35495b80, ext:63677093648, loc:(*time.Location)(nil)}, Meta:common.MapStr(nil), Fields:common.MapStr{"processor":common.MapStr{"name":"transaction", "event":"transaction"}, "transaction":common.MapStr{"result":"success", "sampled":true, "id":"986e2613-6bbe-493b-931e-560a729b08ac", "name":"GET static file", "duration":common.MapStr{"us":1718}, "type":"request"}, "context":common.MapStr{"service":common.MapStr{"name":"XXXXXX", "agent":common.MapStr{"name":"nodejs", "version":"1.12.0"}, "language":common.MapStr{"name":"javascript"}, "runtime":common.MapStr{"name":"node", "version":"v8.11.2"}, "framework":common.MapStr{"name":"express", "version":"4.13.4"}},
.
.
.
.
(status=400): {"type":"mapper_parsing_exception","reason":"Failed to parse mapping [doc]: Mapping definition for [host] has unsupported parameters: [properties : {os={properties={family={ignore_above=1024, type=keyword}, version={ignore_above=1024, type=keyword}, platform={ignore_above=1024, type=keyword}}}, ip={type=ip}, name={ignore_above=1024, type=keyword}, id={ignore_above=1024, type=keyword}, mac={ignore_above=1024, type=keyword}, architecture={ignore_above=1024, type=keyword}}]","caused_by":{"type":"mapper_parsing_exception","reason":"Mapping definition for [host] has unsupported parameters: [properties : {os={properties={family={ignore_above=1024, type=keyword}, version={ignore_above=1024, type=keyword}, platform={ignore_above=1024, type=keyword}}}, ip={type=ip}, name={ignore_above=1024, type=keyword}, id={ignore_above=1024, type=keyword}, mac={ignore_above=1024, type=keyword}, architecture={ignore_above=1024, type=keyword}}]"}}
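The 400 suggests that more than one index template matching the daily apm-* pattern defines the [host] field, and that the definitions are incompatible. One way to check is to list the templates whose pattern would match the index and compare how each maps host. A minimal sketch, assuming Elasticsearch is reachable at http://localhost:9200 without authentication (the index name is the missing day from below; the path through mappings follows the 6.x template response format):

```python
# List every index template whose pattern matches the daily APM index and
# print how each one maps the "host" field, to spot a conflicting definition.
import fnmatch
import requests

ES = "http://localhost:9200"                 # assumption: local, unauthenticated cluster
index_name = "apm-6.4.2-2018.11.06"          # the daily index that is not being created

templates = requests.get(f"{ES}/_template").json()

for name, body in templates.items():
    patterns = body.get("index_patterns", [])
    if not any(fnmatch.fnmatch(index_name, p) for p in patterns):
        continue
    # Templates may key their mappings under "doc", "_default_", etc.
    for doc_type, mapping in body.get("mappings", {}).items():
        host = mapping.get("properties", {}).get("host")
        if host is not None:
            print(f"template {name!r} ({doc_type}): host = {host}")
```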

There is no corresponding message in the Elasticsearch logs.

Note that Elasticsearch no longer creates the new daily index at 00:00, whereas it was being created before 2018-11-06:

[2018-11-04T00:00:02,805][INFO ][o.e.c.m.MetaDataCreateIndexService] [_knGLBb] [apm-6.4.2-2018.11.04] creating index, cause [auto(bulk api)], templates [apm-6.4.2], shards [5]/[1], mappings [doc]
[2018-11-05T00:00:15,114][INFO ][o.e.c.m.MetaDataCreateIndexService] [_knGLBb] [apm-6.4.2-2018.11.05] creating index, cause [auto(bulk api)], templates [apm-6.4.2], shards [5]/[1], mappings [doc]
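Since nothing shows up on the Elasticsearch side, another way to surface the error there is to try creating the missing daily index by hand; if two matching templates define [host] incompatibly, the same mapper_parsing_exception should come back in the response body. A minimal sketch under the same localhost assumption (the date is simply the day that is missing above):

```python
# Manually create the missing daily index so Elasticsearch returns the
# mapping error directly, even if its own logs stay silent.
import requests

ES = "http://localhost:9200"   # assumption: local, unauthenticated cluster

resp = requests.put(f"{ES}/apm-6.4.2-2018.11.06")
print(resp.status_code)
print(resp.json())
```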

I'm running APM Server / Elasticsearch / Kibana 6.4.2 in 3 Docker containers on the same node.
Free disk space is around 38% (10 GB available).

The following post didn't help:

OK, the culprit was a Logstash template messing up Elasticsearch.
After dropping that template, Elasticsearch immediately created the APM index.
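In case it helps anyone else, dropping the template was just a single DELETE against the template API. A minimal sketch, assuming the same local cluster and that the conflicting template is literally named "logstash" (confirm the actual name against GET /_template first):

```python
# Delete the conflicting Logstash template so the next daily apm-* index
# is created from the apm-6.4.2 template alone.
import requests

ES = "http://localhost:9200"           # assumption: local, unauthenticated cluster
TEMPLATE = "logstash"                   # assumption: verify the real template name first

resp = requests.delete(f"{ES}/_template/{TEMPLATE}")
print(resp.status_code, resp.json())
```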