Same for me: I have a traces-apm-default index and no data stream. No ILM is configured, so this index keeps growing and eating up all disk space. I have an ELK stack that I have continuously upgraded from earlier versions to the most recent one, so it seems some migration was botched here.
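For reference, this is roughly how the broken state shows up in Kibana Dev Tools (a quick check, assuming the default names): the concrete index exists, but no matching data stream does.

```
# A concrete traces-apm-default index here (rather than only .ds-* backing indices)
# indicates the broken state
GET _cat/indices/traces-apm*?v

# With a healthy APM setup this returns the traces-apm-default data stream;
# in the broken state it returns a 404
GET _data_stream/traces-apm-default
```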
I've found a solution with help from the Elastic team:
This is apparently a bug with APM. You need to stop the APM Server, then remove the APM integration entirely, delete the wrong index, and install the integration again. Only then start the APM Server again. See the GitHub thread for more info.
Hi @amelieBoond ,
apologies for coming back to this topic so late.
With the help of @slhck we were able to identify a bug in the APM Server: when the APM Integration hasn't been installed before the APM Server is stopped or restarted, the server sends any queued events to Elasticsearch, and because the index templates are missing, a plain index is created instead of the expected data stream.
Unfortunately, the easiest way to solve this issue also means losing data:
(1) Stop APM Server
(2) Install the APM Integration via Fleet UI
(3) Delete the traces-apm-default index (see the sketch after this list)
(4) Start the APM Server
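For step (3), a sketch of the deletion from Kibana Dev Tools (double-check what you are about to remove first, since this permanently deletes the data in that index):

```
# Confirm the concrete index (and its size) before removing it
GET _cat/indices/traces-apm-default?v

# Delete the wrongly created index; once the integration from step (2) has
# installed the proper templates, the APM Server will write to a real data stream
DELETE /traces-apm-default
```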
If you are keen on keeping the existing data, you could, after steps (1) and (2), try to reindex the data from the index into a data stream that matches the traces-apm-* index pattern, for example:
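A minimal sketch of that reindex from Kibana Dev Tools (the destination name traces-apm-restored is just an example namespace, since traces-apm-default is still occupied by the broken index, and writing into a data stream requires op_type "create"):

```
# Runs asynchronously and returns a task id
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "traces-apm-default"
  },
  "dest": {
    "index": "traces-apm-restored",
    "op_type": "create"
  }
}

# Follow the progress of the reindex task
GET _tasks?detailed=true&actions=*reindex
```

Once the reindex has finished and you have verified the new data stream, you would still delete the old index (step 3) and start the APM Server again (step 4).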
This would probably mean a longer interruption for the cluster though, as you would have to keep the APM Server disabled for the duration of the reindexing. I'm also not certain how manageable the reindexing is for such a large index.