Huge disk read count when MapperParsingException occurs

I noticed that the indexing performance of the entire cluster was gradually decreasing day by day.
After checking the size of all the indices, I didn't find anything abnormal.
At the same time, however, I noticed an abnormally high disk read count, and later I found mapping errors in the Elasticsearch node logs.
Why does this error result in such a high disk read count? (It pushes IOPS above 8k io/s.)

Error message:
{"type": "server", "timestamp": "2023-08-25T03:51:49,375Z", "level": "WARN", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "es-log", "node.name": "es-cluster-0", "message": "unexpected error while indexing monitoring document", "cluster.uuid": "QxeQnY9JTmCpSoPTgdrzEg", "node.id": "hfKHxOK_R8KYPxVSmsRvNQ" ,"stacktrace": ["org.elasticsearch.xpack.monitoring.exporter.ExportException: MapperParsingException[failed to parse]; nested: IllegalArgumentException[Limit of total fields [1000] has been exceeded while adding new fields [644]];","at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:135) ~[x-pack-monitoring-7.17.2.jar:7.17.2]",
"at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]",
...
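The nested IllegalArgumentException shows that the monitoring index has hit the default index.mapping.total_fields.limit of 1000. Below is a minimal sketch for gauging how close an index's mapping already is to that limit; it assumes an unauthenticated cluster at http://localhost:9200 and the 7.x monitoring index pattern .monitoring-es-7-*, so adjust both for your setup.

```python
# Sketch: approximate the number of mapped fields in the monitoring indices,
# to compare against index.mapping.total_fields.limit (default 1000).
# Assumes an unauthenticated cluster at http://localhost:9200.
import requests

ES = "http://localhost:9200"

def count_fields(properties: dict) -> int:
    """Recursively count object fields, leaf fields, and multi-fields."""
    total = 0
    for field in properties.values():
        total += 1
        if "properties" in field:        # nested object fields
            total += count_fields(field["properties"])
        if "fields" in field:            # multi-fields (e.g. keyword sub-fields)
            total += len(field["fields"])
    return total

resp = requests.get(f"{ES}/.monitoring-es-7-*/_mapping")
resp.raise_for_status()
for index, body in resp.json().items():
    props = body["mappings"].get("properties", {})
    print(f"{index}: ~{count_fields(props)} mapped fields")
```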

I eventually turned off the monitoring feature by setting xpack.monitoring.collection.enabled to false.
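For reference, xpack.monitoring.collection.enabled is a dynamic cluster setting, so it can also be changed at runtime through the cluster settings API. A minimal sketch, again assuming an unauthenticated cluster at http://localhost:9200 (add auth/TLS options as your deployment requires):

```python
# Sketch: disable X-Pack monitoring collection via the cluster settings API.
import requests

resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"xpack.monitoring.collection.enabled": False}},
)
resp.raise_for_status()
print(resp.json())
```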
