We have several ES clusters receiving logs from Kubernetes apps (via fluentd) and non-Kubernetes apps (via filebeat). All clusters have the same configuration. The "sandbox" cluster contains "unknown" type fields in some indices. This is strange because the same indices in the other clusters do not have "unknown" type fields. The Elasticsearch version is 6.8.6.
I have tried to identify which indices contain these fields, without success so far. Since we use one index per Kubernetes application, every cluster has many Kubernetes indices and only a few non-Kubernetes indices.
I spun up a vanilla ES cluster and sent all Kubernetes logs to it by re-deploying fluentd with the new cluster's endpoint. No "unknown" type fields appeared there.
So the offender must be a non-Kubernetes (filebeat) index. There are only a few of those, and I went through them manually by fetching each JSON mapping locally and searching through it, but I could not find any mapping containing a field of a different type.
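To make the manual search reproducible, something like the following sketch could flatten every mapping into (index, field, type) triples and then flag fields that are mapped with more than one type across indices. This assumes the output of `GET _all/_mapping` (6.x format, with the mapping type level) has been saved to a local JSON file; the helper names are mine, not part of any API.

```python
import json
from collections import defaultdict

def field_types(properties, prefix="", out=None):
    """Recursively collect (dotted_field, type) pairs from a 'properties' dict."""
    if out is None:
        out = []
    for name, spec in properties.items():
        path = f"{prefix}{name}"
        if "type" in spec:
            out.append((path, spec["type"]))
        if "properties" in spec:  # object field: recurse into sub-fields
            field_types(spec["properties"], prefix=f"{path}.", out=out)
    return out

def index_field_types(all_mappings):
    """Flatten a 6.x GET _all/_mapping response into (index, field, type) triples."""
    rows = []
    for index, body in all_mappings.items():
        for _doc_type, mapping in body.get("mappings", {}).items():
            for field, ftype in field_types(mapping.get("properties", {})):
                rows.append((index, field, ftype))
    return rows

def conflicting_fields(rows):
    """Return {field: {type: [indices]}} for fields mapped with >1 distinct type."""
    by_field = defaultdict(lambda: defaultdict(list))
    for index, field, ftype in rows:
        by_field[field][ftype].append(index)
    return {f: dict(t) for f, t in by_field.items() if len(t) > 1}

if __name__ == "__main__":
    # e.g. curl -s 'localhost:9200/_all/_mapping' > mappings.json
    with open("mappings.json") as fh:
        rows = index_field_types(json.load(fh))
    for field, types in conflicting_fields(rows).items():
        print(field, types)
```

This reports, for every field, which indices map it with which type, which is more reliable than eyeballing large mapping dumps.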
Since the sandbox cluster is for testing purposes only, I can completely wipe the cluster and spin up new Elasticsearch nodes without problems. I did that, but the same "unknown" type fields appeared again, see picture below.
How can I find which index causes the problem?