How to work around "Incompatible types found in multi-mapping" when reading data from Elasticsearch with Spark?

How can I work around this problem (without reindexing)? The load fails with:

Py4JJavaError: An error occurred while calling o2649.load.
: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Incompatible types found in multi-mapping: Field [metadata.cscope.0] has conflicting types of [LONG] and [OBJECT].

I tried to exclude this field with es.read.field.exclude, but it did not help.

This is how I read the data:

reader = spark.read.format("org.elasticsearch.spark.sql") \
    .option("es.port", "9200") \
    .option("es.net.http.auth.user", "") \
    .option("es.net.http.auth.pass", "") \
    .option("es.nodes.wan.only", "true") \
    .option("es.mapping.date.rich", "false") \
    .option("es.http.timeout", "3m") \
    .option("es.read.field.include", "log") \
    .option("es.read.field.exclude", "metadata*") \
    .option("es.read.metadata", "false") \
    .option("es.nodes", IP)

df = reader.load(INDEX_NAME)

(Note: I originally passed the auth settings as "spark.es.net.http.auth.user"/"spark.es.net.http.auth.pass"; the "spark." prefix is only for SparkConf entries, so options set through the DataFrameReader should use the plain "es." keys as above.)

Spark version: 2.4.5
Library: elasticsearch-spark-20_2.11-7.6.2.jar
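One direction I am considering, in case the SQL connector cannot be made to skip the conflicting field: reading through the low-level EsInputFormat RDD API instead of the Spark SQL data source. Since the RDD path returns each document as a map and does not build a unified SQL schema up front, it should not hit the multi-mapping check at all. This is only a sketch, untested against my cluster; `IP` and `INDEX_NAME` are the same variables as in the snippet above, and the use of `es.read.source.filter` to restrict fetched fields is my assumption about the right knob at this layer:

```python
def build_es_conf(resource, nodes, port="9200"):
    # Hadoop configuration for EsInputFormat; keys are standard
    # es-hadoop options, passed as plain strings.
    return {
        "es.resource": resource,
        "es.nodes": nodes,
        "es.port": port,
        "es.nodes.wan.only": "true",
        # Only fetch the "log" field at the source level, so the
        # conflicting metadata.* fields are never materialized.
        "es.read.source.filter": "log",
    }


def read_as_rdd(sc, resource, nodes):
    # Each element is a (doc_id, document-as-map) pair; no Spark SQL
    # schema is inferred, so conflicting mappings do not matter here.
    return sc.newAPIHadoopRDD(
        inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
        keyClass="org.apache.hadoop.io.NullWritable",
        valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
        conf=build_es_conf(resource, nodes),
    )


# Usage (with an active SparkContext `sc`):
# rdd = read_as_rdd(sc, INDEX_NAME, IP)
# rdd.take(5)
```

The obvious downside is losing the DataFrame API, so this would only be a fallback if the exclude options cannot be honored during schema discovery.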