es.read.field.as.array.include with multiple values

I understand from the documentation that when I get this error:

SparkException: Job aborted due to stage failure: Task 0 in stage 20.0 failed 4 times, most recent failure: Lost task 0.3 in stage 20.0 (TID 75, 10.244.241.30, executor 4): org.elasticsearch.hadoop.rest.EsHadoopParsingException: org.elasticsearch.hadoop.EsHadoopIllegalStateException: Field 'metadata.a.b.c.d' not found; typically this occurs with arrays which are not mapped as single value

I should set:

spark.read.xx
  .option("es.read.field.as.array.include", "metadata.a.b.c,metadata.a.b.c.d,metadata.a.b.c.d.e")

But it keeps throwing the error even when I include the option above.
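For what it's worth, here is a small self-contained sketch of what the option value is meant to look like (the field path and the assumption that every level from `c` down is an array come from the question, not from any mapping): `es.read.field.as.array.include` takes a comma-separated list of field paths, and each nested array level must appear as its own entry.

```scala
// Sketch only: builds the comma-separated value for
// es.read.field.as.array.include, listing every prefix of a dotted
// path from a given depth down to the full path. The path and the
// starting depth are assumptions matching the question.
object ArrayIncludes {
  // entries("metadata.a.b.c.d.e", 4) lists the prefixes with
  // 4, 5, and 6 segments, joined with commas.
  def entries(path: String, fromDepth: Int): String = {
    val parts = path.split('.')
    (fromDepth to parts.length)
      .map(n => parts.take(n).mkString("."))
      .mkString(",")
  }

  def main(args: Array[String]): Unit = {
    // Produces the same value as the .option(...) call above:
    // metadata.a.b.c,metadata.a.b.c.d,metadata.a.b.c.d.e
    println(entries("metadata.a.b.c.d.e", 4))
  }
}
```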

Apache Spark 2.4.5
Scala 2.11
elasticsearch-hadoop 7.6.2

Any thoughts?

es.read.field.as.array.include is a finicky property that can cause some very confusing serialization errors in the connector. Can you post the mapping and a representative document that you're trying to read? Are the c, d, and e fields each meant to be treated as nested arrays (i.e., three levels of nesting)?
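As general background (not specific to this thread): Elasticsearch mappings do not record whether a field holds an array, since any field may contain one, which is why the connector needs this option as a hint instead of inferring it from the mapping. The mapping itself can be retrieved with the standard mapping API (index name is a placeholder):

```
GET /your-index/_mapping
```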