Hi Team,
I am trying to read data from an Elasticsearch index into a Spark DataFrame, but the index has two fields with the same name in different cases (upper/lower).
Below is the mapping, and the error I am getting is: pyspark.sql.utils.AnalysisException: u'Found duplicate column(s) in the data schema: providercolumn;'
Can you please help with how to deal with this scenario?
{
  "INDEXNAME": {
    "mappings": {
      "Type": {
        "properties": {
          "Providercolumn": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "providercolumn": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      }
    }
  }
}
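
For context, this is roughly how I am loading the index (a minimal sketch using the standard elasticsearch-hadoop connector; the host, port, and index/type names are placeholders for my actual setup):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("es-read").getOrCreate()

# Read the index through the elasticsearch-hadoop connector.
# "localhost"/"9200" and "INDEXNAME/Type" are placeholders.
df = (spark.read
      .format("org.elasticsearch.spark.sql")
      .option("es.nodes", "localhost")
      .option("es.port", "9200")
      .load("INDEXNAME/Type"))  # fails here with the duplicate-column AnalysisException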