I have a very strange problem with the geo_point type and "script_fields" / "fielddata_fields" search queries.
What I want to do is retrieve the latitude and longitude from a geo_point field created by a custom plugin that computes and indexes the centroid of a shape (https://github.com/opendatasoft/elasticsearch-plugin-geoshape). By definition, this geo_point is not in the document source, so to get it in the results I need to use fielddata_fields or script_fields.
1 ) fielddata_fields
It works fine with one node (or one shard), but since the geo_point type can't be serialized, an exception is raised with two or more nodes. Is this actually a bug or not? A working patch could be: https://gist.github.com/clement-tourriere/2aa7219bd1da96393cbd.
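For concreteness, here is a minimal sketch of the kind of request body that triggers this. The field name my_shape.centroid is the one produced by the plugin; the index name and query are placeholders.

```python
import json

# Minimal fielddata_fields request body. With one shard it returns the
# point; with two or more nodes it triggers the serialization exception.
query = {
    "query": {"match_all": {}},
    "fielddata_fields": ["my_shape.centroid"],
}

# This body would be POSTed to a placeholder /my_index/_search endpoint.
print(json.dumps(query, indent=2))
```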
2 ) Using script_fields
When using doc['my_shape.centroid'].value, I have the same problem as above (the same code is used internally via getScriptValue()).
When using doc['my_shape.centroid'].lat and doc['my_shape.centroid'].lon, it works just fine for simple requests. But when performing parallel queries on at least 100,000 documents, it raises an exception: https://gist.github.com/clement-tourriere/855a24f150efb06e8fec.
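For reference, a sketch of the script_fields variant that works for simple requests, assuming inline Groovy scripts as in Elasticsearch 1.x; the output field names centroid_lat/centroid_lon are placeholders.

```python
import json

# script_fields request reading lat and lon separately from doc values,
# instead of accessing .value (which fails as described above).
query = {
    "query": {"match_all": {}},
    "script_fields": {
        "centroid_lat": {"script": "doc['my_shape.centroid'].lat"},
        "centroid_lon": {"script": "doc['my_shape.centroid'].lon"},
    },
}
print(json.dumps(query, indent=2))
```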
I have run a lot of tests to be able to reproduce it, and here are my conclusions.
I started by indexing a geo_point field with doc_values set to true. A full example can be found here (written in Python, with Celery for parallel indexing/requests): https://gist.github.com/clement-tourriere/82b27c25d5876ae53eef
Everything works just fine.
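For reference, a sketch of the mapping used in this working case; the field name and the exact doc_values placement are assumptions based on the description above, and the full setup is in the gist.

```python
import json

# Plain geo_point field with doc_values enabled: the configuration that
# works fine. "centroid" is a placeholder field name.
mapping = {
    "properties": {
        "centroid": {
            "type": "geo_point",
            "doc_values": True,
        }
    }
}
print(json.dumps(mapping, indent=2))
```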
The next example uses a geo_point defined as an external_value, with doc_values set to true (I wrote a simple class for that, or you can use the geo_shape plugin to test this case): https://gist.github.com/clement-tourriere/1ffe9eba5bfa0070e5d2.
With the same use case (10 parallel requests on 100,000 documents), it now raises the Groovy BufferOverflow exception.
If I remove doc_values from the external geo_point definition, it works fine again.
Could you please help me with that?
Thank you again for your incredible work.