Failed to parse field [coordinates] of type [geo_shape]

I'm using Kibana to upload a GeoJSON file to Elasticsearch. It loads and shows up without problems, but when I try to index it I get:

  "success": true,
  "failures": [
      "item": 0,
      "reason": "failed to parse field [coordinates] of type [geo_shape]",
      "doc": {
        "coordinates": {
          "type": "polygon",
          "coordinates": [

I have fixed all self-intersections, and the error doesn't convey why the indexing is failing.

Here is the geojson:

How can I find out the issue?

Hi @davidb1 I think the issue is that Elasticsearch cannot parse the multi-geometries in your file that have several holes.

I tried several ways to clean/validate your dataset and upload it. Finally, using ogr2ogr (more info) I managed to get a bit more information from the error returned by the tool:

"reason":"failed to parse field [geometry] of type [geo_shape]",
  "reason":"Invalid shape: Hole is not within polygon"}}}

Can I ask what's the use case for a dataset like this?

Note: I moved this topic to the Kibana forum, but I'm moving it back since this is apparently more of an issue for the Elasticsearch team, as it appears not only when indexing from Kibana but also from ogr2ogr.

I'm trying to implement a query for checking whether a shape is at sea.

Elastic seems like a good fit for such queries, but getting the GeoJSON into an index is turning out to be quite a challenge.

Also, it is confusing that the initial stage shows the GeoJSON fine on the map but the indexing fails.

How would I go about fixing something like this? I can't find enough info about the issue.

Thanks for explaining the use case. I'd suggest looking for a land polygon and doing the opposite query: find points that don't intersect with the documents in your indices. That will greatly reduce the chance of getting hit by complex geometries.

Still, for good performance, you need your land/water polygons to be split so the spatial index picks smaller geometries to test against; otherwise your queries will be really slow.
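As a rough illustration of that opposite query, a single point can be tested directly against a land-polygons index with a `geo_shape` query; the index and field names here are hypothetical:

```json
GET /land_polygons/_search
{
  "size": 0,
  "query": {
    "geo_shape": {
      "geometry": {
        "shape": {
          "type": "point",
          "coordinates": [-6.2, 57.4]
        },
        "relation": "intersects"
      }
    }
  }
}
```

Zero hits means the point touches no land polygon, i.e. it is at sea.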

You may want to import more detailed datasets than the well-known Natural Earth. The German OSM community maintains very detailed land/water datasets. These are large files, so uploading with Kibana is not an option. I can suggest using the procedure detailed in this blog post.

Beware that even the simplified land-polygons dataset is going to result in a 3 GB index in your cluster, with 694,381 documents.

Example of the precision of the simplified dataset for the west coast of Scotland.

Hope this helps


Forgot to mention, and it's actually relevant, that you can only run this kind of analysis in Elasticsearch using the enrich processor, since Elasticsearch geo queries only support pointing to a single document via the pre-indexed shape option.

With the enrich processor you can point to your land-polygons index as reference data and use the DISJOINT spatial relation to mark the documents that don't intersect with any document of your reference polygons. That way, later at query time, you only need to filter by that field.
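A sketch of that setup might look roughly like this; the policy, index, and field names are all hypothetical, and the enrich field would be whatever attribute your land polygons carry:

```json
PUT /_enrich/policy/land_lookup
{
  "geo_match": {
    "indices": "land_polygons",
    "match_field": "geometry",
    "enrich_fields": ["FID"]
  }
}

POST /_enrich/policy/land_lookup/_execute

PUT /_ingest/pipeline/mark_at_sea
{
  "processors": [
    {
      "enrich": {
        "policy_name": "land_lookup",
        "field": "location",
        "target_field": "land",
        "shape_relation": "DISJOINT"
      }
    }
  ]
}
```

Documents indexed through the pipeline get the target field populated when the relation matches, so at query time you only need an exists-style filter on it.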

I had a look into this issue and it is actually a known bug that will be resolved here:


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.