That deployment is perfectly capable of managing this dataset and far bigger ones. Since this looks like an issue with our cloud service, I suggest you open a support ticket (details here) and include this post for reference.
Regarding `ogr2ogr`, the blog post explains how to use the tool; you can find the Elasticsearch endpoint in the deployment interface.
Note that the current stable release of the tool is not yet compatible with Elasticsearch 7. The easiest way to run it at the moment is through a Docker image, but you will need some fluency with both tools to debug any issues you may find.
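Before anything else, it's worth checking that the image you pull actually ships the Elasticsearch driver. A quick sanity check, assuming the same `osgeo/gdal:alpine-small-latest` image used below (if the driver is missing from the small variant, try one of the larger image tags):

```
# Print the GDAL version bundled in the container
$ docker run --rm osgeo/gdal:alpine-small-latest ogrinfo --version

# List available drivers and look for Elasticsearch
$ docker run --rm osgeo/gdal:alpine-small-latest ogrinfo --formats | grep -i elastic
```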
The workflow I would use is:
- Clean and optionally simplify the file:

```
$ mapshaper -i DPA_CANTONAL_S.geojson \
    -clean -verbose -simplify "80%" \
    -o DPA_CANTONAL_S.clean.geojson
```
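If you want to verify the cleaning step before uploading, mapshaper can print a summary of the output file; this is just an optional check on the file produced above:

```
# Show layer, geometry, and attribute info for the cleaned file
$ mapshaper -i DPA_CANTONAL_S.clean.geojson -info
```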
- Upload to your cluster, given an `ES_URL`, `USER`, and `PASSWORD`, and assuming the file is in the working directory:
```
$ docker run --rm -u $(id -u ${USER}):$(id -g ${USER}) \
    -v $(pwd):/data \
    osgeo/gdal:alpine-small-latest \
    ogr2ogr -nln cantonal_test -f Elasticsearch \
    "https://USER:PASSWORD@ES_URL" \
    /data/DPA_CANTONAL_S.clean.geojson
```
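Once the command finishes, you can inspect the mapping the driver created, using the standard Elasticsearch mapping API; the geometry field should come out typed as `geo_shape`:

```
# Review the field mapping generated for the new index
$ curl "https://USER:PASSWORD@ES_URL/cantonal_test/_mapping?pretty"
```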
- Check that the index `cantonal_test` has been created and populated:
$ curl "https://USER:PASSWORD@ES_URL/_cat/indices/cantonal*?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open cantonal_test bNy5iTM6SHujMAm2h9t2zA 1 1 221 0 10mb 10mb
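You can also fetch a single document to confirm the features look right (standard Elasticsearch search API; `size=1` just limits the output):

```
# Retrieve one indexed feature to eyeball the fields and geometry
$ curl "https://USER:PASSWORD@ES_URL/cantonal_test/_search?size=1&pretty"
```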
- Load it in Maps
Hope it helps