Maps crashes Elastic Cloud Kibana Instance on GCP

I'm evaluating Elastic Cloud for a geospatial application that maps real estate parcel data to geographic features like flood zones and wetlands. I'm using Kibana Maps to display the result.

I'm using a trial account and finding it to be very unreliable. The Kibana instance crashes multiple times a day and is almost guaranteed to crash if 2 concurrent users attempt to access it. At that point they both get {"ok":false,"message":"The instance rejected the connection."} and have to wait for the Kibana instance to be re-provisioned.

I notice trial accounts are limited to 1GB of memory for Kibana. Would adding more memory fix the problem, or just make the crashes less frequent? I guess what I'm really asking is: does Kibana Maps leak memory?

Hi,
I would try increasing the memory on the Kibana GCP instance. It would also be good to know more about the requests: if you open the network tab of your browser devtools, you should see an _msearch request. Does this request look the way you'd expect? Is ES returning any results? ES and Kibana logs would help as well. Also, how big is your data?
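
If it helps, a quick way to answer the data-size question is the cat indices API from Kibana Dev Tools. A minimal sketch:

    # List each index with its health, document count, and on-disk size,
    # sorted largest first.
    GET _cat/indices?v&h=index,health,docs.count,store.size&s=store.size:desc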

I'll ask for some additional help from the Maps team here, tagging @Nathan_Reese and @thomasneirynck for more input.

Thanks
Rashmi

hi @pgoldtho,

thanks for flagging.

@rashmi's suggestions are a good first step.

When you say the kibana instance crashes:
- Does it only crash when one or more users are actively using Maps at the same time?
- Or does it also crash when Kibana is sitting idle?

As for it being caused by the Maps app, it would be really useful to know what sort of data you are trying to visualize. I assume the parcels, flood zones and wetlands are all indexed as documents with a geo_shape field in Elasticsearch? Are you bringing those in as "Document layers"?
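
For reference, I'd expect mappings roughly like this (index and field names here are just placeholders):

    PUT /flood_zones
    {
      "mappings": {
        "properties": {
          "zone_id":  { "type": "keyword" },
          "boundary": { "type": "geo_shape" }
        }
      }
    }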

  • How many documents are you trying to visualize at the same time?
    • Are you hitting the default 10k limit of Elasticsearch?
      • If so, did you increase this limit on the server? (See the sketch just after this list.)
  • How large are the individual features (e.g. tens, hundreds, or thousands of vertices per polygon)?
  • How many layers have you added to the map?
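
If you are hitting the 10k limit, it can be raised per index with the index settings API. A minimal sketch (the index name is a placeholder, and note that a larger window increases memory pressure on both ES and Kibana):

    PUT /flood_zones/_settings
    {
      "index": {
        "max_result_window": 20000
      }
    }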

If these numbers are large, increasing the memory will likely help.

If you can share any example data, that would be really useful too.

It only happens with Maps and is reasonably easy to reproduce. For example, Kibana usually crashes if I open the same map in 3 different windows and reload them all. You don't get much additional information from developer tools; the XHR requests fail with a 504 error followed by a number of 502s.

The maps display document layers with geo_shape fields. Here are a couple of screenshots to show the type of information.

I'm not able to increase the memory on the Kibana instance; the control for doing this is greyed out. I suspect that's a limit of the trial account. How confident are you that increasing the memory will fix the problem rather than just shift it (e.g. from 2 to 10 concurrent users)?

I'm happy to share the data. Not sure how to do that. There are 1.3 million documents spread over 5 indexes.

@pgoldtho thanks for the quick reply

> I'm happy to share the data. Not sure how to do that. There are 1.3 million documents spread over 5 indexes.

Is that data freely available somewhere (e.g. as GeoJson)?

Or even easier, could you perhaps create an Elasticsearch snapshot? (See "Snapshot and restore" in the Elasticsearch Guide.) That way we can restore it on our end.
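
On Elastic Cloud the built-in repository is usually named found-snapshots, so something along these lines should work from Dev Tools (snapshot and index names are placeholders):

    PUT /_snapshot/found-snapshots/maps-debug-snapshot?wait_for_completion=true
    {
      "indices": "parcels,flood_zones,wetlands",
      "include_global_state": false
    }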

@thomasneirynck

It's public record data from multiple sources (https://github.com/pgoldtho/datasets). Some of it was pre-processed prior to loading. The instance is running in Elastic Cloud so snapshots are taken automatically. Is there a way I can grant you access to pull them?

Increasing Kibana memory from 1GB to 2GB seems to have helped. I can reload a map in 3 separate windows without crashing the instance.


Hasn't crashed in a couple of days since increasing the memory.
