I am using some C# code (NEST) to ingest 1.4 million documents. This ran fine overnight (chunk size of 10 documents) but has now stalled again after ingesting roughly 228,000 documents. I am seeing:
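For context, my ingestion loop looks roughly like the sketch below (a simplified illustration, not my exact code; the document type, index name, and tuning values are placeholders). It uses NEST's `BulkAll` helper, which streams an enumerable to Elasticsearch in fixed-size bulk requests. Note that a chunk size of 10 is very small; larger batches per request are generally more efficient.

```csharp
using System;
using System.Collections.Generic;
using Nest;

// Hypothetical document type, for illustration only.
public class MyDoc
{
    public int Id { get; set; }
    public string Text { get; set; }
}

public static class Ingest
{
    public static void Run(IElasticClient client, IEnumerable<MyDoc> docs)
    {
        // BulkAll streams the enumerable in chunks; Size controls how many
        // documents go into each bulk request, and MaxDegreeOfParallelism
        // controls how many bulk requests are in flight at once.
        client.BulkAll(docs, b => b
                .Index("my-index")                    // assumed index name
                .Size(1000)                           // docs per bulk request
                .MaxDegreeOfParallelism(4)            // concurrent requests
                .BackOffRetries(3)                    // retry rejected items
                .BackOffTime(TimeSpan.FromSeconds(5)))
            // Block until the whole enumerable has been indexed (or the
            // timeout elapses), logging progress per completed page.
            .Wait(TimeSpan.FromHours(2), response =>
                Console.WriteLine($"Indexed page {response.Page}"));
    }
}
```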
[2019-02-22T09:36:30,942][INFO ][o.e.m.j.JvmGcMonitorService] [BwAAiDl] [gc] overhead, spent [258ms] collecting in the last [1s]
I believe this is related to GC, and I am using:
I think my machine (a Windows VM) is too small in terms of spec. I was told it can be scaled vertically. Is there an upper limit to how much memory (RAM) Elasticsearch can exploit? What are good specs for Windows machines? Should I just choose:
and how does this relate to the memory actually available to Elasticsearch? Thanks!
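In case it helps frame the question: as far as I understand, the Elasticsearch heap is set in `config/jvm.options` (values below are illustrative). The common guidance is to give the heap no more than about half of the machine's RAM, to set the minimum and maximum heap (`-Xms`/`-Xmx`) to the same value, and to stay below roughly 32 GB so the JVM can keep using compressed object pointers; the remaining RAM is not wasted, since the OS filesystem cache uses it.

```
# config/jvm.options -- illustrative values for a VM with 16 GB of RAM
-Xms8g
-Xmx8g
```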