Hardware spec

I am using some C# code (NEST) to ingest 1.4 million documents; a sketch of the kind of bulk loop I'm running is at the end of this post. This ran fine overnight (with a chunk size of 10 documents) but has now stalled again after ingesting roughly 228,000 documents. I am getting:

[2019-02-22T09:36:30,942][INFO ][o.e.m.j.JvmGcMonitorService] [BwAAiDl] [gc][65052] overhead, spent [258ms] collecting in the last [1s]

I believe this is related to GC. My current heap settings are:

-Xms10g
-Xmx10g
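
For reference, these are typically set in the config/jvm.options file of the Elasticsearch install (assuming a standard setup rather than the ES_JAVA_OPTS environment variable); the relevant lines look like:

    # config/jvm.options: min and max heap are conventionally kept equal
    -Xms10g
    -Xmx10g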

I think my machine (a Windows VM) is too small in terms of spec, and I was told it can be scaled vertically. Is there an upper limit to the amount of memory (RAM) Elasticsearch can exploit? What are good specs for Windows machines? Should I just choose:

-Xms20g
-Xmx20g

and how does the heap size relate to the machine's actual memory? Thanks!
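
For context, here is a minimal sketch of the kind of bulk loop I'm running, using NEST's BulkAll helper; MyDocument, the my-index index name, and the generated document source are placeholders rather than my actual code:

    using System;
    using System.Linq;
    using Nest;

    // Placeholder document type standing in for my real documents.
    public class MyDocument
    {
        public int Id { get; set; }
        public string Content { get; set; }
    }

    public static class BulkIngest
    {
        public static void Main()
        {
            var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
                .DefaultIndex("my-index"); // placeholder index name
            var client = new ElasticClient(settings);

            // Placeholder source; the real run streams 1.4 million documents
            // from my data store.
            var documents = Enumerable.Range(1, 1400000)
                .Select(i => new MyDocument { Id = i, Content = "doc " + i });

            // BulkAll sends the documents in fixed-size bulk requests;
            // Size(10) is the chunk size mentioned above.
            var observable = client.BulkAll(documents, b => b
                .Index("my-index")
                .Size(10)
                .MaxDegreeOfParallelism(1)
                .BackOffRetries(2)
                .BackOffTime(TimeSpan.FromSeconds(5)));

            // Block until the run finishes, logging each completed page.
            observable.Wait(TimeSpan.FromHours(12),
                next => Console.WriteLine("Indexed page " + next.Page));
        }
    }

In the real code the documents come from our data store, but the BulkAll settings are the part relevant to this question.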
