I have an Elasticsearch cluster with a single node and one shard on that node. I am performing bulk inserts at 10,000 documents per bulk request. Everything goes fine, but after a while I see that nearly 40 GB of RAM is in use, even though I started the node with -Xmx4G -Xms4G. My server has crashed three times, and I have no idea how to cap the memory being used; top shows the 'java' process eating up all the memory. Any help is appreciated.
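For reference, my bulk requests look roughly like this (index, type, and field names here are made up, not my real mapping):

```shell
# Bulk API payload is NDJSON: an action line, then the document, one per line.
cat > /tmp/bulk.ndjson <<'EOF'
{ "index" : { "_index" : "myindex", "_type" : "doc", "_id" : "1" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "myindex", "_type" : "doc", "_id" : "2" } }
{ "field1" : "value2" }
EOF

# Send it to the _bulk endpoint (|| true so the demo doesn't abort
# when no node is listening on localhost:9200).
curl -s -H 'Content-Type: application/x-ndjson' \
     -XPOST 'localhost:9200/_bulk' --data-binary @/tmp/bulk.ndjson || true
```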
That's the OS caching things, not ES.
If ES is crashing, then it may be because it needs more heap.
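If you do need to raise the heap, where you set it depends on your Elasticsearch version; a sketch, with 8g as an example size only (the usual advice is to keep heap at or below ~50% of physical RAM so the OS still has room for the filesystem cache):

```shell
# ES 5.x and later: edit config/jvm.options and set min and max the same:
#   -Xms8g
#   -Xmx8g

# ES 1.x / 2.x: set the heap via an environment variable before starting:
export ES_HEAP_SIZE=8g
```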
How do I limit how much memory the OS is using for caching? I'm new to Java, so sorry for the newbie question.
That's not something in Java; it's an OS-level thing, and you will need to dig into the docs for your OS to find out.
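On Linux you can see the split yourself: the "Cached" memory is reclaimable page cache that the kernel gives back automatically under pressure, not memory the JVM has leaked. A quick Linux-specific check:

```shell
# Resident application memory vs. reclaimable filesystem cache:
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo

# You normally should NOT need this, since the kernel reclaims cache on
# demand, but clean caches can be dropped manually (as root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
```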