ML inference speeds

Hello,

I have a question regarding the speed at which I can embed my documents.

I currently have an index with 10,000 documents.
My ML node currently has 4 GB of RAM and 2 vCPUs.

With this setup, embedding the 10,000 documents took 6.5 hours.
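That works out to roughly 10,000 / 6.5 ≈ 1,540 documents per hour, or about 0.43 documents per second.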

My question is: what if I increase the RAM to 16 GB, which would also increase the node to 8 vCPUs?
How much faster would the embedding be?
Is it possible to calculate this in advance?
And would the same calculation apply to an even higher upgrade?
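For reference, here is the naive estimate I came up with, assuming throughput scales linearly with the number of vCPUs. I realize inference rarely scales perfectly linearly (it presumably also depends on how the model deployment splits the vCPUs across allocations and threads), so I treat the numbers below as an upper bound on the speedup:

```python
# Naive scaling estimate: assumes embedding throughput grows linearly
# with vCPU count, which in practice is an optimistic upper bound.

DOCS = 10_000          # documents in the index
BASELINE_HOURS = 6.5   # measured wall-clock time on the current node
BASELINE_VCPUS = 2     # current ML node size

baseline_rate = DOCS / BASELINE_HOURS  # ~1,538 docs/hour

for vcpus in (2, 4, 8, 16):
    speedup = vcpus / BASELINE_VCPUS        # ideal linear speedup
    est_hours = BASELINE_HOURS / speedup    # estimated wall-clock time
    print(f"{vcpus:>2} vCPUs: ~{est_hours:.2f} h "
          f"(~{baseline_rate * speedup:,.0f} docs/hour)")
```

By that naive math, 8 vCPUs would cut the 6.5 hours down to roughly 1.6 hours, but I'd like to know how close reality gets to that.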

Kr, Chenko