We have been running 0.16 for a long time with a 20GB index and essentially no problems.
Now we have 0.19.7 on a new cluster, and it looks like we've run into a
problem we didn't have before. We don't experiment with settings much, so
everything is at its default; we only adjust the memory settings (900MB heap
on a fully dedicated 1024MB VPS, no swapping there, so that seems OK).
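For reference, the heap is set the usual way for 0.19.x via the standard startup script's environment variable (the exact service file is an assumption on our side, but the variable itself is the documented one):

```shell
# Set both -Xms and -Xmx to 900MB before starting elasticsearch
# (leaves ~124MB of the 1024MB VPS for the OS and off-heap use).
export ES_HEAP_SIZE=900m
./bin/elasticsearch
```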
The problem is that no matter what we try, indexing performance is terrible.
The best we could get is bulk indexing in 100-document portions in a pure
test setup (one server, one shard, 0 replicas, no network involved, curl to
localhost). The time spent on each 100-doc chunk (below) is a) huge and
b) growing significantly with each successive portion.
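For what it's worth, each 100-doc portion goes to the `_bulk` endpoint as a single request. A minimal sketch of how we assemble the body (index and type names here are made up; the real documents are of course different):

```python
import json

def bulk_body(docs, index="test", doc_type="doc"):
    """Build an NDJSON _bulk body: one action line plus one source
    line per document, ending with the newline the API requires."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# 100 docs -> 200 lines (one action line + one source line each),
# then POSTed with: curl -s -XPOST 'localhost:9200/_bulk' --data-binary @chunk.json
docs = [{"id": i, "text": "doc %d" % i} for i in range(100)]
print(len(bulk_body(docs).splitlines()))
```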
The strangest part is that feeding a 1.5MB JSON file to ES can add
~200MB of RAM usage.
We are out of ideas.
It must be something simple, maybe related to the upgrade. Some setting?