We are using ES 1.2.1 on a machine with 32GB RAM, a fast SSD, and 12 cores. The
machine runs Ubuntu 14.04.x LTS.
The ES process has 12GB of RAM allocated.
We have an index into which we inserted 105 million small documents, so the ES
data folder is around 50GB in size
(we see this by running du -h . in that folder).
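For reference, this is roughly how we measure it — a minimal sketch; ES_DATA_DIR is an assumed path, adjust it to the path.data from your elasticsearch.yml:

```shell
# Report the total on-disk size of the ES data directory.
# ES_DATA_DIR is an assumption; on Ubuntu packages the default
# data path is typically /var/lib/elasticsearch.
ES_DATA_DIR="${ES_DATA_DIR:-.}"
du -sh "$ES_DATA_DIR"
```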
The new document insertion rate is rather small (i.e. 100-300 small docs per ...).
We experienced rather frequent ES OOMs (Out of Memory errors), roughly
one every 15 minutes. To lower the load on the index,
we deleted 104+ million docs (mostly small log entries) by deleting
everything in one type:
curl -XDELETE http://localhost:9200/index_xx/type_yy
so we ended up with an ES index containing only several thousand docs.
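In case it matters: one thing we considered trying after the mass delete was asking ES to expunge the deleted documents from its segments. A sketch, assuming the ES 1.x _optimize API with the only_expunge_deletes flag; we have not confirmed this helps with the OOMs:

```shell
# Sketch (not run here): request a merge that drops segments'
# deleted docs. index_xx is our index from the DELETE above.
OPTIMIZE_URL="http://localhost:9200/index_xx/_optimize?only_expunge_deletes=true"
echo "POST $OPTIMIZE_URL"   # printed for illustration; to actually run:
# curl -XPOST "$OPTIMIZE_URL"
```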
After this we started to see massive disk I/O (10-20MB/s reads and
1MB/s writes) and even more frequent OOMs (roughly
one every 7 minutes). We restarted ES after every OOM and kept monitoring the
data folder size. Over the next hour the size went down
to around 36GB, but now it's stuck there (it doesn't go down in size even after ...).
Is this a problem related to segment merging running out of memory? If so,
how can it be solved?
If not, what could be the problem?