Lucene index corruption on node restarts

We are using a small Elasticsearch cluster of three nodes, version 1.0.1.
Each node has 7 GB of RAM. Our software creates daily indexes for storing its
data; a daily index is around 5 GB. Unfortunately, for some reason,
Elasticsearch eats up all the RAM and hangs the node, even though the maximum
heap size is set to 6 GB. So we decided to use monit to restart it when memory
usage reaches 90%. That works, but sometimes we get errors like these:

[2014-03-22 16:56:04,943][DEBUG][action.search.type ] [es-00] [product-22-03-2014][0], node[jbUDVzuvS5GTM7iOG8iwzQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@687dc039]
org.elasticsearch.search.fetch.FetchPhaseExecutionException: [product-22-03-2014][0]: query[filtered(ToParentBlockJoinQuery(filtered(history.created:[1392574921000 TO *])->cache(_type:__history)))->cache(_type:product)],from[0],size[1000],sort[<custom:"history.created": org.elasticsearch.index.search.nested.NestedFieldComparatorSource@15e4ece9>]: Fetch Failed [Failed to fetch doc id [7263214]]
    at org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:230)
    at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:156)
    at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:332)
    at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:304)
    at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryAndFetchAction.java:71)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$4.run(TransportSearchTypeAction.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException: seek past EOF: MMapIndexInput(path="/opt/elasticsearch/main/nodes/0/indices/product-22-03-2014/0/index/_9lz.fdt")
    at org.apache.lucene.store.ByteBufferIndexInput.seek(ByteBufferIndexInput.java:174)
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:229)
    at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:276)
    at org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
    at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:196)
    at org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:228)
    ... 9 more
[2014-03-22 16:56:04,944][DEBUG][action.search.type ] [es-00] All shards failed for phase: [query_fetch]

According to our logs, this tends to happen when one or two nodes get
restarted. Stranger still, the same shard ends up corrupted on every node of
the cluster. Why could this happen? How can we fix it? And can you suggest how
we can fix the memory usage?
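
For the record, the monit rule we use is roughly like the sketch below (the
pidfile and init-script paths here are only placeholders, not our exact
config):

    check process elasticsearch with pidfile /var/run/elasticsearch/elasticsearch.pid
        start program = "/etc/init.d/elasticsearch start"
        stop program  = "/etc/init.d/elasticsearch stop"
        # restart the node once its memory usage crosses 90%
        if memory usage > 90% then restart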


If your node has 7 GB of RAM, you should set the heap to at most 3.5 GB, not 6 GB.

Jörg

On Sat, Mar 22, 2014 at 2:01 PM, Andrey Perminov <aperminov@gmail.com> wrote:

We are using a small Elasticsearch cluster of three nodes, version 1.0.1.
Each node has 7 GB of RAM. Our software creates daily indexes for storing its
data; a daily index is around 5 GB. Unfortunately, for some reason,
Elasticsearch eats up all the RAM and hangs the node, even though the maximum
heap size is set to 6 GB...


Could you please explain why?


Because the OS needs some air to breathe as well.

--

Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer & Consultant
Author of RavenDB in Action http://manning.com/synhershko/


If you set the heap to 6 GB on a machine with 7 GB of RAM, the whole Java
process needs 6 GB + ~2 GB = 8 GB. As you can see, this exceeds your main
memory.

50% is a good rule of thumb for machines with roughly 4-16 GB of RAM, because
the ES process makes heavy use of the OS filesystem buffers, and the OS relies
on those buffers for fast I/O. If you have 7 GB, the rule-of-thumb breakdown
is: 3.5 GB for the ES heap, 2 GB for ES process buffers and internals, 1 GB
for the OS kernel, and 1 GB for filesystem buffers. With that split, the OS
can work at the best performance possible.
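
For example, with the standard packages the heap is set through the
ES_HEAP_SIZE variable (which sets both -Xms and -Xmx); the exact file depends
on your distribution, so take the path below as an assumption:

    # /etc/default/elasticsearch (Debian) or /etc/sysconfig/elasticsearch (RPM)
    # half of the 7 GB machine goes to the JVM heap, the rest stays with the OS
    ES_HEAP_SIZE=3500m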

Jörg


OK, thanks, I'll try setting the heap to 3.5 GB. But it is the Java process
itself that eats up almost all the memory, up to 96%. Anyway, I'll try it and
write back if it helps.
Do you know why the index might become corrupted on both the primary and the replica?


If main memory gets too low, the JVM cannot allocate memory for reads and
writes and cannot shut down cleanly, and this may also corrupt Lucene/ES data.

Jörg


Setting the heap to half of the RAM really decreased the number of OOM faults.
Strangely, they still occur sometimes, but very, very rarely.
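
When they do, a quick way to see whether the heap itself is the culprit is the
nodes stats API, for example:

    # per-node JVM heap usage, pretty-printed
    curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'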
