Open deleted file handles with Elasticsearch

Hi there,

we have a problem with open file handles of deleted Lucene index files on one
Elasticsearch instance, and I am not sure how to track this down.

Setup: ES 0.19.3 with result grouping, one index, plus an FST suggester (where
I suspect the leak, as it is my code).

Due to a bug in our river, the ES instance imported roughly 60 products per
second, constantly, for several days. It imported the same documents (several
tens of thousands) every n minutes and then restarted immediately after a
30-second break.

This slowly filled up the available disk space: Lucene segment files were
deleted, but their file handles were kept open. The lsof output looks like this:

java 2695 elasticsearch 6783r REG 251,0  275 797199 /var/lib/elasticsearch/production/nodes/0/indices/products1/0/index/_f3n.nrm (deleted)
java 2695 elasticsearch 6784r REG 251,0 1293 797193 /var/lib/elasticsearch/production/nodes/0/indices/products1/0/index/_f3n.fdt (deleted)
java 2695 elasticsearch 6785r REG 251,0   12 797194 /var/lib/elasticsearch/production/nodes/0/indices/products1/0/index/_f3n.fdx (deleted)
java 2695 elasticsearch 6787r REG 251,0 2592 797166 /var/lib/elasticsearch/production/nodes/0/indices/products1/0/index/_f3r.fdt (deleted)
java 2695 elasticsearch 6790r REG 251,0   20 797174 /var/lib/elasticsearch/production/nodes/0/indices/products1/0/index/_f3r.fdx (deleted)

There are around 6,500 deleted files held open concurrently.

I fixed this by also closing the IndexReader instance I used in my
fst-suggest plugin. This somewhat changed the behaviour of the problem.

When the IndexReader was not closed, the ES instance had lots of open files
and ate all the disk space. With the change, the disk space is no longer
consumed, but there are still tons of open deleted file handles lurking
around.
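
If I understand the Lucene side correctly, the way to avoid this is to close
the old reader whenever a refresh produces a new one (IndexReader.openIfChanged()
exists since Lucene 3.5). A minimal sketch, with illustrative names:

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;

    class ReaderHolder {
        private IndexReader current;

        ReaderHolder(IndexReader initial) {
            this.current = initial;
        }

        // IndexReader.openIfChanged() returns null if the index is
        // unchanged, otherwise a NEW reader; the old one must then be
        // closed, or its (possibly already deleted) segment files stay
        // pinned by the JVM, which is exactly the lsof picture above.
        synchronized void refresh() throws IOException {
            IndexReader changed = IndexReader.openIfChanged(current);
            if (changed != null) {
                IndexReader old = current;
                current = changed;
                old.close(); // releases handles to deleted segment files
            }
        }
    }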

The in-memory structure I am using for my suggest feature contains an
IndexReader, a SpellChecker, an FSTLookup and a ShardId.
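
In code, this is roughly the following (a simplified sketch, not the exact
plugin code; the class and field names are illustrative):

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.spell.SpellChecker;
    import org.apache.lucene.search.suggest.fst.FSTLookup;
    import org.elasticsearch.index.shard.ShardId;

    // Per-shard suggest state: IndexReader and SpellChecker hold file
    // handles and must be closed; FSTLookup is purely in memory.
    class FieldSuggester {
        final IndexReader indexReader;
        final SpellChecker spellChecker;
        final FSTLookup fstLookup;
        final ShardId shardId;

        FieldSuggester(IndexReader indexReader, SpellChecker spellChecker,
                       FSTLookup fstLookup, ShardId shardId) {
            this.indexReader = indexReader;
            this.spellChecker = spellChecker;
            this.fstLookup = fstLookup;
            this.shardId = shardId;
        }
    }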

Are there any additional ES resources I need to take care of before writing
to the Lucene mailing list? :slight_smile:

Thanks for any pointers in this regard; my Lucene knowledge is not the best
:slight_smile:

--Alexander

The deleted file handles problem usually comes from not properly closing an
index reader. In Elasticsearch, you usually get a searcher and then need to
release it when you are done. Can you point me to the suggester code that
handles this?
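
For reference, the acquire/release pattern looks roughly like this against
the 0.19.x shard API (a minimal sketch; class locations simplified and error
handling omitted):

    import org.elasticsearch.index.engine.Engine;
    import org.elasticsearch.index.shard.service.IndexShard;

    // The searcher pins the segment files that are current at acquire
    // time; a missed release() keeps them open even after a merge has
    // deleted them, which shows up as "(deleted)" entries in lsof.
    void searchShard(IndexShard indexShard) {
        Engine.Searcher searcher = indexShard.searcher();
        try {
            // ... use searcher.reader() / searcher.searcher() here ...
        } finally {
            searcher.release();
        }
    }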


Hi Shay,

Sorry for the late response...


Check
https://github.com/spinscale/elasticsearch-suggest-plugin/blob/master/src/main/java/org/elasticsearch/service/suggest/SuggestService.java,
line 73, the suggest() method. I put the release() call in a finally block.

The Suggester class uses the IndexReader and is closed whenever the FST
suggester is updated, which usually happens every 10 minutes. The Suggester
class can be seen at
https://github.com/spinscale/elasticsearch-suggest-plugin/blob/master/src/main/java/org/elasticsearch/service/suggest/Suggester.java
and has a cleanUpResources() method where it closes the SpellChecker and
the IndexReader.
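
Simplified, cleanUpResources() boils down to this (a sketch, not the exact
plugin code):

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.spell.SpellChecker;

    // Close both Lucene resources so no segment file handles survive a
    // suggester refresh; the try/finally makes sure the reader is
    // closed even if closing the spellchecker throws.
    void cleanUpResources(SpellChecker spellChecker, IndexReader indexReader)
            throws IOException {
        try {
            spellChecker.close();
        } finally {
            indexReader.close();
        }
    }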

If you have any further questions, or my answer was too vague, feel free to
ask.

Thanks for your help!

--Alexander