When I run lsof on the Elasticsearch java process it often shows that
some of the open files have been deleted. Do Elasticsearch and/or
Lucene really need to keep these deleted files open so they can
continue to use them, or is this just an oversight/bug?
Yes, those deleted files need to be kept around if there are open
search operations still going against them. Once those searches are
done, the files will actually be deleted (by the OS). It's only really
a file handle leak if you see the count increase over time.
-shay.banon
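
To make that behavior concrete, here is a minimal Java sketch (the file name and contents are made up for illustration) of the POSIX semantics described above: a file unlinked while a descriptor is still open stays readable through that descriptor, and the OS only reclaims the space once the last descriptor is closed. On Linux the delete succeeds while the stream is open; on Windows it would fail.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class DeletedButOpen {
        public static void main(String[] args) throws IOException {
            Path path = Files.createTempFile("segment", ".bin");
            Files.write(path, "index data".getBytes());

            try (FileInputStream in = new FileInputStream(path.toFile())) {
                // Unlink the file while the stream still holds an open
                // descriptor; lsof would now show this fd as (deleted).
                Files.delete(path);

                // The data is still readable through the existing descriptor.
                byte[] buf = new byte[16];
                int n = in.read(buf);
                System.out.println("read after delete: " + new String(buf, 0, n));
            }
            // Stream closed: last descriptor is gone, so the OS frees the blocks.
        }
    }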
That makes sense, but there are no open search operations on that
index, or on any of the indexes, at least not any searches that were
initiated from an API. That's from a fresh run where I stopped ES,
deleted /work, restarted ES, and used the HTTP bulk index API to put
data into a couple hundred indexes. I haven't done any searches yet;
I'm just poking around to see what kind of ulimit I might need. The
fds have been showing up as (deleted) for over an hour, and I'm
wondering if the process is ever going to close them.

I re-indexed the data into the same indexes and the fds were closed,
so either:

1. The files were closed properly by the application at some point, or
2. There is a leak, and the files were closed when Java GC'd them.
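
For anyone doing the same kind of poking, here is a rough Java equivalent of what lsof is reporting. It is a Linux-specific sketch under stated assumptions: it walks /proc/self/fd, whose symlink targets the kernel suffixes with " (deleted)" once the file is unlinked, and it only sees the current process rather than the Elasticsearch one.

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class DeletedFds {
        public static void main(String[] args) throws IOException {
            int deleted = 0;
            // Each entry in /proc/self/fd is a symlink to the open file; the
            // kernel appends " (deleted)" to the target once it is unlinked.
            try (DirectoryStream<Path> fds =
                     Files.newDirectoryStream(Paths.get("/proc/self/fd"))) {
                for (Path fd : fds) {
                    try {
                        String target = Files.readSymbolicLink(fd).toString();
                        if (target.endsWith(" (deleted)")) {
                            deleted++;
                            System.out.println(fd.getFileName() + " -> " + target);
                        }
                    } catch (IOException e) {
                        // The fd may have closed between listing and readlink.
                    }
                }
            }
            System.out.println(deleted + " deleted-but-open descriptors");
        }
    }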
Internally, there is a scheduled refresh of the indices that might keep
those (deleted) files around, and I'm not sure when the operating system
actually decides to delete them. Tests run under load (both indexing and
searching) show that the number of open files does not increase over
time, but stays at a steady number (+/- a delta). Usually, I recommend
setting the max open files to 16k or even 32k, since open files include
sockets and other descriptors as well.

If you do find in your tests that the count keeps increasing over time
without stopping, then I can have a look at it and fix it if there is a
leak.
-shay.banon
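
One way to run that kind of check yourself, sketched under the assumption of a HotSpot-style JVM on a Unix platform (com.sun.management.UnixOperatingSystemMXBean is a non-standard JDK extension): sample the process-wide descriptor count and watch whether it plateaus or climbs. The limit itself is raised outside the JVM, e.g. ulimit -n 32000 in the shell that starts the node.

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdWatcher {
        public static void main(String[] args) throws InterruptedException {
            UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            System.out.println("max open files: " + os.getMaxFileDescriptorCount());

            // Sample once a minute: a healthy process holds steady
            // (+/- a delta), a leaking one climbs toward the limit.
            while (true) {
                System.out.println("open fds: " + os.getOpenFileDescriptorCount());
                Thread.sleep(60_000);
            }
        }
    }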
I'm not so concerned about the OS freeing the space. For completeness
I'll just mention that happens when a) the file has been removed
(i.e., there are no more hard links), and b) no process has the file
open.

My concern was/is that Java is removing the file but unintentionally
leaving it open until GC calls a finalizer that closes it, in which
case GC tends to keep the number of these fds from increasing
indefinitely. I'm not saying that's happening; I don't have any direct
evidence. It's just that I've seen this before, and it caught my eye
and made me wonder.
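
That hypothesis can be exercised directly. A hedged sketch (behavior is JDK-dependent: older JDKs closed a forgotten FileInputStream from a finalizer, newer ones use a Cleaner, and neither is guaranteed to run promptly, so this only demonstrates the tendency, not a contract):

    import java.io.FileInputStream;
    import java.lang.management.ManagementFactory;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class GcClosesLeakedFds {
        public static void main(String[] args) throws Exception {
            UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();

            Path path = Files.createTempFile("leak", ".tmp");
            for (int i = 0; i < 100; i++) {
                // Deliberately leak: open, never close, drop the reference.
                new FileInputStream(path.toFile());
            }
            Files.delete(path);  // the fds now show as (deleted) in lsof
            System.out.println("after leak: " + os.getOpenFileDescriptorCount());

            // GC may close the orphaned descriptors as a side effect;
            // this is opportunistic cleanup, not something to rely on.
            System.gc();
            Thread.sleep(1000);
            System.out.println("after gc:   " + os.getOpenFileDescriptorCount());
        }
    }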
As far as I know, there is nothing like that in Lucene; it cleans up
once things are properly closed.
-shay.banon