Repeating errors

I have started receiving the following error messages on my
elasticsearch server after about 24 hours of uptime:

[2010-12-01 10:45:47,579][WARN ][index.shard.service ] [Forgotten One] [production][1] Failed to perform scheduled engine refresh
org.elasticsearch.index.engine.RefreshFailedEngineException: [production][1] Refresh failed
        at org.elasticsearch.index.engine.robin.RobinEngine.refresh(RobinEngine.java:376)
        at org.elasticsearch.index.shard.service.InternalIndexShard$EngineRefresher.run(InternalIndexShard.java:526)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.FileNotFoundException: /mnt/elasticsearch/nodes/0/indices/production/1/index/_ci9.fdx (Too many open files)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
        at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
        at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
        at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
        at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
        at org.elasticsearch.index.store.support.AbstractStore$StoreDirectory.openInput(AbstractStore.java:268)
        at org.apache.lucene.index.FieldsReader.<init>(FieldsReader.java:109)
        at org.apache.lucene.index.SegmentReader$CoreReaders.openDocStores(SegmentReader.java:291)
        at org.apache.lucene.index.SegmentReader.openDocStores(SegmentReader.java:612)
        at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:624)
        at org.apache.lucene.index.IndexWriter$ReaderPool.getReadOnlyClone(IndexWriter.java:574)
        at org.apache.lucene.index.DirectoryReader.<init>(DirectoryReader.java:150)
        at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(ReadOnlyDirectoryReader.java:36)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:410)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:374)
        at org.apache.lucene.index.DirectoryReader.doReopenFromWriter(DirectoryReader.java:377)
        at org.apache.lucene.index.DirectoryReader.doReopen(DirectoryReader.java:388)
        at org.apache.lucene.index.DirectoryReader.reopen(DirectoryReader.java:355)
        at org.elasticsearch.index.engine.robin.RobinEngine.refresh(RobinEngine.java:360)
        ... 10 more

[2010-12-01 10:45:47,596][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to initialize an accepted socket.
org.elasticsearch.common.netty.channel.ChannelException: Failed to create a selector.
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.register(NioWorker.java:104)
        at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:280)
        at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:247)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Too many open files
        at sun.nio.ch.IOUtil.initPipe(Native Method)
        at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
        at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
        at java.nio.channels.Selector.open(Selector.java:209)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.register(NioWorker.java:102)
        ... 7 more

Once these error messages show up, all connections to the service fail, which causes problems for the clients that are trying to connect. The Elasticsearch server is running 0.13 on an AWS EC2 medium instance, and the clients are PHP applications using Elastica. We balance our web traffic across three servers, and those servers also run as many as 8 background PHP processes that pull in data from various sources and add it to Elasticsearch.

Is the first error the cause of the second one? Is there a setting
somewhere I need to change that could alleviate this issue?

Hi Lee

Here's the cause of your error

Caused by: java.io.FileNotFoundException: /mnt/elasticsearch/nodes/0/indices/production/1/index/_ci9.fdx (Too many open files)

You need to raise the limit on open files, for example to 30000:

ulimit -n 30000
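
Note that ulimit only affects the shell it is run in and the processes started from it, so it needs to be applied before Elasticsearch is launched (e.g. in the init script). To confirm what the running process actually has, something along these lines should work on Linux; the pgrep pattern is just a guess at how your process is named:

ulimit -n                                      # limit for the current shell
ES_PID=$(pgrep -f elasticsearch | head -n1)    # assumed way to find the Elasticsearch process
ls /proc/$ES_PID/fd | wc -l                    # descriptors it currently has open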

See the notes in the Elasticsearch README on GitHub about setting ulimit permanently.
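
As a rough sketch of the permanent version (assuming a Linux box using PAM limits, and that Elasticsearch runs as a dedicated user, here called elasticsearch as a placeholder), you would add to /etc/security/limits.conf:

elasticsearch  soft  nofile  30000
elasticsearch  hard  nofile  30000

The new limit only takes effect for that user's next session, so restart Elasticsearch afterwards.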

clint

Thank you. I have made that change and will keep an eye on the number of open files.

Lee
