Weird Exception

Hi, I'm using Elasticsearch 1.3.0 on Java 1.8 update 5, configured for 32 GB.

[2014-08-11 09:23:16,258][WARN ][cluster.action.shard ] [Tyrant] [.marvel-2014.08.04][0] received shard failed for [.marvel-2014.08.04][0], node[TsoETYSERg-DNDpiDxpxKA], [R], s[INITIALIZING], indexUUID [gxfk0pCiQg2QCJRyUWAYTw], reason [Failed to create shard, message [IndexShardCreationException[[.marvel-2014.08.04][0] failed to create shard]; nested: IOException[directory '/.../.../elasticsearch-1.3.0/data/esdashboard/nodes/0/indices/.marvel-2014.08.04/0/index' exists and is a directory, but cannot be listed: list() returned null]; ]]

Also, 1 of the 32 cores is pegged at 100% CPU.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/13f56aca-def1-4dcd-b4d5-e734cfc6c8fd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

And I see this also...

[2014-08-11 09:31:35,233][WARN ][cluster.action.shard ] [Scarlet Spiders] [.marvel-2014.08.04][0] sending failed shard for [.marvel-2014.08.04][0], node[TsoETYSERg-DNDpiDxpxKA], [R], s[INITIALIZING], indexUUID [gxfk0pCiQg2QCJRyUWAYTw], reason [engine failure, message [corrupted preexisting index][FileSystemException[/home/elasticsearch/elasticsearch-1.3.0/data/esdashboard/nodes/0/indices/.marvel-2014.08.04/0/index/_2bf.si: Too many open files]]]
[2014-08-11 09:31:35,393][WARN ][index.engine.internal ] [Scarlet Spiders] [.marvel-2014.07.31][0] failed engine [corrupted preexisting index]
[2014-08-11 09:31:35,394][WARN ][indices.cluster ] [Scarlet Spiders] [.marvel-2014.07.31][0] failed to start shard
java.nio.file.FileSystemException: /home/elasticsearch/elasticsearch-1.3.0/data/esdashboard/nodes/0/indices/.marvel-2014.07.31/0/index/_3ns.si: Too many open files
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
    at java.nio.channels.FileChannel.open(FileChannel.java:287)
    at java.nio.channels.FileChannel.open(FileChannel.java:335)
    at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
    at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:172)
    at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
    at org.elasticsearch.index.store.DistributorDirectory.openInput(DistributorDirectory.java:130)
    at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:113)
    at org.apache.lucene.codecs.lucene46.Lucene46SegmentInfoReader.read(Lucene46SegmentInfoReader.java:49)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:361)
    at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:457)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:907)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:753)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:453)
    at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:96)
    at org.elasticsearch.index.store.Store.readLastCommittedSegmentsInfo(Store.java:124)
    at org.elasticsearch.index.store.Store.access$300(Store.java:74)
    at org.elasticsearch.index.store.Store$MetadataSnapshot.buildMetadata(Store.java:442)
    at org.elasticsearch.index.store.Store$MetadataSnapshot.<init>(Store.java:433)
    at org.elasticsearch.index.store.Store.getMetadata(Store.java:144)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:724)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:576)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:183)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:444)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2014-08-11 09:31:35,397][WARN ][cluster.action.shard ] [Scarlet Spiders] [.marvel-2014.07.31][0] sending failed shard for [.marvel-2014.07.31][0], node[TsoETYSERg-DNDpiDxpxKA], [R], s[INITIALIZING], indexUUID [hsU3ZVo3T0OlYreN3mZ5aQ], reason [Failed to start shard, message [FileSystemException[/home/elasticsearch/elasticsearch-1.3.0/data/esdashboard/nodes/0/indices/.marvel-2014.07.31/0/index/_3ns.si: Too many open files]]]
[2014-08-11 09:31:35,399][WARN ][cluster.action.shard ] [Scarlet Spiders] [.marvel-2014.07.31][0] sending failed shard for [.marvel-2014.07.31][0], node[TsoETYSERg-DNDpiDxpxKA], [R], s[INITIALIZING], indexUUID [hsU3ZVo3T0OlYreN3mZ5aQ], reason [engine failure, message [corrupted preexisting index][FileSystemException[/home/elasticsearch/elasticsearch-1.3.0/data/esdashboard/nodes/0/indices/.marvel-2014.07.31/0/index/_3ns.si: Too many open files]]]
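Those "Too many open files" errors mean the Elasticsearch process has exhausted its file-descriptor limit. A quick way to confirm is to compare the limit the node's shell actually has against what you expect (a minimal POSIX-shell check; the 65536 target is just an illustrative value, not something required by Elasticsearch):

```shell
#!/bin/sh
# Compare the shell's open-file (nofile) limit against a target value.
# 65536 is only an example target; adjust for your cluster.
want=65536
have=$(ulimit -n)
if [ "$have" != "unlimited" ] && [ "$have" -lt "$want" ]; then
    echo "nofile limit too low: $have < $want -- 'Too many open files' is likely"
else
    echo "nofile limit looks ok: $have"
fi
```

If this reports a low number (the Linux default soft limit is often 1024), the usual fix is raising the nofile limit for the user running Elasticsearch and restarting the node.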


Oops, I switched Linux boxes and forgot to set the file limit. Let me see if
that works :slight_smile:


OK, I set my sysctl and user file limits to 65536.
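For anyone hitting the same wall, here is roughly what "set my sysctl and user limits to 65536" translates to on a typical Linux box. The file paths and the elasticsearch user name are assumptions for illustration, not details taken from this setup:

```shell
# /etc/security/limits.conf -- per-user open-file limits
# (takes effect on the next login session for that user):
#   elasticsearch  soft  nofile  65536
#   elasticsearch  hard  nofile  65536
#
# /etc/sysctl.conf -- system-wide cap on open file handles:
#   fs.file-max = 65536

# Verify what the current shell actually got:
ulimit -n

# If the node is up, the nodes info API reports what the JVM process got
# (max_file_descriptors); host and port here are assumptions:
#   curl -s 'localhost:9200/_nodes/process?pretty'
```

Note that limits.conf only applies to new login sessions, so the Elasticsearch process has to be restarted from a fresh session for the new limit to take effect.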


OK, I set my sysctl and user limits to 65536, restarted the node, and it seems
to be recovering so far...
