IndicesStatsRequest + IndicesStatusRequest failures during normal operation

Hi all,

I'm seeing the following occasionally during normal operation, when trying to add documents. Automatic index creation is enabled. Is this expected, or should I report a bug?

[2012-10-10 14:00:11,259][INFO ][cluster.metadata ] [Iron Man 2020] [el-2011-11-21-0000] creating index, cause [auto(index api)], shards [1]/[0], mappings [thread]
[2012-10-10 14:00:11,294][DEBUG][action.admin.indices.stats] [Iron Man 2020] [el-2011-11-21-0000][0], node[LxU0GSjRQtmZM0h8vBxuBg], [P], s[INITIALIZING]: Failed to execute [org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest@41201158]
org.elasticsearch.transport.RemoteTransportException: [Crimson Cowl][inet[/192.168.1.11:9301]][indices/stats/s]
Caused by: org.elasticsearch.indices.IndexMissingException: [el-2011-11-21-0000] missing
    at org.elasticsearch.indices.InternalIndicesService.indexServiceSafe(InternalIndicesService.java:244)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:144)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:53)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$ShardTransportHandler.messageReceived(TransportBroadcastOperationAction.java:398)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$ShardTransportHandler.messageReceived(TransportBroadcastOperationAction.java:384)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:268)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
[2012-10-10 14:00:11,295][DEBUG][action.admin.indices.status] [Iron Man 2020] [el-2011-11-21-0000][0], node[LxU0GSjRQtmZM0h8vBxuBg], [P], s[INITIALIZING]: Failed to execute [org.elasticsearch.action.admin.indices.status.IndicesStatusRequest@1b3f459d]
org.elasticsearch.transport.RemoteTransportException: [Crimson Cowl][inet[/192.168.1.11:9301]][indices/status/s]
Caused by: org.elasticsearch.indices.IndexMissingException: [el-2011-11-21-0000] missing
    at org.elasticsearch.indices.InternalIndicesService.indexServiceSafe(InternalIndicesService.java:244)
    at org.elasticsearch.action.admin.indices.status.TransportIndicesStatusAction.shardOperation(TransportIndicesStatusAction.java:152)
    at org.elasticsearch.action.admin.indices.status.TransportIndicesStatusAction.shardOperation(TransportIndicesStatusAction.java:59)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$ShardTransportHandler.messageReceived(TransportBroadcastOperationAction.java:398)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$ShardTransportHandler.messageReceived(TransportBroadcastOperationAction.java:384)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:268)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
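
For context, the indexing path looks roughly like the sketch below (simplified; the client address, document field, and class name are placeholders, not my actual code). The index named in the log does not exist when the request is sent, so it gets auto-created, and a stats/status call that arrives in the same window can reach the primary shard while it is still INITIALIZING:

    import org.elasticsearch.client.Client;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;
    import org.elasticsearch.common.xcontent.XContentFactory;

    public class AutoCreateSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the cluster (address/port are placeholders).
            Client client = new TransportClient()
                    .addTransportAddress(new InetSocketTransportAddress("192.168.1.11", 9300));

            // Index into a date-based index that does not exist yet; with automatic
            // index creation enabled, the cluster creates it as a side effect.
            client.prepareIndex("el-2011-11-21-0000", "thread")
                    .setSource(XContentFactory.jsonBuilder()
                            .startObject()
                            .field("subject", "example") // placeholder field
                            .endObject())
                    .execute()
                    .actionGet();

            // A stats request arriving around the same moment (e.g. from monitoring)
            // can hit the still-initializing shard, which is when the DEBUG failures
            // above show up.
            client.admin().indices().prepareStats("el-2011-11-21-0000").execute().actionGet();

            client.close();
        }
    }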

--

This might happen; it's not really a problem. Open an issue? We can do better at handling those failures and not logging them (even as DEBUG).
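
If the stats/status calls are coming from your own code, one rough way to avoid the window (just a sketch, not something the server does for you) is to wait for the freshly auto-created index to reach at least yellow health before asking for stats:

    // Block until the new index's primary shard is allocated, then ask for stats.
    client.admin().cluster().prepareHealth("el-2011-11-21-0000")
            .setWaitForYellowStatus()
            .execute()
            .actionGet();

    client.admin().indices().prepareStats("el-2011-11-21-0000").execute().actionGet();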

--
