Failed to execute bulk item (index) index

Dear all,

**Elasticsearch often stops responding.**

**When it has no response, I cannot stop the service with "service elasticsearch stop"; I can only "kill -9" it. Please help me.**
**I also cannot open IP:9200.**
**Here are some of the logs:**

[2016-06-06 10:33:30,280][DEBUG][action.bulk ] [Skullfire] [mars1-2016.06.05][2] failed to execute bulk item (index) index {[mars1-2016.06.05][tomcat][AVUjhuGSAZCFCIZWhp3G], source[{"@timestamp":"2016-06-05T16:04:27.798Z","beat":{"hostname":"MARS1","name":"MARS1"},"count":1,"fields":null,"input_type":"log","message":"127.0.0.1 - - [06/Jun/2016:00:00:00 +0800] \"GET /alfresco/service/api/solr/aclchangesets?fromTime=1462053874624\u0026toTime=1462975474624\u0026maxResults=2000 HTTP/1.1\" 200 128","offset":4799,"source":"/app/tomcat/logs/localhost_access_log2016-06-06.txt","type":"tomcat"}]}
ProcessClusterEventTimeoutException[failed to process cluster event (put-mapping [tomcat]) within 30s]
    at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:349)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-06-06 10:33:34,747][WARN ][cluster.service ] [Skullfire] cluster state update task [shard-started ([mars2-2016.06.06][0], node[IfN3KNH_SEu0nz8eUGCamA], [P], v[1], s[INITIALIZING], a[id=BXA1MQUCRwGeuygEvF_HXQ], unassigned_info[[reason=INDEX_CREATED], at[2016-06-06T02:24:15.014Z]]), reason [after recovery from store],shard-started ([mars2-2016.06.06][1], node[IfN3KNH_SEu0nz8eUGCamA], [P], v[1], s[INITIALIZING], a[id=ZjfT926vTDGYoj1_ohKsOA], unassigned_info[[reason=INDEX_CREATED], at[2016-06-06T02:24:15.014Z]]), reason [after recovery from store],shard-started ([al030-2016.05.27][4], node[IfN3KNH_SEu0nz8eUGCamA], [P], v[11], s[INITIALIZING], a[id=B5C3PRJyT7KBL8OjyRk_zQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-06T02:22:38.427Z]]), reason [after recovery from store]] took 1.6m above the warn threshold of 30s
[2016-06-06 10:33:37,858][WARN ][transport ] [Skullfire] Received response for a request that has timed out, sent [30989ms] ago, timed out [15360ms] ago, action [cluster:monitor/nodes/stats[n]], node [{Skullfire}{IfN3KNH_SEu0nz8eUGCamA}{172.16.2.32}{BL043.xwjrfw.cn/172.16.2.32:9300}], id [22296]
[2016-06-06 10:34:05,985][DEBUG][action.admin.indices.mapping.put] [Skullfire] failed to put mappings on indices [[mars1-2016.06.05]], type [tomcat]
ProcessClusterEventTimeoutException[failed to process cluster event (put-mapping [tomcat]) within 30s]
    at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:349)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-06-06 10:34:05,986][DEBUG][action.bulk ] [Skullfire] [mars1-2016.06.05][1] failed to execute bulk item (index) index {[mars1-2016.06.05][tomcat][AVUjhuGSAZCFCIZWhp22], source[{"@timestamp":"2016-06-05T16:04:27.798Z","beat":{"hostname":"MARS1","name":"MARS1"},"count":1,"fields":null,"input_type":"log","message":"127.0.0.1 - - [06/Jun/2016:00:00:00 +0800] \"GET /alfresco/service/api/solr/aclchangesets?fromTime=1461132274624\u0026toTime=1461135874624\u0026maxResults=2000 HTTP/1.1\" 200 128","offset":1787,"source":"/app/tomcat/logs/localhost_access_log2016-06-06.txt","type":"tomcat"}]}
ProcessClusterEventTimeoutException[failed to process cluster event (put-mapping [tomcat]) within 30s]
    at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:349)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-06-06 10:34:05,989][DEBUG][action.admin.indices.mapping.put] [Skullfire] failed to put mappings on indices [[mars1-2016.06.05]], type [tomcat]
ProcessClusterEventTimeoutException[failed to process cluster event (put-mapping [tomcat]) within 30s]
    at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:349)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

[2016-06-06 10:51:20,230][WARN ][rest.suppressed ] /_stats Params: {}
java.lang.OutOfMemoryError: Java heap space
    at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
    at java.lang.StringBuilder.<init>(StringBuilder.java:89)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:277)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:222)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:79)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
    at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
    at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1226)
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.stats(AbstractClient.java:1546)
    at org.elasticsearch.rest.action.admin.indices.stats.RestIndicesStatsAction.handleRequest(RestIndicesStatsAction.java:102)
    at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)
    at org.elasticsearch.rest.RestController.executeHandler(RestController.java:205)
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)
    at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)
    at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)
    at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:449)
    at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:61)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)

You should start by addressing that OutOfMemoryError. What heap size are you running with?
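For reference, here is a minimal sketch (shell; it assumes a Linux package install and that 172.16.2.32:9200 from your logs is the node's HTTP address) of how you could check what heap the node is actually running with:

```
# Inspect the running JVM's -Xms/-Xmx arguments:
ps -ef | grep '[e]lasticsearch' | tr ' ' '\n' | grep -E '^-Xm[sx]'

# Or, while the node is still responsive, ask the nodes info API:
curl -s 'http://172.16.2.32:9200/_nodes/jvm?pretty' | grep -E 'heap_(init|max)_in_bytes'
```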

Sorry, I don't know.
Where do I edit that?

You could start with the docs on setting the heap, and ask here for clarification on anything that is puzzling.
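On an Elasticsearch 2.x package install, the usual place to set the heap is the environment file that the init script sources, rather than the init script itself. A minimal sketch (file paths are assumptions based on a standard RPM/DEB install):

```
# RPM-based systems:  /etc/sysconfig/elasticsearch
# DEB-based systems:  /etc/default/elasticsearch

# Heap size; sets both -Xms and -Xmx. Keep it at or below roughly half the machine's RAM.
ES_HEAP_SIZE=2g
```

After editing, restart with "service elasticsearch restart" for the new heap size to take effect.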

When I start Elasticsearch I use "/etc/init.d/elasticsearch start". Can I put "-Xmx10g -Xms10g" into "/etc/init.d/elasticsearch"?
Thanks.

What version of Elasticsearch are you on? The stack trace above looks like it comes from 2.3.x; is that correct?

Version: 2.3.3, Build: 218bdf1/2016-05-17T15:40:04Z, JVM: 1.8.0_92

My machine has 4 GB of RAM.
I have set "export ES_HEAP_SIZE=2g", but I got a new error:

[2016-06-06 12:33:19,135][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-06-06 12:33:19,302][INFO ][node ] [Kukulcan] version[2.3.3], pid[26966], build[218bdf1/2016-05-17T15:40:04Z]
[2016-06-06 12:33:19,303][INFO ][node ] [Kukulcan] initializing ...
[2016-06-06 12:33:19,940][INFO ][plugins ] [Kukulcan] modules [reindex, lang-expression, lang-groovy], plugins [head], sites [head]
[2016-06-06 12:33:19,961][INFO ][env ] [Kukulcan] using [1] data paths, mounts [[/app (/dev/vdb1)]], net usable_space [31.7gb], net total_space [49.2gb], spins? [possibly], types [ext4]
[2016-06-06 12:33:19,961][INFO ][env ] [Kukulcan] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-06-06 12:33:23,119][INFO ][node ] [Kukulcan] initialized
[2016-06-06 12:33:23,120][INFO ][node ] [Kukulcan] starting ...
[2016-06-06 12:33:23,200][INFO ][transport ] [Kukulcan] publish_address {mars1.test.com/172.16.2.32:9300}, bound_addresses {172.16.2.32:9300}
[2016-06-06 12:33:23,205][INFO ][discovery ] [Kukulcan] elasticsearch/mGCU0BAxS0qkDt4MVipaKg
[2016-06-06 12:33:26,248][INFO ][cluster.service ] [Kukulcan] new_master {Kukulcan}{mGCU0BAxS0qkDt4MVipaKg}{172.16.2.32}{mars1.test.com/172.16.2.32:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-06-06 12:33:26,287][INFO ][http ] [Kukulcan] publish_address {mars1.test.com/172.16.2.32:9200}, bound_addresses {172.16.2.32:9200}
[2016-06-06 12:33:26,287][INFO ][node ] [Kukulcan] started
[2016-06-06 12:33:29,830][INFO ][gateway ] [Kukulcan] recovered [562] indices into cluster_state
[2016-06-06 12:34:01,869][INFO ][node ] [Kukulcan] stopping ...
[2016-06-06 12:34:02,236][WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
    at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:56)
    at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
    at org.jboss.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
    at org.jboss.netty.channel.DefaultChannelPipeline.execute(DefaultChannelPipeline.java:636)
    at org.jboss.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
    at org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
    at org.jboss.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:781)
    at org.jboss.netty.channel.Channels.write(Channels.java:725)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784)
    at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:87)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
    at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:146)
    at org.elasticsearch.rest.action.support.RestResponseListener.processResponse(RestResponseListener.java:43)
    at org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:49)
    at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:89)
    at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85)
    at org.elasticsearch.action.bulk.TransportBulkAction$2.finishHim(TransportBulkAction.java:356)

There's no error there. There's a warning about the seccomp filters not being able to be installed because your kernel does not support them, and then it appears that you stopped the node intentionally. Note that the seccomp filters not being installed is not an error; Elasticsearch will operate just fine without them, but your instance will be less secure.

One additional comment: 562 indices (per the recovery log above) is a lot for what appears to be a single-node cluster. That's very likely why you were running into heap pressure with the defaults.
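If you want to see what is there and clean out old time-based indices, a rough sketch (it assumes the node at 172.16.2.32:9200 is reachable; only delete indices you are certain you no longer need):

```
# List every index with its doc count and on-disk size:
curl -s 'http://172.16.2.32:9200/_cat/indices?v'

# Example only: delete one old daily index (name taken from your logs above; substitute your own).
curl -XDELETE 'http://172.16.2.32:9200/mars1-2016.06.05'
```

Longer term, a tool such as Elasticsearch Curator is the usual way to delete or close old daily indices automatically.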


Thank you very much.
I will look into the details further.
This topic can be closed.