Our Elasticsearch server stops serving connections after running for a while

Hi all

I am running Elasticsearch on a dedicated server (4 vCPUs, 8 GB RAM) in the
Amazon cloud, with version 1.5.0 installed.

In the web application I cache the search queries, but the ES server still has
to respond to many connections at the same time (about 20, I think).

I wrote a PHP library that talks to the ES server via curl, e.g. curl -XGET
http://elastic_server_ip:9200/...

Here is the query used in the web application:

$query = array(
    'function_score' => array(
        'query' => array(
            'bool' => array(
                'must' => array(
                    array(
                        'multi_match' => array(
                            'query' => $keyword,
                            'fields' => array('name', 'vendor'),
                            'operator' => 'or'
                        )
                    ),
                    array('term' => array('publish' => 1)),
                    array('term' => array('app_type' => $this->store))
                )
            )
        ),
        'script_score' => array(
            'script' => "_score + doc['rank'].value"
        )
    )
);

$params = array(
    'fields' => array('_id'),
    'query' => $query,
    'size' => $this->limit,
    'from' => $this->start,
    'sort' => array(
        '_score' => 'desc'
    )
);
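
For context, this is roughly how the library sends that query. This is a
stripped-down sketch: the endpoint path /my_index/apps/_search is my guess from
the index and mapping names in the logs below, and the real code adds the
caching mentioned above:

$body = json_encode($params);

$ch = curl_init('http://elastic_server_ip:9200/my_index/apps/_search');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');   // search request with a body
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);            // keep PHP workers from piling up if ES hangs
$response = curl_exec($ch);
curl_close($ch);

$result = json_decode($response, true);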

In /etc/init.d/elasticsearch (the init script):

ES_HEAP_SIZE=2G
ES_HEAP_NEWSIZE=
ES_MIN_MEM=512M
ES_MAX_MEM=1G
MAX_LOCKED_MEMORY=unlimited
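
(As far as I understand, when ES_HEAP_SIZE is set it takes precedence over
ES_MIN_MEM/ES_MAX_MEM, so the JVM should effectively be started with
-Xms2G -Xmx2G.)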

In /etc/elasticsearch/elasticsearch.yml, all settings are at their defaults
except the following:
script.groovy.sandbox.enabled: true

bootstrap.mlockall: true

threadpool.index.type: fixed
threadpool.index.size: 4
threadpool.index.queue_size: 1000
threadpool.search.queue_size: 1000
threadpool.search.type: cached
threadpool.bulk.type: fixed
threadpool.bulk.size: 4 # availableProcessors
threadpool.bulk.queue_size: 1000
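
(If it helps with diagnosis, I can also post the thread pool queue and
rejection counts from curl -XGET 'http://elastic_server_ip:9200/_cat/thread_pool?v'
taken while the server is hanging.)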

With these settings the ES server runs normally for a while (about 1 hour),
but after that it no longer responds to any connections.
If I reduce the heap size to 1G, the server only stays responsive for about
10 minutes.
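
(I can also sample heap usage over time via
curl -XGET 'http://elastic_server_ip:9200/_nodes/stats/jvm?pretty' and post
that as well, if useful.)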

Here are some logs from when ES starts:
[2015-03-30 06:59:05,877][DEBUG][gateway.local.state.shards] [Arkus] [my_index][3] shard state info found: [version [10], primary [true]]
[2015-03-30 06:59:05,877][DEBUG][gateway.local ] [Arkus] [my_index][3]: throttling allocation [[my_index][3], node[null], [P], s[UNASSIGNED]] to [[[Arkus][RQCnuFvBQbypN5GCUa04zw][ip-10-0-0-230][inet[/10.0.0.230:9300]]]] on primary allocation
[2015-03-30 06:59:05,880][DEBUG][cluster.service ] [Arkus] cluster state updated, version [2], source [local-gateway-elected-state]
[2015-03-30 06:59:05,881][DEBUG][cluster.service ] [Arkus] publishing cluster state version 2
[2015-03-30 06:59:05,881][DEBUG][cluster.service ] [Arkus] set local cluster state to version 2
[2015-03-30 06:59:05,881][DEBUG][indices.cluster ] [Arkus] [my_index] creating index
[2015-03-30 06:59:05,882][DEBUG][indices ] [Arkus] creating Index [my_index], shards [5]/[1]
[2015-03-30 06:59:06,078][DEBUG][index.mapper ] [Arkus] [my_index] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/usr/share/elasticsearch/lib/elasticsearch-1.5.0.jar!/org/elasticsearch/index/mapper/default-mapping.json], default percolator mapping: location[null], loaded_from[null]
[2015-03-30 06:59:06,078][DEBUG][index.cache.query.parser.resident] [Arkus] [my_index] using [resident] query cache with max_size [100], expire [null]
[2015-03-30 06:59:06,082][DEBUG][index.store.fs ] [Arkus] [my_index] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
[2015-03-30 06:59:06,126][DEBUG][action.search.type ] [Arkus] All shards failed for phase: [query]
org.elasticsearch.indices.IndexMissingException: [my_index] missing
    at org.elasticsearch.indices.IndicesService.indexServiceSafe(IndicesService.java:284)
    at org.elasticsearch.search.SearchService.createContext(SearchService.java:544)
    at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
    at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
    at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
    at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
    at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2015-03-30 06:59:06,134][DEBUG][indices.cluster ] [Arkus] [my_index] adding mapping [apps], source [{"apps":{"properties":{"app_type":{"type":"string","index_analyzer":"index_analyzer"},"name":{"type":"string","index_analyzer":"index_analyzer","search_analyzer":"search_name_analyzer","search_quote_analyzer":"index_analyzer"},"pubish":{"type":"integer"},"publish":{"type":"long"},"rank":{"type":"double"},"vendor":{"type":"string","index_analyzer":"index_analyzer","search_analyzer":"search_name_analyzer","search_quote_analyzer":"index_analyzer"}}}}]
[2015-03-30 06:59:06,171][DEBUG][action.search.type ] [Arkus] All shards failed for phase: [query]
org.elasticsearch.index.IndexShardMissingException: [my_index][0] missing
    at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:210)
    at org.elasticsearch.search.SearchService.createContext(SearchService.java:545)
    at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)

And here are the logs from between the normal period and the period when it
stops responding:
[2015-03-30 06:59:07,845][DEBUG][script ] [Arkus] notifying script services of script removal due to: [REPLACED]
[2015-03-30 06:59:07,865][DEBUG][script ] [Arkus] notifying script services of script removal due to: [REPLACED]
[2015-03-30 06:59:15,849][DEBUG][cluster.service ] [Arkus] processing [routing-table-updater]: execute
[2015-03-30 06:59:15,850][DEBUG][cluster.service ] [Arkus] processing [routing-table-updater]: no change in cluster_state
[2015-03-30 06:59:29,711][DEBUG][http.netty ] [Arkus] Caught exception while handling client http traffic, closing connection [id: 0x6ca35b4e, /10.0.0.166:36271 :> /10.0.0.230:9200]
java.nio.channels.ClosedChannelException
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:128)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
    at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
    at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
    at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784)
    at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:87)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
    at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:199)
    at org.elasticsearch.rest.action.support.RestResponseListener.processResponse(RestResponseListener.java:43)
    at org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:49)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.doRun(TransportSearchQueryThenFetchAction.java:149)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2015-03-30 06:59:34,811][DEBUG][indices.memory ] [Arkus] recalculating shard indexing buffer (reason=[[ADDED]]), total is [201.4mb] with [5] active shards, each shard set to indexing=[40.2mb], translog=[64kb]

[2015-03-30 07:05:12,474][DEBUG][http.netty ] [Madame Menace] Caught exception while handling client http traffic, closing connection [id: 0x31d02618, /10.0.0.166:59967 :> /10.0.0.230:9200]
java.nio.channels.ClosedChannelException
    (identical stack trace to the one above)
[2015-03-30 07:05:18,046][INFO ][monitor.jvm ] [Madame Menace] [gc][old][97][32] duration [5.6s], collections [1]/[5.8s], total [5.6s]/[2m], memory [1.7gb]->[1.8gb]/[1.9gb], all_pools {[young] [100.6mb]->[164.7mb]/[266.2mb]}{[survivor] [0b]->[0b]/[33.2mb]}{[old] [1.6gb]->[1.6gb]/[1.6gb]}
[2015-03-30 07:05:21,899][DEBUG][monitor.jvm ] [Madame Menace] [gc][old][98][33] duration [3.7s], collections [1]/[3.8s], total [3.7s]/[2m], memory [1.8gb]->[1.7gb]/[1.9gb], all_pools {[young] [164.7mb]->[103mb]/[266.2mb]}{[survivor] [0b]->[0b]/[33.2mb]}{[old] [1.6gb]->[1.6gb]/[1.6gb]}
[2015-03-30 07:05:27,746][INFO ][monitor.jvm ] [Madame Menace] [gc][old][99][34] duration [5.6s], collections [1]/[5.8s], total [5.6s]/[2.1m], memory [1.7gb]->[1.7gb]/[1.9gb], all_pools {[young] [103mb]->[72.4mb]/[266.2mb]}{[survivor] [0b]->[0b]/[33.2mb]}{[old] [1.6gb]->[1.6gb]/[1.6gb]}

Looking at the [gc][old] lines, the old generation stays at its 1.6gb limit
even after full collections and each pause lasts several seconds, so the heap
appears to be effectively exhausted by that point. Do you have any suggestions
for this problem?

Thank you very much
