Hi,
First, thank you for this wonderful product!
I'm using it for search and tags on EklaBlog, a French blogging platform.
For a few weeks I've had a problem with ES: a few minutes or hours after it
starts, one ES thread takes 100% of a CPU core while the other threads sit at 0%,
and ES stops responding to any request.
I then restart ES and it works again for a few minutes...
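The next time it happens I will try to capture what the busy thread is doing. I'm thinking of something along these lines against the ES java process (the pid here is just a placeholder, not a real value):

    top -H -p <es_pid>                       # shows per-thread CPU usage, to spot the thread stuck at 100%
    jstack <es_pid> > /tmp/es-threads.txt    # thread dump, to see what that thread is executing

I can post the thread dump here if that helps.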
ES is installed on a single server with this configuration:
network.publish_host: meriadec
cluster:
  name: eklablog
path:
  logs: /var/log/elasticsearch
  data: /home/elasticsearch
discovery.zen.ping:
  multicast.enabled: false
and, in the service wrapper configuration:
set.default.ES_HOME=/opt/elasticsearch
set.default.ES_MIN_MEM=1024
set.default.ES_MAX_MEM=5000
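(As far as I understand the service wrapper, the two memory values are in MB and simply become the JVM heap flags, roughly:

    -Xms1024m -Xmx5000m

That is my assumption; I haven't checked the actual java command line.)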
Currently I have an error that occurs every time, but I don't know if it's
the cause:
[2011-11-08 00:38:52,263][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 00:38:52,268][INFO ][node ] [Pip the Troll] {0.18.2}[23851]: initializing ...
[2011-11-08 00:38:52,271][INFO ][plugins ] [Pip the Troll] loaded [], sites []
[2011-11-08 00:38:53,720][INFO ][node ] [Pip the Troll] {0.18.2}[23851]: initialized
[2011-11-08 00:38:53,721][INFO ][node ] [Pip the Troll] {0.18.2}[23851]: starting ...
[2011-11-08 00:38:53,767][INFO ][transport ] [Pip the Troll] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 00:38:56,776][INFO ][cluster.service ] [Pip the Troll] new_master [Pip the Troll][83bg-5soRT-7QqHFEhVfmA][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 00:38:56,816][INFO ][discovery ] [Pip the Troll] eklablog/83bg-5soRT-7QqHFEhVfmA
[2011-11-08 00:38:56,899][INFO ][http ] [Pip the Troll] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 00:38:56,900][INFO ][node ] [Pip the Troll] {0.18.2}[23851]: started
[2011-11-08 00:38:57,745][INFO ][gateway ] [Pip the Troll] recovered [4] indices into cluster_state
[2011-11-08 00:52:37,793][DEBUG][action.search.type ] [Pip the Troll] [1094] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search context found for id [1094]
    at org.elasticsearch.search.SearchService.findContext(SearchService.java:411)
    at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:386)
    at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:314)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:145)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:132)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
[2011-11-08 00:52:38,101][DEBUG][action.search.type ] [Pip the Troll] [912] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search context found for id [912]
    at org.elasticsearch.search.SearchService.findContext(SearchService.java:411)
    at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:386)
    at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:314)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:145)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:132)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
And I sometimes get another error:
[2011-11-08 01:49:04,063][WARN ][http.netty ] [Pip the Troll] Caught exception while handling client http traffic, closing connection
java.io.IOException: Relais brisé (pipe)   [French locale for "Broken pipe"]
    at sun.nio.ch.FileDispatcher.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
    at sun.nio.ch.IOUtil.write(IOUtil.java:28)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
    at org.elasticsearch.common.netty.channel.socket.nio.SocketSendBufferPool$PooledSendBuffer.transferTo(SocketSendBufferPool.java:239)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.write0(NioWorker.java:470)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.writeFromUserCode(NioWorker.java:388)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:137)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:771)
    at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:632)
    at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:593)
    at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:99)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
    at org.elasticsearch.common.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:104)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:302)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.unfoldAndFireMessageReceived(ReplayingDecoder.java:522)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:506)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:443)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:65)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:274)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:261)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:349)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
And ES sometimes seems to restart again and again on its own, without any
intervention from me (note also the ElasticSearchParseException at 02:04:06 in
this log, which I come back to after it):
[2011-11-08 01:59:09,348][DEBUG][action.search.type ] [Pip the Troll] [3606] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search context found for id [3606]
    at org.elasticsearch.search.SearchService.findContext(SearchService.java:411)
    at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:386)
    at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:314)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:145)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:132)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
[2011-11-08 02:02:42,419][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 02:02:42,424][INFO ][node ] [Bengal] {0.18.2}[28365]: initializing ...
[2011-11-08 02:02:42,427][INFO ][plugins ] [Bengal] loaded [], sites []
[2011-11-08 02:02:43,873][INFO ][node ] [Bengal] {0.18.2}[28365]: initialized
[2011-11-08 02:02:43,874][INFO ][node ] [Bengal] {0.18.2}[28365]: starting ...
[2011-11-08 02:02:43,920][INFO ][transport ] [Bengal] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 02:02:46,929][INFO ][cluster.service ] [Bengal] new_master [Bengal][P8GUsHRtRc-GAn88FELsYA][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 02:02:46,967][INFO ][discovery ] [Bengal] eklablog/P8GUsHRtRc-GAn88FELsYA
[2011-11-08 02:02:47,051][INFO ][http ] [Bengal] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 02:02:47,052][INFO ][node ] [Bengal] {0.18.2}[28365]: started
[2011-11-08 02:02:47,905][INFO ][gateway ] [Bengal] recovered [4] indices into cluster_state
[2011-11-08 02:04:06,603][DEBUG][action.index ] [Bengal] [eklablog][2], node[P8GUsHRtRc-GAn88FELsYA], [P], s[STARTED]: Failed to execute [index {[eklablog][mod_html][247468], source[]}]
org.elasticsearch.ElasticSearchParseException: Failed to derive xcontent from (offset=0, length=0): []
    at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:147)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:430)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:409)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:302)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:181)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:487)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:400)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
[2011-11-08 02:12:16,600][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 02:12:16,604][INFO ][node ] [Wrath] {0.18.2}[29245]: initializing ...
[2011-11-08 02:12:16,608][INFO ][plugins ] [Wrath] loaded [], sites []
[2011-11-08 02:12:18,054][INFO ][node ] [Wrath] {0.18.2}[29245]: initialized
[2011-11-08 02:12:18,055][INFO ][node ] [Wrath] {0.18.2}[29245]: starting ...
[2011-11-08 02:12:18,101][INFO ][transport ] [Wrath] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 02:12:21,109][INFO ][cluster.service ] [Wrath] new_master [Wrath][I-Ps5UZiQYOKaHakZqRw8g][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 02:12:21,138][INFO ][discovery ] [Wrath] eklablog/I-Ps5UZiQYOKaHakZqRw8g
[2011-11-08 02:12:21,223][INFO ][http ] [Wrath] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 02:12:21,223][INFO ][node ] [Wrath] {0.18.2}[29245]: started
[2011-11-08 02:12:22,248][INFO ][gateway ] [Wrath] recovered [4] indices into cluster_state
[2011-11-08 02:30:46,460][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 02:30:46,464][INFO ][node ] [Warstar] {0.18.2}[30351]: initializing ...
[2011-11-08 02:30:46,468][INFO ][plugins ] [Warstar] loaded [], sites []
[2011-11-08 02:30:47,917][INFO ][node ] [Warstar] {0.18.2}[30351]: initialized
[2011-11-08 02:30:47,917][INFO ][node ] [Warstar] {0.18.2}[30351]: starting ...
[2011-11-08 02:30:47,963][INFO ][transport ] [Warstar] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 02:30:51,093][INFO ][node ] [Warstar] {0.18.2}[30351]: started
[2011-11-08 02:30:52,001][INFO ][gateway ] [Warstar] recovered [4] indices into cluster_state
[2011-11-08 02:54:00,009][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 02:54:00,013][INFO ][node ] [Starr the Slayer] {0.18.2}[31715]: initializing ...
[2011-11-08 02:54:00,017][INFO ][plugins ] [Starr the Slayer] loaded [], sites []
[2011-11-08 02:54:01,469][INFO ][node ] [Starr the Slayer] {0.18.2}[31715]: initialized
[2011-11-08 02:54:01,469][INFO ][node ] [Starr the Slayer] {0.18.2}[31715]: starting ...
[2011-11-08 02:54:01,516][INFO ][transport ] [Starr the Slayer] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 02:54:04,524][INFO ][cluster.service ] [Starr the Slayer] new_master [Starr the Slayer][jQhaiX6MRhaZFor1FPB-jg][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 02:54:04,551][INFO ][discovery ] [Starr the Slayer] eklablog/jQhaiX6MRhaZFor1FPB-jg
[2011-11-08 02:54:04,635][INFO ][http ] [Starr the Slayer] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 02:54:04,635][INFO ][node ] [Starr the Slayer] {0.18.2}[31715]: started
[2011-11-08 02:54:05,613][INFO ][gateway ] [Starr the Slayer] recovered [4] indices into cluster_state
[2011-11-08 02:56:00,649][WARN ][http.netty ] [Starr the Slayer] Caught exception while handling client http traffic, closing connection
java.io.IOException: Connexion ré-initialisée par le correspondant   [French locale for "Connection reset by peer"]
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:202)
    at sun.nio.ch.IOUtil.read(IOUtil.java:169)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:321)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
[2011-11-08 03:05:50,114][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 03:05:50,119][INFO ][node ] [Golden Oldie] {0.18.2}[554]: initializing ...
[2011-11-08 03:05:50,122][INFO ][plugins ] [Golden Oldie] loaded [], sites []
[2011-11-08 03:05:51,568][INFO ][node ] [Golden Oldie] {0.18.2}[554]: initialized
[2011-11-08 03:05:51,569][INFO ][node ] [Golden Oldie] {0.18.2}[554]: starting ...
[2011-11-08 03:05:51,614][INFO ][transport ] [Golden Oldie] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 03:05:54,623][INFO ][cluster.service ] [Golden Oldie] new_master [Golden Oldie][jK09VMmGS62LZpII6CDKSQ][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 03:05:54,663][INFO ][discovery ] [Golden Oldie] eklablog/jK09VMmGS62LZpII6CDKSQ
[2011-11-08 03:05:54,747][INFO ][http ] [Golden Oldie] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 03:05:54,748][INFO ][node ] [Golden Oldie] {0.18.2}[554]: started
[2011-11-08 03:05:55,721][INFO ][gateway ] [Golden Oldie] recovered [4] indices into cluster_state
[2011-11-08 03:16:23,876][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 03:16:23,881][INFO ][node ] [Kraven the Hunter] {0.18.2}[1345]: initializing ...
[2011-11-08 03:16:23,885][INFO ][plugins ] [Kraven the Hunter] loaded [], sites []
[2011-11-08 03:16:25,334][INFO ][node ] [Kraven the Hunter] {0.18.2}[1345]: initialized
[2011-11-08 03:16:25,334][INFO ][node ] [Kraven the Hunter] {0.18.2}[1345]: starting ...
[2011-11-08 03:16:25,380][INFO ][transport ] [Kraven the Hunter] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 03:16:28,389][INFO ][cluster.service ] [Kraven the Hunter] new_master [Kraven the Hunter][_d9OtWCkQ_e3NZ-ncuotcQ][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 03:16:28,423][INFO ][discovery ] [Kraven the Hunter] eklablog/_d9OtWCkQ_e3NZ-ncuotcQ
[2011-11-08 03:16:28,511][INFO ][http ] [Kraven the Hunter] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 03:16:28,511][INFO ][node ] [Kraven the Hunter] {0.18.2}[1345]: started
[2011-11-08 03:16:29,475][INFO ][gateway ] [Kraven the Hunter] recovered [4] indices into cluster_state
[2011-11-08 03:36:34,137][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 03:36:34,141][INFO ][node ] [Dragoness] {0.18.2}[2557]: initializing ...
[2011-11-08 03:36:34,145][INFO ][plugins ] [Dragoness] loaded [], sites []
[2011-11-08 03:36:35,592][INFO ][node ] [Dragoness] {0.18.2}[2557]: initialized
[2011-11-08 03:36:35,592][INFO ][node ] [Dragoness] {0.18.2}[2557]: starting ...
[2011-11-08 03:36:35,638][INFO ][transport ] [Dragoness] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 03:36:38,647][INFO ][cluster.service ] [Dragoness] new_master [Dragoness][JWxLhulPTPif66_2a0giKg][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 03:36:38,671][INFO ][discovery ] [Dragoness] eklablog/JWxLhulPTPif66_2a0giKg
[2011-11-08 03:36:38,756][INFO ][http ] [Dragoness] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 03:36:38,757][INFO ][node ] [Dragoness] {0.18.2}[2557]: started
[2011-11-08 03:36:39,669][INFO ][gateway ] [Dragoness] recovered [4] indices into cluster_state
[2011-11-08 03:53:02,838][WARN ][common.jna ] Unknown mlockall error 0
[2011-11-08 03:53:02,842][INFO ][node ] [Ghaur] {0.18.2}[3612]: initializing ...
[2011-11-08 03:53:02,846][INFO ][plugins ] [Ghaur] loaded [], sites []
[2011-11-08 03:53:04,294][INFO ][node ] [Ghaur] {0.18.2}[3612]: initialized
[2011-11-08 03:53:04,294][INFO ][node ] [Ghaur] {0.18.2}[3612]: starting ...
[2011-11-08 03:53:04,340][INFO ][transport ] [Ghaur] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[meriadec/94.23.13.10:9300]}
[2011-11-08 03:53:07,349][INFO ][cluster.service ] [Ghaur] new_master [Ghaur][gGw47XfjT2ih_UcdOcScnw][inet[meriadec/94.23.13.10:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-08 03:53:07,378][INFO ][discovery ] [Ghaur] eklablog/gGw47XfjT2ih_UcdOcScnw
[2011-11-08 03:53:07,468][INFO ][http ] [Ghaur] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[meriadec/94.23.13.10:9200]}
[2011-11-08 03:53:07,469][INFO ][node ] [Ghaur] {0.18.2}[3612]: started
[2011-11-08 03:53:08,463][INFO ][gateway ] [Ghaur] recovered [4] indices into cluster_state
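One more remark about the ElasticSearchParseException at 02:04:06 in the log above: the failed index operation shows an empty source[] (offset=0, length=0), so I suspect our application sometimes sends a document with an empty body. If that is right, a request like this one (reusing the document id from the log) should reproduce that particular error on its own:

    curl -XPUT 'http://localhost:9200/eklablog/mod_html/247468' -d ''

I don't know whether this is related to the CPU problem, though.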
Thanks in advance for your help!!