Unavailable_shards_exception, reason: primary shard is not active


(CaoPing) #1

Can anyone help me with this? It's pretty weird.

I'm using a simple architecture: Filebeat -> Logstash -> Elasticsearch, with only one instance each of Logstash and Elasticsearch.

When I started them, they worked well. After a while, however, there were many error messages in the Logstash log:

{:timestamp=>"2016-07-28T16:07:26.984000+0800", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}

{:timestamp=>"2016-07-28T16:08:28.805000+0800", :message=>"retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-ehn-2016.07.20][3] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [logstash-ehn-2016.07.20] containing [1] requests]"})", :level=>:info}

{:timestamp=>"2016-07-28T16:07:26.983000+0800", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}

What's more, the Elasticsearch log makes it look like there were several nodes, but I only have one Elasticsearch instance running.

[2016-07-28 12:51:44,178][INFO ][node ] [Stygyro] stopping ...
[2016-07-28 12:51:44,368][INFO ][node ] [Stygyro] closing ...
[2016-07-28 12:51:44,374][INFO ][node ] [Stygyro] closed
[2016-07-28 12:51:45,965][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-07-28 12:51:46,238][INFO ][node ] [Betty Brant Leeds] version[2.3.4], pid[4393], build[e455fd0/2016-06-30T11:24:31Z]
[2016-07-28 12:51:46,238][INFO ][node ] [Betty Brant Leeds] initializing ...
[2016-07-28 12:51:46,813][INFO ][plugins ] [Betty Brant Leeds] modules [lang-groovy, reindex, lang-expression], plugins [], sites []
[2016-07-28 12:51:46,837][INFO ][env ] [Betty Brant Leeds] using [1] data paths, mounts [[/home/work (/dev/sda3)]], net usable_space [65.6gb], net total_space [116.1gb], spins? [possibly], types [ext3]
[2016-07-28 12:51:46,837][WARN ][env ] [Betty Brant Leeds] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-28 16:27:50,089][WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)

Does anyone know why this happened and how to resolve it? Thanks a lot.
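In case it helps with diagnosis: a primary shard is usually "not active" because it is unassigned, for example after an unclean restart or when the disk watermark has been hit. A few commands worth running against the node (assuming Elasticsearch listens on localhost:9200; adjust host and port as needed):

```shell
# Overall cluster health -- "status": "red" means at least one
# primary shard is not active, which matches the 503 above.
curl -s 'http://localhost:9200/_cluster/health?pretty'

# List every shard and its state; look for UNASSIGNED primaries
# on the logstash-ehn-2016.07.20 index named in the error.
curl -s 'http://localhost:9200/_cat/shards/logstash-ehn-*?v'

# Free disk matters too: Elasticsearch 2.x stops allocating shards
# once the low disk watermark (85% used by default) is exceeded.
curl -s 'http://localhost:9200/_cat/allocation?v'
```

The output of _cat/shards should show which shards are UNASSIGNED and on which index, which narrows down whether this is an allocation problem or a node problem.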


(Vinod Hy) #2

I am facing the same issue as well. Please help me with the solution.
Some time back, I was getting an error in Kibana: "Elasticsearch is still initializing the kibana index".
So I ran curl -XDELETE http://localhost:9200/.kibana to delete the Kibana index.

Will this cause any issue? Please help me.
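For what it's worth (not an authoritative answer): deleting .kibana should not by itself cause unavailable_shards_exception on the logstash-* indices. Kibana recreates the .kibana index on its next start, and only saved dashboards, visualizations, and index patterns are lost, not the log data. You can check that the index came back healthy with something like:

```shell
# Confirm .kibana exists again and check its health column.
# Yellow is normal on a single-node cluster, since replica
# shards have nowhere to be assigned.
curl -s 'http://localhost:9200/_cat/indices/.kibana?v'
```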


(system) #3