My Elasticsearch doesn't work


Hello, this morning I noticed that Kibana suddenly wasn't showing any messages.
I found that the problem is in Elasticsearch, although the cluster health is yellow (so it seems to be OK). However, every time I send a message I'm not able to see it; I have to restart the service to start seeing messages again. What could I do?

(Luca Wintergerst) #2

Can you access any kind of ES API? For example these:

$ curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

$ curl -XGET 'http://localhost:9200'


Yes, this is what I get:

"cluster_name" : "logstash",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 1,
"active_primary_shards" : 221,
"active_shards" : 221,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 221,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0

(Luca Wintergerst) #4

Can you also give me the output of just $ curl -XGET 'http://localhost:9200'?

Did you restart it already? Or is it still in this odd state?


"status" : 200,
"name" : "Loggy",
"cluster_name" : "logstash",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
"tagline" : "You Know, for Search"

(Luca Wintergerst) #6

Okay, this looks as good as it can get.

What exactly do you mean when you say "everytime I want to send a message I'm not able to see it"?


I send UDP messages to Logstash (port 30000), but I'm not able to see them in Kibana.
I also tried to delete some indexes I'm not using from elasticsearch-head, and the acknowledgement comes back false.
What could be happening?

(Luca Wintergerst) #8

Oh, okay.

Can you set logging to debug and check the logs located in $ES_HOME/logs? The log level can be changed at runtime with PUT /_cluster/settings.
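
For example, something along these lines should do it (just a sketch using the dynamic cluster settings API; "logger.index.engine" is only an illustrative logger name, pick whichever component you want to trace):

$ curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "logger.index.engine" : "DEBUG"
  }
}'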


I think that this is the problem:
[2015-10-26 11:36:26,748][WARN ][index.engine ] [Loggy] [logstash-syslog-2015.10.26][0] failed engine [out of memory (source: [maybe_merge])]
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(
at org.apache.lucene.index.ConcurrentMergeScheduler.merge(
at org.elasticsearch.index.merge.EnableMergeScheduler.merge(
at org.apache.lucene.index.IndexWriter.maybeMerge(
at org.apache.lucene.index.IndexWriter.maybeMerge(
at org.elasticsearch.index.engine.InternalEngine.maybeMerge(
at org.elasticsearch.index.shard.IndexShard$EngineMerger$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$

(Mark Walkom) #10

Yeah, you need to check your heap size and, ideally, increase it.
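
If you installed from a package, the heap is usually set through the ES_HEAP_SIZE environment variable, for example (paths vary by distro, so treat this as a sketch):

# /etc/default/elasticsearch (Debian/Ubuntu) or /etc/sysconfig/elasticsearch (RPM)
ES_HEAP_SIZE=4g

Then restart the Elasticsearch service so the new heap size takes effect.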


I had 3 GB, but with the previous version (Elasticsearch 1.1.1 and Kibana 4) we had only 1.5 GB for ES_MAX_MEM and it worked fine. What could be the reason?

thank you


I have increased ES_HEAP_SIZE to 4g,
but when I tried sending 100 messages it got stuck... (same problem).
How is this possible?

(Luca Wintergerst) #13

How much memory does your host machine have?


8 GB, so I can't increase it any further.

(Luca Wintergerst) #15

Can you tell me a little more about the data in your ES?

How many documents, how much data in total (in GB), how many indices, and what kind of messages are you trying to send?
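
You can get most of that with, for example:

$ curl -XGET 'http://localhost:9200/_cat/indices?v'

which lists every index with its document count and size on disk.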


I have 44 indexes;
here I have a copy from elasticsearch-head.

I did a migration from another machine: in order to keep the same indexes I moved the directory /var/lib/elasticsearch/NAMEOFtHECLUSTER to another directory, and there I now have both the old indexes (from the other machine) and the new ones...

(Luca Wintergerst) #17

This looks fine. Should not be a problem :confused:

So you still get the OutOfMemoryError if you try to index new data?


Yes, when I send a large amount of data.

(Luca Wintergerst) #19

I don't know what else to do, I'm sorry. Maybe @warkolm can help you out.

(Mark Walkom) #20

You should reduce your shard count, that's likely causing the heap pressure.
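
For new daily Logstash indices you can do that with an index template, for example (the template name and the shard count here are just an illustration, tune them to your data volume):

$ curl -XPUT 'http://localhost:9200/_template/logstash_shards' -d '{
  "template" : "logstash-*",
  "settings" : {
    "index.number_of_shards" : 1
  }
}'

Existing indices keep their current shard count; the template only applies to indices created after it.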