My Elasticsearch doesn't work

Hello, this morning I noticed that Kibana suddenly wasn't showing any messages.
I found that the problem is in Elasticsearch, but the cluster health is yellow (which seems to be OK). However, every time I send a message I'm not able to see it; I have to restart the service before the messages show up again. What can I do?

Can you access any kind of ES API? For example these:

$ curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

$ curl -XGET 'http://localhost:9200'

Yes, this is what I get:

{
  "cluster_name" : "logstash",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 221,
  "active_shards" : 221,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 221,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
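For context: a yellow status where unassigned_shards equals active_shards usually just means each primary's replica has no second data node to be placed on. If you want to confirm which shards are unassigned, a call like the following should work on 1.7 (adjust the host and port to your setup):

$ curl -XGET 'http://localhost:9200/_cat/shards?v'

Shards with nowhere to go show up with the state UNASSIGNED in the output.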

Can you also give me the output of just $ curl -XGET 'http://localhost:9200'?

Did you restart it already? Or is it still in this odd state?

{
  "status" : 200,
  "name" : "Loggy",
  "cluster_name" : "logstash",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

Okay, this looks as good as it can get.

What exactly do you mean when you say "every time I send a message I'm not able to see it"?

I send UDP messages to Logstash (port 30000) but I'm not able to see them in Kibana.
I also tried to delete some indices I'm not using from elasticsearch-head, and the acknowledgement comes back false.
What could be happening?

Oh, okay.

Can you set logging to debug and check the logs located at $ES_HOME/logs?

PUT /_cluster/settings
{"transient":{"logger._root":"DEBUG"}}

I think that this is the problem:
[2015-10-26 11:36:26,748][WARN ][index.engine ] [Loggy] [logstash-syslog-2015.10.26][0] failed engine [out of memory (source: [maybe_merge])]
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at org.apache.lucene.index.ConcurrentMergeScheduler.merge(ConcurrentMergeScheduler.java:391)
    at org.elasticsearch.index.merge.EnableMergeScheduler.merge(EnableMergeScheduler.java:50)
    at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1985)
    at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1979)
    at org.elasticsearch.index.engine.InternalEngine.maybeMerge(InternalEngine.java:778)
    at org.elasticsearch.index.shard.IndexShard$EngineMerger$1.run(IndexShard.java:1241)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Yeah, you need to check your heap size and, ideally, increase it.
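If you want to see what the node is actually running with, the JVM node stats report the configured heap and current usage (example call, same default endpoint assumed):

$ curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty'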

I had 3 GB, but with the previous version (Elasticsearch 1.1.1 and Kibana 4) we had only 1.5 GB for ES_MAX_MEM and it worked fine. What could be the reason?

Thank you.

I have increased ES_HEAP_SIZE to 4g,
but when I tried sending 100 messages it got stuck... (same problem).
How is this possible?

How much memory does your host machine have?

8 GB, so I can't increase it any further.

Can you tell me a little more about the data in your ES?

How many documents, how much data in GB, how many indices, and what kind of messages are you trying to send?
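In case it helps, the _cat API summarizes most of that in one call (document count, store size, and shard count per index):

$ curl -XGET 'http://localhost:9200/_cat/indices?v'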

I have 44 indices;
here is a copy from elasticsearch-head.

I migrated from another machine, and in order to keep the same indices I moved the directory /var/lib/elasticsearch/NAMEOFtHECLUSTER to another directory, so now I have both the old indices (from the other machine) and the new ones...

This looks fine. Should not be a problem :confused:

So you still get the OutOfMemoryError if you try to index new data?

Yes, when I send a large amount of data.

I don't know what else to do, I'm sorry. Maybe @warkolm can help you out.

You should reduce your shard count; that's likely what's causing the heap pressure.
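As a rough sketch of what that could look like (assuming the default logstash-* daily index naming, and that replicas are not needed with a single data node; the template name here is made up), an index template applied to future indices might be:

$ curl -XPUT 'http://localhost:9200/_template/logstash_small' -d '
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'

Existing indices keep the shard count they were created with, so closing or deleting old daily indices is what actually brings the live shard count down.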