ES going down?

I am testing my ELK stack on a setup as follows:

Logs from multiple servers, covering about one week, are placed centrally on a
server by rsyslog. So I have ~1 week of logs in a file structure like:

Server-name/logdate/logname.log

I counted the log lines in all the files to know how many events (documents)
will go into ES, which came out to be:

Size: 22177095 (~2 crore, i.e. ~22 million) events

TEST CASE:
Net data size to be read by Logstash: 2.23 GB
No. of files: 2379
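
(For reference, the event count and data size came from something like the
following, run from the central directory rsyslog writes into; the exact path
is specific to my setup:)

# sum the line counts of every *.log file under Server-name/logdate/
find . -type f -name '*.log' -print0 | xargs -0 cat | wc -l
# total on-disk size of the same tree
du -sh .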

OpenJDK version: 1.8.0_20
64-bit Red Hat server
RAM: 32 GB
ES heap: 4 GB
Hard disk: 600 GB
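
(The 4 GB heap is just the standard ES 1.x heap setting; assuming the RPM
install, it would be set roughly like this:)

# /etc/sysconfig/elasticsearch, or exported before starting bin/elasticsearch
ES_HEAP_SIZE=4g
export ES_HEAP_SIZE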

I have only 1 node in my cluster, with 5 shards and 1 replica.
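
(For reference, the shard layout and cluster health can be checked with the
cat and health APIs, assuming the default HTTP port 9200:)

# one line per shard: index, shard number, p(rimary)/r(eplica), state, node
curl -s 'localhost:9200/_cat/shards?v'
# yellow here just means the replicas of a single-node cluster are unassigned
curl -s 'localhost:9200/_cluster/health?pretty'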
Result:

ES stopped automatically after ~4 hours.
From Kibana I can see that out of the ~2 crore log lines, about 1 crore had
been read before ES stopped.

I checked the ES logs and they just show:

[2014-12-10 16:28:36,732][INFO ][node ] [Dr. Marla Jameson] stopping ...
[2014-12-10 16:28:37,346][INFO ][node ] [Dr. Marla Jameson] stopped
[2014-12-10 16:28:37,347][INFO ][node ] [Dr. Marla Jameson] closing ...
[2014-12-10 16:28:37,381][INFO ][node ] [Dr. Marla Jameson] closed

But no error message.

And the Logstash logs show:

log4j, [2014-12-10T13:52:09.854] INFO: org.elasticsearch.discovery.zen: [logstash-XXXX-18286-6068] master_left [[Toxin][zNiBpLdOTVG6uiyK_Pg9WA][XXXX][inet[/xx.xx.xx.x:9301]]], reason [transport disconnected (with verified connect)]

log4j, [2014-12-10T13:52:09.859] WARN: org.elasticsearch.discovery.zen: [logstash-XXXX-18286-6068] master_left and no other node elected to become master, current nodes: {[logstash-XXXX-18286-6068][RIlLo2hKQ0WvAGYnyTr9vQ][XXXX][inet[/xx.xx.xx.xx:9300]]{data=false, client=true},}

log4j, [2014-12-10T13:52:09.860] INFO: org.elasticsearch.cluster.service: [logstash-XXXX-18286-6068] removed {[Toxin][zNiBpLdOTVG6uiyK_Pg9WA][XXXX][inet[/xx.xx.xx.xx:9301]],}, reason: zen-disco-master_failed ([Toxin][zNiBpLdOTVG6uiyK_Pg9WA][XXXX][inet[/xx.xx.xx.xx:9301]])

When I restart ES and Logstash, Logstash resumes reading the logs from where
it left off, but ES goes down again after ~1 hour.
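
(The resume-on-restart behaviour is presumably Logstash's file input tracking
read positions in sincedb files; assuming the default sincedb_path, they live
in the home directory of the user running Logstash:)

# position-tracking files written by the file input (default location)
ls -la ~/.sincedb*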

Is there a limit to the size of an index?
When I do du -sh * on my index directory:

640M 0
643M 1
640M 2
642M 3
634M 4
8.0K _state

Why is ES going down without any error message? How do I solve this?


If ES isn't crashing then something is stopping the process.

Are you using monit or anything like that?
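
A few things worth checking on the ES host, assuming a standard RHEL service
install (adjust paths to your setup):

# did the kernel OOM killer take the JVM out? (an OOM kill is a SIGKILL, so it
# would leave no 'stopping ...' lines, but it is worth ruling out)
dmesg | grep -iE 'killed process|out of memory'
# anything in the system log around the time ES stopped (16:28)?
sudo grep -i elasticsearch /var/log/messages
# is monit (or a similar watchdog) managing the elasticsearch process?
grep -ril elasticsearch /etc/monit* 2>/dev/null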

On 10 December 2014 at 14:21, Siddharth Trikha siddharthtrikha9@gmail.com
wrote:
