How many tcp connections should ES/logstash generate?

I have a basic setup with a logstash shipper, an indexer and an
elasticsearch cluster.
Elasticsearch listens on the standard ports 9200/9300, and the logstash
indexer on 9301/9302.

When I do a netstat | wc -l for the ES process, I find 184 connections
(sample below):

tcp 0 0 ::ffff:172.17.7.87:9300  ::ffff:172.17.8.39:59573 ESTABLISHED 23224/java
tcp 0 0 ::ffff:172.17.7.87:9300  ::ffff:172.17.7.87:47609 ESTABLISHED 23224/java
tcp 0 0 ::ffff:172.17.7.87:53493 ::ffff:172.17.7.87:9302  ESTABLISHED 23224/java
tcp 0 0 ::ffff:172.17.7.87:9300  ::ffff:172.17.8.39:59564 ESTABLISHED 23224/java
tcp 0 0 ::ffff:172.17.7.87:9300  ::ffff:172.17.7.87:47657 ESTABLISHED 23224/java

Same thing for the logstash indexer, 160 connections (sample below):

tcp 0 0 ::ffff:172.17.7.87:50132 ::ffff:172.17.8.39:9300  ESTABLISHED 1516/java
tcp 0 0 ::ffff:172.17.7.87:9301  ::ffff:172.17.7.87:60153 ESTABLISHED 1516/java
tcp 0 0 ::ffff:172.17.7.87:9301  ::ffff:172.17.7.87:60145 ESTABLISHED 1516/java
tcp 0 0 ::ffff:172.17.7.87:50129 ::ffff:172.17.8.39:9300  ESTABLISHED 1516/java
tcp 0 0 ::ffff:172.17.7.87:9302  ::ffff:172.17.7.87:53501 ESTABLISHED 1516/java
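
For reference, the counts come from something along these lines (the exact
netstat flags may differ; 23224 and 1516 are the PIDs visible in the
samples):

  # count one process's TCP connections
  netstat -tnp 2>/dev/null | grep '23224/java' | wc -l   # elasticsearch
  netstat -tnp 2>/dev/null | grep '1516/java' | wc -l    # logstash indexer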

Also, not sure if it's related, but when I try to delete some documents by
query:

curl -XDELETE 'http://localhost:9200/check/_query?pretty=1' -d
'{"query":{"range":{"@timestamp":{"from":"2014-07-10T00:00:00","to":"2014-07-14T05:00:00"}}}}'

I get:

"RemoteTransportException[[Stonewall][inet[/172.17.8.39:9300]][deleteByQuery/shard]];
nested: OutOfMemoryError[unable to create new native thread]; "

I have a script that runs this kind of query every 30 seconds to clean up
this particular index.

It'd depend on your config I'd guess, in particular how many
workers/threads you have and what ES output you are using in LS.

Why are you cleaning an index like this anyway? It seems horribly
inefficient.
Basically the error is "OutOfMemoryError", which means you've run out of
heap for the operation to complete. What are the specs for your node, how
much heap does ES have?
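
One quick way to see thread counts on the ES side, assuming you are on a
1.x release with the node stats API, is:

  curl -s 'http://localhost:9200/_nodes/stats/thread_pool?pretty'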

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com

I'm not sure how to find the answer to that; I use the default settings in
ES. The cluster is composed of 2 read/write nodes and a read-only node.
There is 1 Logstash instance that simply outputs 2 types of data to ES.
Nothing fancy.

I need to delete documents older than a day, and for this particular thing
I can't create a daily index. Is there a better way?

I'm using an EC2 m3.large instance, ES has 1.5GB of heap.

It seems like I'm hitting an OS limit; I can't "su - elasticsearch":

su: /bin/bash: Resource temporarily unavailable

Stopping elasticsearch fixes this issue, so the two are directly linked.

-bash-4.1$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29841
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
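
I suspect the relevant line is "max user processes (-u) 1024"; on Linux
that limit counts threads as well. A way to count threads against it,
assuming elasticsearch is the user the service runs as:

  ps -u elasticsearch -L --no-headers | wc -l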

Looks like I have the same issue as this Server Fault question: "Cannot
switch, ssh to specific user: su: cannot set user id: Resource temporarily
unavailable?". Is it normal that ES spawns that many processes, over 1000?

First, you should always run ES under another user with the least possible
privileges, so you can still log in even if ES is running out of process
space. (There are more security-related issues that everyone should care
about; I leave them out here.)

Second, ES is not meant to run so many processes. On the other hand, ES
does not refuse to spawn plenty of threads when it is retrying hard to
recover from network-related problems. You may be able to see what the
threads are doing by executing a "hot threads" command (see
cluster-nodes-hot-threads in the reference docs).
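
Something like this, against any node of the cluster:

  curl -s 'http://localhost:9200/_nodes/hot_threads'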

Third, you run a "delete by query" command every 30 secs with a range
spanning many days. That does not seem to make sense. You should always
take care that such queries complete before continuing; they can take a
very long time (I mean hours), and they put a burden on your system. Set
up daily indices instead; this is much more efficient, and deleting a
day's data is then a matter of seconds.
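
For example, with a dated index name (the naming is only an illustration),
yesterday's data disappears with a single index deletion:

  curl -XDELETE 'http://localhost:9200/check-2014.07.15'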

Jörg

Thanks for your input. I'm running ES as another user; I still had root
access.

I will refactor to create an index per day, and every 30 secs I'll simply
delete yesterday's index. I'm hoping this greatly reduces the number of
threads.
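
Roughly this, as a sketch (the check-YYYY.MM.DD naming is my assumption,
and it relies on GNU date):

  # drop yesterday's daily index instead of running delete-by-query
  curl -XDELETE "http://localhost:9200/check-$(date -d yesterday +%Y.%m.%d)"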

If you are using daily indexes then don't even bother running the delete,
just drop the index when the next day rolls around.

"Resource temporarily unavailable" could indicate you may need to increase
the ulimit for the user, did you set this in /etc/default/elasticsearch?
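
If your version's /etc/default/elasticsearch has no setting for it, a
common alternative is raising the per-user process/thread limit in
/etc/security/limits.conf (the values below are only illustrative):

  elasticsearch soft nproc 4096
  elasticsearch hard nproc 4096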

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com

My issue is fixed by creating and dropping daily indices.

The "resource temporarily unavailable" error was due to the 1024 max user
processes limit for the elasticsearch user. Not deleting by range cut the
number of processes by 10x, and I also increased the ulimit for nproc.

Thanks all for your help.
