I am new to Elasticsearch and Logstash.
I am using elasticsearch-1.1.1 and logstash-1.4.2-1 with Kibana.
It's a single node with 4 vCPUs and 30 GB of physical memory.
Currently Logstash (single node) receives logs from 40 JBoss servers.
Most of the time Elasticsearch uses almost all of the CPU.
Is there any way I can limit the CPU consumption by tuning?
PID   USER     PR NI VIRT  RES  SHR  S %CPU  %MEM TIME+   COMMAND
12216 elastics 20  0 514g  19g  3.6g S 389.8 63.5 3224:32 /usr/bin/java -Xms15g -Xmx15g -Xss256k -Djava.awt.headles
11722 logstash 39 19 3443m 1.2g 6496 S   8.6  3.9 2037:27 /usr/bin/java -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -X
I agree with what Mark said. nice will just be masking a deeper issue. Have
you tried looking at hot threads (the /_nodes/hot_threads API)?
Also, if you are seeing CPUs sustained at 100%, that looks like old-generation
GCs that never finish, so check the GC logs.
Do you have any idea of the number of events per second you are trying to
index, and the size of those events? If you are using Logstash and Redis, is
the queue backing up because Elasticsearch can't index fast enough?
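The "old GCs never finishing" symptom is easy to spot once GC logging is on (e.g. -verbose:gc -XX:+PrintGCDetails -Xloggc:...). A minimal sketch of scanning such a log for long old-generation pauses; the sample lines and threshold are illustrative assumptions, not data from this cluster:

```python
# Flag long Full GC pauses in a -XX:+PrintGCDetails style log.
import re

sample_gc_log = """\
2015-01-07T10:15:01.123+0000: 3601.234: [GC (Allocation Failure) 12000M->9000M(15360M), 0.0456789 secs]
2015-01-07T10:15:22.456+0000: 3622.567: [Full GC (Ergonomics) 14900M->14700M(15360M), 12.3456789 secs]
"""

def long_full_gcs(log_text, threshold_secs=1.0):
    """Return durations of Full GC pauses longer than threshold_secs."""
    pauses = []
    for line in log_text.splitlines():
        if "Full GC" not in line:
            continue
        m = re.search(r"([\d.]+) secs\]", line)
        if m and float(m.group(1)) > threshold_secs:
            pauses.append(float(m.group(1)))
    return pauses

print(long_full_gcs(sample_gc_log))
```

A multi-second Full GC that barely reclaims anything (14900M->14700M on a 15 GB heap, as in the sample) is the classic sign the heap is too small for the working set.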
You're just hitting the limits of your node. Dropping some data, adding more
nodes, or adding more heap are pretty much the options you have. Upgrade to
1.4.2 while you're at it.
Setting indices.memory.index_buffer_size so high probably isn't a good idea
unless you know what it does; if you have such a high indexing rate, look at
adding more nodes to spread the load.
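If the setting has to stay in elasticsearch.yml at all, the default is usually the right call; a minimal fragment (the 10% figure is Elasticsearch's documented default, shared across active shards):

```yaml
# elasticsearch.yml — the indexing buffer defaults to 10% of heap;
# oversizing it starves filter caches and field data of memory.
indices.memory.index_buffer_size: 10%
```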
I have removed the indices.memory.index_buffer_size entry and updated
Elasticsearch to 1.4.2.
I have also deleted a lot of indices.
I can't see any difference in Elasticsearch's CPU usage.
PID   USER     PR NI VIRT  RES  SHR  S %CPU  %MEM TIME+    COMMAND
27719 elastics 20  0 152g  16g  4.5g S 395.1 55.1 29:06.51 /usr/bin/java -Xms15g -Xmx15g -Xss256k -Djava.awt.headles
27634 logstash 39 19 3459m 670m 13m  S   4.0  2.2  2:56.79 /usr/bin/java -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -X
Here is the jstat information for both Elasticsearch and Logstash:
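For readers following along, the columns of `jstat -gcutil <pid>` are worth decoding: if the old generation (O) sits near 100% while the full-GC count (FGC) keeps climbing, the heap is too small regardless of CPU tuning. A sketch using an illustrative sample line, not the actual output from this node:

```python
# Interpret one jstat -gcutil line (JDK 7 column layout: S0 S1 E O P YGC YGCT FGC FGCT GCT).
header = "S0     S1     E      O      P      YGC    YGCT    FGC    FGCT      GCT"
sample = "0.00  12.34  56.78  97.10  45.67   1234  56.789    890  3456.789  3513.578"

cols = dict(zip(header.split(), sample.split()))
old_gen_pct = float(cols["O"])   # old generation occupancy, percent
full_gc_count = int(cols["FGC"])  # cumulative full GC count

if old_gen_pct > 90.0 and full_gc_count > 100:
    print("old gen nearly full and full GCs piling up: grow the heap or shed data")
```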