Elasticsearch threads behave differently from each other

Elasticsearch runs on my low-spec system, which has 4 GB of memory and 4
CPU cores. I have a high CPU usage problem with ES, even after disabling
analyzers, reducing the thread pool sizes, and so on.

While analyzing the situation I captured the thread list for the Elasticsearch
process and saw that there are hundreds of threads (which is expected given the
config), but only some of them are running, and a single thread accounts for
most of the CPU time.

Here is the top output, one row per thread:

top - 09:51:44 up 1 day, 1:46, 2 users, load average: 4.94, 5.35, 5.29
Tasks: 684 total, 2 running, 682 sleeping, 0 stopped, 0 zombie
Cpu(s): 7.2%us, 1.1%sy, 0.8%ni, 83.7%id, 6.7%wa, 0.1%hi, 0.4%si, 0.0%st
Mem: 4043340k total, 3466748k used, 576592k free, 30272k buffers
Swap: 4192960k total, 410704k used, 3782256k free, 465868k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
25134 root      20   0 1086m 978m  14m S  0.0 24.8   0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25136 root 20 0 1086m 978m 14m S 0.0 24.8 0:03.34 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25137 root 20 0 1086m 978m 14m S 0.0 24.8 6:43.82 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25138 root 20 0 1086m 978m 14m S 0.0 24.8 6:43.17 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25139 root 20 0 1086m 978m 14m S 0.0 24.8 6:43.05 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25140 root 20 0 1086m 978m 14m S 0.0 24.8 6:40.66 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25141 root 20 0 1086m 978m 14m S 15.3 24.8 204:53.20 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25143 root 20 0 1086m 978m 14m S 0.0 24.8 10:47.46 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25144 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.37 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25145 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.04 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25146 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.20 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25147 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25148 root 20 0 1086m 978m 14m S 0.0 24.8 0:19.46 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25149 root 20 0 1086m 978m 14m S 0.0 24.8 0:24.54 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25150 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25151 root 20 0 1086m 978m 14m S 0.0 24.8 0:16.63 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25156 root 20 0 1086m 978m 14m S 0.0 24.8 0:03.07 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25159 root 20 0 1086m 978m 14m S 0.3 24.8 3:46.78 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25201 root 20 0 1086m 978m 14m S 0.0 24.8 0:01.07 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25202 root 20 0 1086m 978m 14m S 0.0 24.8 0:01.16 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25205 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25206 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.54 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25207 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.54 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25208 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.52 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25209 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.56 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25210 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.54 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25211 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.55 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25212 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.53 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25213 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.55 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25214 root 20 0 1086m 978m 14m S 0.0 24.8 0:02.74 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25215 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.55 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25216 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.54 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25217 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.55 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25218 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.54 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25219 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.55 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25220 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.54 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25221 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.52 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25222 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.55 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25223 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25233 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25241 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25245 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25249 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25252 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25259 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.00 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25264 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.24 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25266 root 20 0 1086m 978m 14m S 0.7 24.8 0:40.80 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25269 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.44 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25276 root 20 0 1086m 978m 14m S 0.0 24.8 0:37.28 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25278 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.80 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25280 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.04 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25318 root 20 0 1086m 978m 14m S 0.0 24.8 0:15.87 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25329 root 20 0 1086m 978m 14m S 0.0 24.8 0:09.73 /usr/bin/java
-Xms808m -Xmx808m -Xss256k
25333 root 20 0 1086m 978m 14m S 0.0 24.8 0:00.01 /usr/bin/java
-Xms808m -Xmx808m -Xss256k

As can be seen in the output above, the thread with ID 25141 has accumulated far
more CPU time than the others, and some of them have not been used even once.

Why does this occur, and what should I do to prevent the high CPU usage?
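
For reference, a rough way to map the busy native thread to a JVM thread name
(a sketch, assuming a JDK with jstack on the PATH, and that 25134 is the main
java process ID from the listing above):

# Per-thread CPU usage for the Elasticsearch JVM (thread view of top).
top -H -p 25134

# jstack reports native thread IDs in hex ("nid"), so convert the busy LWP first.
printf 'nid=0x%x\n' 25141        # 25141 -> nid=0x6235

# Dump all JVM stacks and inspect the thread with that nid.
jstack 25134 | grep -A 20 'nid=0x6235'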

PS: I also asked this on
stackoverflow.com: http://stackoverflow.com/questions/26032515/elasticsearch-threads-behaves-different-from-each-other


Can you post your settings from config/elasticsearch.yml?

Jörg


##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at http://elasticsearch.org/guide.
#
# The installation procedure is covered at
# http://elasticsearch.org/guide/reference/setup/installation.html.
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please do ask on the
# mailing list or IRC channel [http://elasticsearch.org/community].
#
# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
# node.rack: ${RACK_ENV_VAR}
#
# See http://elasticsearch.org/guide/reference/setup/configuration.html
# for information on supported formats and syntax for the configuration file.

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: test-cluster

#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: "test"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
# node.master: true
#
# Allow this node to store data (enabled by default):
# node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
# node.master: false
# node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
# node.master: true
# node.data: false
#
# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
# node.master: false
# node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_cluster/nodes] or GUI tools
# such as http://github.com/lukas-vlcek/bigdesk and
# http://mobz.github.com/elasticsearch-head to inspect the cluster state.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
# node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation location,
# to disable it, set the following:
# node.max_local_storage_nodes: 1

#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See http://elasticsearch.org/guide/reference/index-modules/ and
# http://elasticsearch.org/guide/reference/api/admin-indices-create-index.html
# for more information.

# Analyzer for case-insensitive search
index:
  analysis:
    analyzer:
      string_lowercase:
        tokenizer: keyword
        filter: lowercase
  mapper:
    dynamic: false

# Set the number of shards (splits) of an index (5 by default):
#index.number_of_shards: 3

# Set the number of replicas (additional copies) of an index (1 by default):
index.number_of_replicas: 1

# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
# index.number_of_shards: 1
# index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more shards enhances the indexing performance and allows to
#    distribute a big index across machines.
# 2. Having more replicas enhances the search performance and improves the
#    cluster availability.
#
# The "number_of_shards" is a one-time setting for an index.
#
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

# Use the Index Status API (http://localhost:9200/A/_status) to inspect
# the index status.

max_open_files: false

#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
# path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
path.data: /var/lib/elasticsearch/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
# path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
path.work: /tmp/elastic/

# Path to log files:
path.logs: /var/log/elasticsearch

# Path to where plugins are installed:
# path.plugins: /path/to/plugins

#################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
# plugin.mandatory: mapper-attachments,lang-groovy

################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it never swaps.
#
# Set this property to true to lock the memory:
bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.

############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#network.bind_host: 192.168.0.88

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#network.publish_host: 192.168.0.88

# Set both 'bind_host' and 'publish_host':
network.host: 127.0.0.1

# Set a custom port for the node to node communication (9300 by default):
#transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
http.port: 9200

# Set a custom allowed content length:
http.max_content_length: 100mb

# Disable HTTP completely:
# http.enabled: false

################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.
#
# There are several types of gateway implementations. For more information,
# see http://elasticsearch.org/guide/reference/modules/gateway.

# The default gateway type is the "local" gateway (recommended):
gateway.type: local

# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using
# shared gateway).

# Allow recovery process after N nodes in a cluster are up:
gateway.recover_after_nodes: 2

# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
gateway.expected_nodes: 1

############################# Recovery Throttling #############################

# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
cluster.routing.allocation.node_initial_primaries_recoveries: 1
#
# 2. During adding/removing nodes, rebalancing, etc
cluster.routing.allocation.node_concurrent_recoveries: 1

# Set to throttle throughput when recovering (eg. 100mb, by default unlimited):
indices.recovery.max_size_per_sec: 0

# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
indices.recovery.concurrent_streams: 5

################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. Set this option to a higher value (2-4)
# for large clusters (>3 nodes):
#discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
discovery.zen.ping.timeout: 3s

# See http://elasticsearch.org/guide/reference/modules/discovery/zen.html
# for more information.

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
discovery.zen.ping.unicast.hosts: ["localhost"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#
# See http://elasticsearch.org/guide/reference/modules/discovery/ec2.html
# for more information.
#
# See http://elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html
# for a step-by-step tutorial.

################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
#index.indexing.slowlog.threshold.index.trace: 500ms

################################## GC Logging ################################

#monitor.jvm.gc.ParNew.warn: 1000ms
#monitor.jvm.gc.ParNew.info: 700ms
#monitor.jvm.gc.ParNew.debug: 400ms

#monitor.jvm.gc.ConcurrentMarkSweep.warn: 10s
#monitor.jvm.gc.ConcurrentMarkSweep.info: 5s
#monitor.jvm.gc.ConcurrentMarkSweep.debug: 2s

#index.translog.flush_threshold_ops: 10000
#index.warmer.enabled: false
#ignore_conflicts: true
#index.mapping.ignore_malformed

indices.memory.index_buffer_size: 80%
index.store.compress.stored: true
index.store.fs.lock: none

threadpool.search.type: fixed
threadpool.search.size: 600
threadpool.search.queue_size: 10000

threadpool.bulk.type: fixed
threadpool.bulk.size: 600
threadpool.bulk.queue_size: 10000

threadpool.index.type: fixed
threadpool.index.size: 100
threadpool.index.queue_size: 5000

index.cache.field.type: soft
index.cache.field.max_size: 50000
index.cache.field.expire: 24h

This is my elasticsearch.yml configuration file.
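
A more direct view of what the busy thread is actually doing inside
Elasticsearch is the hot threads API; a minimal sketch, assuming the node is
reachable on localhost:9200:

# Hottest threads per node: thread name, CPU share, and a short stack trace.
curl -XGET 'http://localhost:9200/_nodes/hot_threads'

# Optionally widen the sample: 5 threads, 1 second sampling interval.
curl -XGET 'http://localhost:9200/_nodes/hot_threads?threads=5&interval=1000ms'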


Do not change the thread pools to "fixed" and do not use such high numbers
like 100, 600, 5000, 50000... This will sooner or later congest your machine.
The long list of threads in the OS is just one (harmless) symptom of a
misconfiguration. Use the default settings.

Do not use 80% buffer for index. This hurts search performance and cache
resources. Use the default setting.

Do not use cache field type "soft". This hides cache and GC problems and
gives cryptic exceptions plus very bad performance in spikes. Use the
default setting.

I hope the single node is a development machine. For better performance,
use at least 3 nodes on 3 machines.

Jörg

indices.memory.index_buffer_size: 80%
index.store.compress.stored: true
index.store.fs.lock: none

threadpool.search.type: fixed
threadpool.search.size: 600
threadpool.search.queue_size: 10000

threadpool.bulk.type: fixed
threadpool.bulk.size: 600
threadpool.bulk.queue_size: 10000

threadpool.index.type: fixed
threadpool.index.size: 100
threadpool.index.queue_size: 5000

index.cache.field.type: soft
index.cache.field.max_size: 50000
index.cache.field.expire: 24h
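
After removing those lines, the thread pool configuration the node is actually
running with can be checked against the node itself; a small sketch, assuming
an ES 1.x node on localhost:9200:

# Active, queued and rejected counts per thread pool (one line per node).
curl -XGET 'http://localhost:9200/_cat/thread_pool?v'

# Thread pool settings as seen by the node (type, size, queue size).
curl -XGET 'http://localhost:9200/_nodes/thread_pool?pretty'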


Thank you for your answer Jörg. I removed them from the config file, but my
performance problem continues. I have another question about it. My network
stats are as follows:

curl -XGET 'http://localhost:9200/_nodes/stats/network?human&pretty'
{
  "cluster_name" : "test-cluster",
  "nodes" : {
    "XB95yJZhS7WLAPBm9O994Q" : {
      "timestamp" : 1411643924755,
      "name" : "test",
      "transport_address" : "inet[/127.0.0.1:9300]",
      "host" : "test-host",
      "ip" : [ "inet[/127.0.0.1:9300]", "NONE" ],
      "attributes" : {
        "master" : "true"
      },
      "network" : {
        "tcp" : {
          "active_opens" : 794549,
          "passive_opens" : 489890,
          "curr_estab" : 402,
          "in_segs" : 99304476,
          "out_segs" : 100905758,
          "retrans_segs" : 53037,
          "estab_resets" : 10989,
          "attempt_fails" : 420884,
          "in_errs" : 18226,
          "out_rsts" : 439001
        }
      }
    }
  }
}

Do you have any idea why the curr_estab value is so big? I use
elasticsearch.py for insertion, and only 8 connections are active for
indexing data across 2 processes.


What you see are some of the TCP/IP stack counters of your network
interface since the machine was started. They are provided by the OS and read
by Sigar.

They are not related to ES or to ES-specific connections. It's just a
nice-to-have if you do not want to go to the CLI and run OS commands.

Jörg
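
As a rough cross-check (a sketch, assuming netstat is available and
Elasticsearch listens on the default ports): curr_estab counts every
established TCP connection on the host, whereas the connections that actually
terminate at Elasticsearch can be counted separately:

# All established TCP connections on the host (this is what curr_estab reflects).
netstat -tn | grep -c ESTABLISHED

# Established connections involving the ES HTTP (9200) or transport (9300) port only.
netstat -tn | grep ESTABLISHED | grep -cE ':(9200|9300)'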


Thank you for the answers Jörg. I really appreciate your help :-).
