High CPU load in load test at 300 rps

Hi all,
My ES configuration:
4 shards
1 replica
1 node
17,646,067 documents
8 GB index size

ES version:
0.19.11

Java version:
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)

OS/hardware:
CPU vendor: Intel
CPU model: Xeon (2393 MHz)
CPU total logical cores: 16
CPU cache: 12kb
Total mem: 30.9gb (33271316480 b)
Total swap: 3.9gb (4293586944 b)

Load:
300 search requests per second

One of my queries (from the slow log):
[2012-12-05 13:12:15,465][TRACE][index.search.slowlog.query]
[Berzerker] [ptc][1] took[500.8ms], took_millis[500],
search_type[QUERY_THEN_FETCH], total_shards[4],
source[{"from":0,"size":10,"timeout":5000,"query":{"query_string":{"query":"出租车叫车","fields":["address^1.0","category.name^1.0","name^10.0","trade.name^1.0"],"default_operator":"and","allow_leading_wildcard":false}},"filter":{"bool":{"must":{"term":{"location.cityId":"411200"}}}},"explain":false,"fields":["id","address","name"]}],
extra_source[]
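For anyone who wants to replay this query outside the slow log, the `source` field above can be reconstructed as a plain search body. A minimal sketch in Python follows; the index name `ptc` comes from the log line, while the endpoint URL in the comment is an assumption:

```python
import json

# Search body reconstructed from the slow-log "source" field above.
body = {
    "from": 0,
    "size": 10,
    "timeout": 5000,
    "query": {
        "query_string": {
            "query": "出租车叫车",
            "fields": ["address^1.0", "category.name^1.0",
                       "name^10.0", "trade.name^1.0"],
            "default_operator": "and",
            "allow_leading_wildcard": False,
        }
    },
    "filter": {"bool": {"must": {"term": {"location.cityId": "411200"}}}},
    "explain": False,
    "fields": ["id", "address", "name"],
}

# Serialized, this is what would be POSTed to e.g.
# http://localhost:9200/ptc/_search (host and port assumed).
payload = json.dumps(body, ensure_ascii=False)
print(payload)
```

Note that the query itself is a multi-field `query_string` over four boosted fields, combined with a term filter on `location.cityId`, so every request analyzes the Chinese query text against each field.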

hprof CPU samples:
see the attached hprof.cpu.samples.txt

YourKit profile snapshot:
see the attached es3.png; the original snapshot is too large to
attach.

This is not the first time I have run into this problem with ES, and it is a
serious problem for production use.

--

Additional info about the ES configuration:

bootstrap.mlockall: true
index:
  store:
    type: mmapfs
  analysis:
    analyzer:
      edgeNGramAnalyzer:
        type: custom
        tokenizer: standard
        filter: [standard, lowercase, englishSnowball, edgeNGramFilter]
      nGramAnalyzer:
        type: custom
        tokenizer: standard
        filter: [standard, lowercase, englishSnowball, nGramFilter]
      standardAnalyzer:
        type: custom
        tokenizer: standard
        filter: [standard, lowercase, englishSnowball]
      mmsegAnalyzer:
        type: custom
        tokenizer: mmseg_maxword
        filter: [standard, lowercase, englishSnowball]
      complexAnalyzer:
        type: custom
        tokenizer: mmseg_complex
        filter: [standard, lowercase, englishSnowball]
      simpleAnalyzer:
        type: custom
        tokenizer: mmseg_simple
        filter: [standard, lowercase, englishSnowball]
    tokenizer:
      mmseg_maxword:
        type: mmseg
        seg_type: "max_word"
      mmseg_complex:
        type: mmseg
        seg_type: "complex"
      mmseg_simple:
        type: mmseg
        seg_type: "simple"
    filter:
      nGramFilter:
        type: nGram
        min_gram: 1
        max_gram: 64
      edgeNGramFilter:
        type: edgeNGram
        min_gram: 1
        max_gram: 64
        side: front
      englishSnowball:
        type: snowball
        language: English

Here, mmseg is a Chinese analyzer; its homepage is:
https://code.google.com/p/mmseg4j/


CPU load is 1200% at 300 rps.


--

Hello,

Just two questions and one remark:

  • how much memory do you allocate to ES?
  • what OS do you use?
  • I see you have min_gram=1 and max_gram=64. I would assume that creates
    lots of terms and makes your queries slow. Depending on how critical those
    settings are to the functionality of your application, I would look at
    narrowing the interval, especially on the min_gram side.
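To make the term-explosion remark concrete, here is a small back-of-the-envelope sketch (plain Python, not tied to any ES API): for one token of length L, each gram length n between min_gram and max_gram contributes L - n + 1 grams, so a low min_gram multiplies the term count quickly.

```python
def ngram_count(token_len: int, min_gram: int, max_gram: int) -> int:
    """Number of n-grams emitted for a single token: for each gram length n
    between min_gram and max_gram (capped at the token length), there are
    token_len - n + 1 starting positions."""
    return sum(token_len - n + 1
               for n in range(min_gram, min(max_gram, token_len) + 1))

# With min_gram=1 and max_gram=64, a 10-character token yields 55 grams;
# raising min_gram to 3 cuts that to 36.
print(ngram_count(10, 1, 64))  # 55
print(ngram_count(10, 3, 64))  # 36
```

This is only an illustration of why a wide [1, 64] interval inflates the index; the exact impact depends on the real token-length distribution in the data.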

Best regards,
Radu

http://sematext.com/ -- ElasticSearch -- Solr -- Lucene


--

Hi Radu,
I have attached the bigdesk snapshot. I only use mmsegAnalyzer in my
mapping; nGram and edgeNGram are not used now.

If you need any more information, please let me know.


--

Hello,

Apart from the high load, do you get any weird behavior, like ES becoming
unresponsive?

Looking at the screenshots, I see more than 1K search requests per second
(that would be your 300rps sent to each of your 4 shards), with more than
500 fetches, while the transport goes to 80-90MB/s both ways. That's a lot
of load in my book, or maybe I'm missing something.

That said, you might make things better by reducing the number of shards;
that would hurt your indexing speed, though. Also, adding nodes and
replicas (together) should help raise the number of concurrent queries your
cluster can handle.
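For the replica half of that suggestion, the replica count can be changed on a live index through the index settings API. A minimal sketch of the request body in Python; the index name and endpoint in the comment are assumptions, and in practice the call would go through curl or the Java client:

```python
import json

# Body for: PUT http://localhost:9200/ptc/_settings  (index name assumed).
# index.number_of_replicas is dynamically updatable, so raising it lets
# newly added nodes serve search traffic from replica copies.
settings_update = {"index": {"number_of_replicas": 1}}

payload = json.dumps(settings_update)
print(payload)
```

Replicas only pay off together with extra nodes: on a single node the extra copies cannot be allocated.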

And if you don't have a lot of indexing, you might benefit from optimizing
your indices in off-peak intervals:
http://www.elasticsearch.org/guide/reference/api/admin-indices-optimize.html

Best regards,
Radu

http://sematext.com/ -- ElasticSearch -- Solr -- Lucene


--

Thanks, Radu.
I use JMeter to run my load test, and I confirmed that I configured only
300 rps. The load does not go directly to ES but to my Java program, which
uses a NodeBuilder to build an Elasticsearch node (client=true). My program
uses a SearchRequestBuilder to run each search, and each search returns
10 results (a query on multiple fields, without highlighting).

As you suggested in your post, I have already added a node (now 2 nodes),
and the response time has dropped to around 100 ms. However, CPU usage is
still high on each node (now down to around 600% per node).

To my understanding, the CPU is used by Lucene for computation, and
currently I have no idea how to decrease the CPU load other than adding
nodes and replicas.

I still have a question: what is the maximum index size for a shard? I need
this information to set the number of shards. Currently I have 8 GB of index
in total (2 GB per shard); as my product develops, the index size may grow
to 80 GB or 100 GB.

See the attachment for the JMeter configuration.


--

Hello Weiwei,

Good to know that adding nodes and replicas had such a good impact on
performance. At this point I don't have any good ideas on how to improve it
further. Maybe upgrading to Java 1.7.x would help? I think it's worth a
test.

As for maximum shard size, AFAIK it depends on:

  • what your data looks like (how many documents, how many terms)
  • heap size

So again, you have to test to be sure. But a high-level look brings good
news: right now you have 8 GB of index, ES uses ~1 GB of heap, and it goes
to ~2 GB at times. So if you configure it to allocate about half of your
total RAM (which is usually recommended), it should work even with a single
80 GB shard.
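As a rough illustration of the "about half of total RAM" rule (the 30.9 GB figure comes from the node stats earlier in the thread; the exact split is a judgment call, since the rest of the RAM is left to the OS filesystem cache):

```python
def suggested_heap_gb(total_ram_gb: float) -> int:
    """Half of physical RAM, rounded down to whole gigabytes."""
    return int(total_ram_gb // 2)

# 30.9 GB of RAM -> roughly a 15 GB heap for ES.
print(suggested_heap_gb(30.9))  # 15
```

In the ES versions of this era the heap was typically set through the ES_HEAP_SIZE environment variable before starting the node.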

Best regards,
Radu

http://sematext.com/ -- ElasticSearch -- Solr -- Lucene

--
