Put Mapping call -- performance hit?


(Antek Piechnik) #1

Hi,

I'm trying to debug a performance hit we had in our ES cluster today, and I
was wondering whether updating a mapping could have caused it.

I've got a cluster of 5 nodes and 10 indices, two of which are quite large:
one has 300 shards, the other 100, both with 2 replicas everywhere.
Together those two indices hold about 40M documents.
I use custom routing for all calls, and performance has been great so far.

Today I essentially added a new multi_field with a new analyzer for both
indices using the Put Mapping API:

curl -XPUT 'http://localhost:9200//document/_mapping' -d '
{
  "document": {
    "properties": {
      "name": {
        "type": "multi_field",
        "fields": {
          "name": { "type": "string", "index": "not_analyzed" },
          "sortable": { "type": "string", "analyzer": "sortable" }
        }
      }
    }
  }
}
'

At first I couldn't spot anything happening, but after some time, as we got
more and more requests of different kinds, Elasticsearch started responding
extremely slowly. We've never experienced a slowdown like this. It lasted
for about 20 minutes, after which it stopped and everything went back to
normal.

Do you have any thoughts? Could updating the mapping cause a performance
hit like this?

Regards,
Antek

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


(Adrien Grand) #2

Hi,

My guess would be that the change you made to your mappings had an impact
on field data. Maybe it made field data grow close enough to the JVM heap
capacity that the garbage collector had to run constantly in order to find
space for newly allocated objects. It would be interesting to try to
correlate your problem with GC activity or heap size. If garbage collection
is in fact the problem, the solution is either to increase the heap size on
your nodes or to add more machines so that each server handles fewer shards.
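One way to check that correlation is to poll the node stats API (e.g. `GET /_nodes/stats/jvm,indices`) and look at heap usage, fielddata size, and cumulative GC time per node. The sketch below is only an illustration, not an official tool: the field names follow the shape of the node stats response from Elasticsearch of that era, and the hostname and thresholds are assumptions you'd adapt to your cluster.

```python
# Hedged sketch: summarize heap, fielddata, and GC numbers per node from an
# Elasticsearch node stats response. Field names assume the node stats JSON
# format ("jvm.mem", "indices.fielddata", "jvm.gc.collectors"); verify them
# against your cluster's actual response before relying on this.
import json
import urllib.request


def summarize_node_stats(stats):
    """Return a list of (node_name, heap_used_pct, fielddata_bytes, gc_ms)."""
    rows = []
    for node in stats["nodes"].values():
        mem = node["jvm"]["mem"]
        heap_pct = 100.0 * mem["heap_used_in_bytes"] / mem["heap_max_in_bytes"]
        fielddata = node["indices"]["fielddata"]["memory_size_in_bytes"]
        # Total collection time across all collectors (young + old gen).
        gc_ms = sum(c["collection_time_in_millis"]
                    for c in node["jvm"]["gc"]["collectors"].values())
        rows.append((node["name"], heap_pct, fielddata, gc_ms))
    return rows


if __name__ == "__main__":
    # Against a live cluster (hostname is an assumption):
    # stats = json.load(urllib.request.urlopen(
    #     "http://localhost:9200/_nodes/stats/jvm,indices"))
    # for name, heap, fd, gc in summarize_node_stats(stats):
    #     print(f"{name}: heap {heap:.0f}%, fielddata {fd} B, GC {gc} ms")
    pass
```

Sampling this before and after a mapping change (and during the slowdown) would show whether fielddata growth and GC time line up with the slow responses.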

On Fri, Nov 8, 2013 at 8:08 PM, Antek Piechnik antek.piechnik@gmail.com wrote:

[quoted message trimmed]

--
Adrien Grand

