Elasticsearch index creation / deletion incredibly slow

Hi all, I have an ES cluster hosted on Amazon with ~7000 indexes (most
of which are sparsely populated, < 100 docs each). Up until today, creating or
deleting an index in the cluster took ~3 seconds. All of a sudden, creating
or deleting an index is taking ~30 seconds. We have looked through all the
logs and can't find anything. The cluster state is ~5.5 MB, which doesn't
seem big enough to be prohibitive.
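
(For context, a figure like that can be checked by pulling the full cluster
state over the REST API and measuring the response size; a rough Python sketch
of that check is below, with the host and port as placeholders for our real
endpoint.)

import urllib.request

# Placeholder endpoint; substitute the address of any node in the cluster.
BASE = "http://localhost:9200"

# Pull the full cluster state and report its serialized size in MB.
state = urllib.request.urlopen(BASE + "/_cluster/state").read()
print("cluster state is %.1f MB" % (len(state) / (1024.0 * 1024.0)))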

Any thoughts on why this happened and how I can debug? Any help would be
greatly appreciated.

Thanks,
Swaraj


On Dec 17, 2014 11:20 PM, "Swaraj Banerjee" swarajban@gmail.com wrote:

Hi all, I have an ES cluster hosted on Amazon with ~7000 indexes (most
of which are sparsely populated, < 100 docs each). Up until today, creating or
deleting an index in the cluster took ~3 seconds. All of a sudden, creating
or deleting an index is taking ~30 seconds. We have looked through all the
logs and can't find anything. The cluster state is ~5.5 MB, which doesn't
seem big enough to be prohibitive.

Standard warning that that is a lot of indices. If possible, try to squash
them together somehow.

That aside, I have about the same number of indices and feel like I get better
performance on creation than you do. Not sure why, though. I feel like my
mapping updates are sluggish, but I don't know if that is caused by the number
of indexes.

What part of index creation is taking 30 seconds? The REST call itself, or the
actual shard assignment?
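
One way to tell them apart, as a rough sketch (the host, port, and index name
below are placeholders), is to time the create-index call and the subsequent
wait for the shards separately:

import time
import urllib.request

BASE = "http://localhost:9200"   # placeholder; point at any node in the cluster
INDEX = "timing-test-1"          # throwaway index name, just for the test

# 1. Time the create-index REST call itself.
start = time.time()
req = urllib.request.Request(BASE + "/" + INDEX, method="PUT")
urllib.request.urlopen(req).read()
print("create call: %.1fs" % (time.time() - start))

# 2. Time how long the new index's shards take to become available.
start = time.time()
urllib.request.urlopen(
    BASE + "/_cluster/health/" + INDEX + "?wait_for_status=yellow").read()
print("shard allocation wait: %.1fs" % (time.time() - start))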

Any thoughts on why this happened and how I can debug? Any help would be
greatly appreciated.

Are you getting swamped by other cluster admin actions? You should be able
to get information about that from the cat API. Maybe your cluster
state has exploded in size? Any change in CPU usage on the currently elected
master? Do any hot threads look like admin actions? Maybe hit it with jstack
and look for admin actions there; they might not show up as hot threads, but
maybe there are lots of them running at once?
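
Concretely, something along these lines would show the pending cluster-level
tasks and the hot threads (the host and port are placeholders):

import urllib.request

BASE = "http://localhost:9200"  # placeholder; point at any node in the cluster

# Cluster-state update tasks currently queued on the elected master.
print(urllib.request.urlopen(BASE + "/_cat/pending_tasks?v").read().decode())

# Hot threads across the nodes; look for management/admin-looking stacks.
print(urllib.request.urlopen(BASE + "/_nodes/hot_threads").read().decode())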

Nik


Thanks for the reply,

To answer your first question: how can I tell which part of the index
creation is taking so long? When I make the REST call, it takes ~30 seconds
before it completes. As soon as I receive an HTTP response, however, the
new index looks to be allocated and I can add documents to it.

To answer your second set of questions: I don't believe we are being
swamped by other cluster admin actions. The only admin actions we are
taking are index creation / deletion, and those happen infrequently (a few
times per day). How can I get this information via the cat API? I checked the
cluster state, and I don't believe it has gotten significantly larger. What
exactly is a hot thread? A thread that is pinning a CPU at 100%? How can I
tell if a given thread is due to an admin action?

~ Swaraj

