Sometimes, when I query across many indices, my cluster stops indexing and
pegs one core at 100% CPU.
The logs fill up with ConcurrentMarkSweep messages, and I also found this
trace (http://sprunge.us/CYhF). logstash-2013.03.02 is an index that was
created on 2nd March, so it looks like a query run against that index
halted indexing across the cluster.
Is there anything I can do to avoid this? I don't want a single search
query to bring down indexing in the cluster.
OK, I truly understand that I need more RAM, but isn't there a way to limit
the number of threads or the amount of memory used for searching?
I'd prefer a timeout over ES reaching its max heap. This is the graph for
one of the nodes: http://i.imgur.com/IYly5Hq.png. Is the node too stressed?
(The machine has 48GB of RAM and I've set the max heap to 24GB.)
It stays at 17-20GB while I'm indexing (with minor searching).
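For what it's worth, the search thread pool can be bounded in elasticsearch.yml. This is only a sketch, assuming the `fixed` thread pool type from the 0.20/0.90-era settings; the sizes here are illustrative, not a recommendation:

```yaml
# elasticsearch.yml -- illustrative values, assuming 0.20/0.90-era settings
threadpool.search.type: fixed        # bounded pool instead of an unbounded one
threadpool.search.size: 20           # max concurrent search threads
threadpool.search.queue_size: 100    # searches beyond this backlog get rejected
```

There is also a per-request `timeout` (e.g. `"timeout": "10s"` in the search body), but it is best-effort: it bounds how long each shard collects hits, and does not hard-kill a query that is already blowing up the heap, so it would not by itself prevent an OOM like the one above.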
On Thu, Mar 14, 2013 at 4:23 PM, Alexander Reelsen alr@spinscale.de wrote:
Hey,
your log shows an out-of-memory exception. You might solve this problem by
simply adding more memory to your Elasticsearch Java virtual machine.
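As a minimal sketch of that advice, assuming the stock bin/elasticsearch wrapper script of that era, which reads ES_HEAP_SIZE and passes matching -Xms/-Xmx flags to the JVM:

```shell
# Illustrative only: pin min and max heap to the same value so the JVM
# does not resize the heap at runtime. Keeping it at or below half of
# the machine's 48GB leaves RAM for the OS filesystem cache that Lucene
# relies on for fast searches.
export ES_HEAP_SIZE=24g
echo "$ES_HEAP_SIZE"
```
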