Analyzing why an ES node has lots of I/O waits

My cluster shows a lot of I/O wait (about 50%).

I do a lot of indexing and reindexing.
I'm not terribly bothered with strict consistency, more with "eventual" consistency: I can go as long as 5 minutes before data needs to be consistent.
I suspect the re-indexing in Lucene is the cause of much of the I/O. I thought of maybe upping the refresh_interval or the index.translog options - is that the right way to go?

My main problem is I do not know how to find out what my settings are. The page at http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/ lists a lot of options, none of which appear when I use:

curl -XGET 'http://localhost:9200/my_index/_settings'

I only get the number of shards and replicas. The elasticsearch.yml file does not say what the defaults are. How would I know my changes took place, and what the current values are?
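Adding ?pretty at least formats the response, but I still only see the values that were explicitly set (my_index is just my index name here):

curl -XGET 'http://localhost:9200/my_index/_settings?pretty'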

Help much appreciated, as I can't find documentation for this.

Hello,

On Wed, Jul 3, 2013 at 4:51 PM, eranid <eranid@gmail.com> wrote:

> My cluster shows a lot of I/O wait (about 50%).
>
> I do a lot of indexing and reindexing.
> I'm not terribly bothered with strict consistency, more with "eventual"
> consistency: I can go as long as 5 minutes before data needs to be
> consistent.
> I suspect the re-indexing in Lucene is the cause of much of the I/O. I
> thought of maybe upping the refresh_interval or the index.translog
> options - is that the right way to go?

Should be. Another way to go might be to increase indices.memory.index_buffer_size.

Another option is to tweak the merge policy (usually there's a trade-off between search performance and CPU+I/O load here).

To get a clearer idea of where your ES cluster is busy, I'd say you should monitor it with something like SPM.
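As a sketch of the refresh_interval route (my_index and the 30s value are just examples; it's a dynamic per-index setting):

curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index" : { "refresh_interval" : "30s" }
}'

Since you can tolerate up to 5 minutes of staleness, anything up to "5m" should be safe, and "-1" disables periodic refreshes entirely during bulk reindexing.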

> My main problem is I do not know how to find out what my settings are.
> The page at
> http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/
> lists a lot of options, none of which appear when I use:
>
> curl -XGET 'http://localhost:9200/my_index/_settings'
>
> I only get the number of shards and replicas. The elasticsearch.yml file
> does not say what the defaults are. How would I know my changes took
> place, and what the current values are?

You should see there any settings you changed through the Update Settings API. If settings are not there, they are either in the configuration file or ES is using the defaults.

To find the defaults, you need to look in the documentation for each option. For example, indices.memory.index_buffer_size defaults to 10% (of the node's heap), refresh_interval defaults to 1s, and the index.translog options are described in the translog documentation.
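For example, a sketch that raises the translog thresholds so flushes happen less often (the index name and the values are illustrative, not recommendations):

curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index.translog.flush_threshold_ops" : 50000,
  "index.translog.flush_threshold_size" : "500mb"
}'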

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene

Can you clarify how you got this information? Is it only a single node
with iowait? What shows 50%? Is it constantly high, or only peaks? Do you
use Linux? iostat? How are your disks organized?
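For example, on Linux with the sysstat package installed:

iostat -x 1
# watch %util and await per device: a disk sitting near 100% %util
# with high await is saturated (or failing)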

If high iowait does not disappear and is constantly high, it may
indicate a drive failure in a RAID.

Jörg

On 03.07.13 15:51, eranid wrote:

> My cluster shows a lot of I/O wait (about 50%).

Wow, thanks for the very detailed answer.

Much obliged!

@Jorg

Hi,

It is between 30-50% all the time (monitored by New Relic). Single node on an m3.xl machine on AWS.

It resides on an attached, non-root, EBS drive (non-EBS optimized).

Eran.

If you're on an EBS drive, there is not much you can do; noisy neighbors
may saturate shared resources like network links.

Jörg

On 03.07.13 19:09, eranid wrote:

> @Jorg
>
> Hi,
>
> It is between 30-50% all the time (monitored by New Relic). Single node
> on an m3.xl machine on AWS.
>
> It resides on an attached, non-root, EBS drive (non-EBS optimized).
>
> Eran.

Hey, not sure how New Relic gets that data itself...
What do you see with iostat -x 1?

EBS may or may not be enough, depending on your case. Are you using PIOPS
(provisioned IOPS) or standard volumes?
We've had great improvements by tweaking how often commits to the index
happen (anything from 5 to 120 seconds, depending on the use case), and
good old RAID. Even RAID-0 on ephemeral disks is faster (and cheaper) than
a single EBS volume. For higher loads, the sky is the limit (50+ raided
EBS volumes can push a heck of a lot of IOPS).
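As a rough sketch (the device names are placeholders for your instance's ephemeral disks, and the mount point should be wherever your path.data lives):

# stripe two ephemeral disks into a single RAID-0 device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc
mkfs.ext4 /dev/md0                       # format the array
mount /dev/md0 /var/lib/elasticsearch    # mount at the ES data path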
On 04/07/2013 3:09 AM, "eranid" <eranid@gmail.com> wrote:

> @Jorg
>
> Hi,
>
> It is between 30-50% all the time (monitored by New Relic). Single node
> on an m3.xl machine on AWS.
>
> It resides on an attached, non-root, EBS drive (non-EBS optimized).
>
> Eran.
