Bulk throughput issues

Hi all, I'm currently working on a project where Elasticsearch is our
backend, but we have been running into issues with insert rates. Some
background: our cluster is four physical boxes, each with 32 CPU cores
and 252 GB of RAM. Each box runs a data node, a master node, and a search
node. On two other machines with the same hardware specs we have a
Java app running that pulls our data from Kafka, does some adjusting of the
data, and then inserts it into Elasticsearch.

In the Java app we are using the "node" style client along with the
BulkProcessor class to handle our inserts. Everything is running on
Elasticsearch 0.90.5 with Java 1.7.0_45. The issue we are running into is
that we can't seem to get over about 7k inserts per second per Java app
(so 14k total, since we have two instances of our Java app running). At
around 6.5k-7k, the Elasticsearch inserts start to lag behind how
fast we're pulling the data from Kafka. Our initial thought was that the
"data adjusting" stage of our app was causing the latency, but we've been
able to rule that out by adding some metrics around that part of the app.
Everything is fine until we reach the point where we want to do inserts. My
question is: are there any other users out there pushing ~10k inserts per
second (that is our goal) using the Java API? If so, would you mind sharing
some of the settings you are using? We've tried adjusting the BulkProcessor
concurrent request count and bulk size, but nothing seems to really improve
it. One thing I've noticed with our monitoring is that sometimes our
Elasticsearch client seems to get backed up. We'll see inserts
chugging along at 6k, then they start dropping, and after a few
seconds they come back up. No GCs or anything happen during this
time, so I'm not sure what would be causing that.

The health of the boxes while we're running looks fine (both the ES
nodes and the boxes where our app lives), and inside the JVM everything
seems to be OK as well (no huge GCs or anything). I've searched this list
and found people talking about doing 10k inserts per second, so we know
it's totally possible; we just can't seem to find the right setup to get to
that number. Any suggestions or tips would be greatly appreciated!


Yes, I can push >10k docs per second (= 10 MB/sec) with the Java bulk API,
on a single node.

How many docs can your app generate if you disable the bulk indexing and
run a "dry feed"?

7k per second also depends on doc size. The larger the docs, the slower the
rate. Have you checked how much network bandwidth is being used, between
client and servers, and between servers? Maybe it is saturated.

Here is a small checklist for bulk:

  • most important: the refresh rate; disable it during the bulk

  • the shard count should be reasonable (for four nodes, maybe four or eight
    shards to distribute the load evenly)

  • the replica level should be 0 during the bulk

  • the Java client should connect to all nodes (I prefer TransportClient)

  • if you have predefined mappings, create the index and the mappings before
    the bulk starts; it saves the overhead of dynamic mapping

  • after the bulk, re-enable the refresh rate and increase the replica level
    (and maybe send an optimize request); a sketch of this toggle follows the
    list
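
A minimal sketch of the connect-and-toggle flow with the 0.90-era Java API. The
cluster name, node addresses, and the index name ("logs") are placeholders, and
the exact builder calls should be verified against your client version:

import org.elasticsearch.action.admin.indices.settings.UpdateSettingsRequest;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// Connect the client to all nodes (addresses are placeholders).
TransportClient client = new TransportClient(ImmutableSettings.settingsBuilder()
        .put("cluster.name", "mycluster").build())
        .addTransportAddress(new InetSocketTransportAddress("es-node-1", 9300))
        .addTransportAddress(new InetSocketTransportAddress("es-node-2", 9300));

// Before the bulk: disable refresh and drop replicas to 0.
client.admin().indices().updateSettings(new UpdateSettingsRequest("logs")
        .settings(ImmutableSettings.settingsBuilder()
                .put("refresh_interval", -1)
                .put("number_of_replicas", 0))).actionGet();

// ... run the bulk load ...

// After the bulk: re-enable refresh, restore replicas, optionally optimize.
client.admin().indices().updateSettings(new UpdateSettingsRequest("logs")
        .settings(ImmutableSettings.settingsBuilder()
                .put("refresh_interval", "1s")
                .put("number_of_replicas", 1))).actionGet();
client.admin().indices().prepareOptimize("logs").execute().actionGet();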

Some more tunables exist for advanced usage. I'm quite sure you do not need
to modify the advanced settings, since with 32 cores ES selects reasonable
thread pool sizes.

I recommend moving from 0.90.5 to 0.90.7 / 0.90.8.

Jörg


I forgot the disk subsystem. Watch the disks for I/O load and delays. If
you have spindle disks, they are the slowest part in the bulk chain and
should be checked first when evaluating performance. Filesystems can also
be tuned for better throughput.

Jörg


Thank you Jörg for the suggestions. I will continue to test today but here
is what I can answer for now:

  • The network traffic only hits 2-2.5 MB/s, so I think we're good there
  • Right now we are using the default of 5 shards with 1 replica
  • We do have predefined mappings for the index
  • iostat didn't show any issues during our testing but I can verify again
    today.

I see a lot of people talking about the refresh rate. I guess the issue I
have with that is that this system is supposed to be as close to real time
as possible. We are taking log information from Kafka and sending it to
Elasticsearch, and we want users to be able to search the data as quickly as
possible. I can play around with raising the refresh interval, but I don't
think turning it off is an option right now.
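
(A hedged sketch of raising the per-index refresh interval via the Java API
rather than disabling it; "logs" is a placeholder index name:)

import org.elasticsearch.action.admin.indices.settings.UpdateSettingsRequest;
import org.elasticsearch.common.settings.ImmutableSettings;

// Trade a little search freshness for indexing throughput.
client.admin().indices().updateSettings(new UpdateSettingsRequest("logs")
        .settings(ImmutableSettings.settingsBuilder()
                .put("refresh_interval", "10s"))).actionGet();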

We are using the BulkProcessor simply to keep up with the speed at which our
incoming messages are generated, not because we are loading large
amounts of data once a day or something. Is this the wrong thing to be
doing? Again, we're trying to get as close to real time as possible from
the time a log message is created to the point when it hits Elasticsearch
and can be searched.


Of course you can use BulkProcessor on the fly for a continuous stream of
messages.

If you spend resources on querying, it will affect indexing, so bulk
indexing speed suffers. Note that by default, 50% of a node's heap memory is
dedicated to search, but only 10% to bulk. This is reasonable; it shows that
ES puts indexing in the background, for better comfort and response time
when searching.

Jörg


Hi, this is an interesting discussion; I have a few extra questions if that
is okay.

// 7k per sec depend also on doc size. The larger the docs, the slower.
Is there a way to quickly determine the size of a doc?

// Note that by default, 50% heap memory of a node is dedicated for search,
but 10% for bulk.
Assuming 16 GB is allocated as heap to the Elasticsearch JVM instance, where
can I determine how much is used for what?

Thank you.

Jason


Sorry for the delay; I had to take some unplanned leave and wasn't able
to get to this while I was out. With some more testing I was able to get
~10k documents a second, but I had to make some code changes.

1: I changed to the transport client in our Java code.
2: It seemed as if one client wasn't able to keep up, so what I did in the
code was actually spawn a couple of transport clients, each with its own
BulkProcessor with concurrency set to 32. The part of our code that reads
the messages from Kafka then submits them at random to these various
transport clients. Is anyone else having to do this, or should a single
transport client be able to keep up?

I wasn't able to get much more out of it because the CPU usage started to
get really high, but I don't think that's an Elasticsearch thing; I think
it's because we are doing so many regex operations.

While hitting around ~10k a second, the network output was only about 5 MB/s,
so we don't seem to be blocked there.

I did determine that we are basically able to pull from Kafka as fast as
the messages come in when NOT doing inserts into Elasticsearch, so I don't
think that is the problem.

I plan on doing some testing today where we have multiple consumers running
to see if we can hit our ~40k inserts per second goal (4 consumers doing
~10k each).


Jason, you have to measure the docs at the indexing API with your client
code.

You can use the _cluster/stats or /_cluster/stats/nodes/{nodeId} endpoint
to inspect the node caches (store, fielddata, filter_cache, id_cache,
completion). The indexing buffer and the translog buffer state cannot be
inspected, but you can increase the log level to DEBUG to follow how ES
dynamically resizes these buffers.
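
For the first question, a minimal sketch of measuring doc sizes at the
indexing API in client code ("logs", "event", json, and totalBytes are
placeholders; this assumes IndexRequest#source() exposes the serialized
source, so verify against your client version):

import java.util.concurrent.atomic.AtomicLong;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

AtomicLong totalBytes = new AtomicLong();

IndexRequest request = Requests.indexRequest("logs").type("event").source(json);
// Track the serialized source size to compute an average doc size later.
totalBytes.addAndGet(request.source().length());
bulkProcessor.add(request);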

Jörg


I forgot to mention that the reason I went with multiple clients is that
when doing some monitoring I was seeing a lot of blocked threads in
org.elasticsearch.action.bulk.BulkProcessor.internalAdd(). Looking at the
code, this method appears to be synchronized, so my guess is we were just
sending too much data to it at once, and I tried to break it up.


There is no need for more than one client instance per JVM. You can
increase the bulk request concurrency in the BulkProcessor with
"setConcurrentRequests" to avoid blocking threads, until you reach the
sweet spot where the client's submission rate matches the indexing capacity
of the cluster.

This is a matter of dynamic balance, which differs from setup to setup. The
default request concurrency is 1. For a higher value, you have to provide
enough heap resources, and maybe run your doc construction in multiple
threads to exploit the advantage.

As a rule of thumb, use 4 * available cores for the concurrency, and
~1-10 MB for the bulk size; a sketch follows.

For example, I often operate with a bulk size of 1000 docs and a
concurrency level of 32.
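
A minimal sketch of that rule of thumb (the client and listener are assumed
to exist already):

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

int cores = Runtime.getRuntime().availableProcessors();
BulkProcessor bulk = BulkProcessor.builder(client, listener)
        .setBulkActions(1000)                                // ~1000 docs per bulk request
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB))  // cap each request at ~1-10 MB
        .setConcurrentRequests(4 * cores)                    // rule of thumb: 4 * available cores
        .build();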

Jörg


Hmm, OK, thank you for that info, Jörg. I had previously been using one
client with 64 concurrent requests, as the hardware we are running on has 32
cores. It sounds like I might need to try bumping that number up to see
what happens.


Hi tdjb,

I am also working on a similar task; my requirement is to store around 70
million documents on a single node. I have an 8 GB, 8-core machine. Please
guide me on how I should approach this.

Could you also share some code samples showing how to use BulkProcessor?

Regards

Geet


Jörg, I went back to square one with some of the code based on your
suggestions, and we now seem to be inserting into ES at the same rate we are
pulling from Kafka (which is what we wanted). I am using one transport
client with ~100 concurrent requests. That alone was not enough though; the
biggest changes that seemed to get us what we wanted were changing the
refresh interval to 10s and the shard count from 5 to 16. Now we are
doing about 17k inserts per second on each consumer instance.

The only issue we've seen now is that after some time Elasticsearch itself
becomes a bit unstable. It appears to be related to merging, as the logs
indicate really long merge times (multiple minutes) right around the time
we start seeing issues. My guess is that is a topic for another thread :)

Geet, we are basically just using the BulkProcessor object as-is with a
wrapper around it so all of our worker threads can use the same
BulkProcessor:

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.common.unit.TimeValue;

BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {

    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        logger.info("Going to execute new bulk composed of {} actions",
                request.numberOfActions());
        getInserts.mark();
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request,
                          BulkResponse response) {
        logger.info("Executed bulk composed of {} actions",
                request.numberOfActions());
        // Mark our metrics meter with the number of items in the response.
        getInserts.mark(response.getItems().length);
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request,
                          Throwable failure) {
        logger.warn("Error executing bulk", failure);
    }
})
        .setBulkActions(maxBulkCount)        // flush after this many actions
        .setConcurrentRequests(bulkThreads)  // concurrent in-flight bulk requests
        .setFlushInterval(TimeValue.timeValueMillis(maxBulkTimeoutMs))
        .build();

And then use the add() method to add your documents.
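
For example (a hedged sketch; the index name, type, and the json source
variable are placeholders):

import org.elasticsearch.client.Requests;

// Submit one document; the BulkProcessor batches and flushes in the background.
bulkProcessor.add(Requests.indexRequest("logs").type("event").source(json));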


Thanks tdjb,

I followed your code and it worked for me. I tried to set the
refresh_interval in the transport client settings, but it seems it's not
working. Please suggest how I can set the refresh interval using the
transport client.

Regards

Geet


You must send an update settings request to the cluster.

Example:

import org.elasticsearch.action.admin.indices.settings.UpdateSettingsRequest;
import org.elasticsearch.common.settings.ImmutableSettings;

ImmutableSettings.Builder settingsBuilder = ImmutableSettings.settingsBuilder();
// -1 disables refresh entirely; a value like "10s" lengthens the interval instead.
settingsBuilder.put("refresh_interval", -1);
UpdateSettingsRequest updateSettingsRequest =
        new UpdateSettingsRequest(getIndex()).settings(settingsBuilder);
client.admin().indices()
        .updateSettings(updateSettingsRequest)
        .actionGet();

Jörg
