No node available while doing Elasticsearch data migration

(spancer ray) #1

Hi Guys,

I'm using the Java API to migrate a large index from one ES cluster to another, and my code is roughly as below:

Migration code:

    int pageSize = 10000;
    String[] indices = { "myoldindex" };
    Client sourceClient = ClientUtil.getSourceTransportClient();
    // open the scan; the first response carries only a scroll id, no hits
    SearchResponse searchResponse = sourceClient.prepareSearch(indices)
            .setSearchType(SearchType.SCAN)
            .setScroll(new TimeValue(120000))
            .setSize(pageSize)
            .execute().actionGet();
    BulkRequestBuilder bulkRequestBuilder = ClientUtil.getTargetTransportClient().prepareBulk();
    while (true) {
        searchResponse = sourceClient.prepareSearchScroll(searchResponse.getScrollId())
                .setScroll(new TimeValue(120000))
                .execute().actionGet();
        for (SearchHit hit : searchResponse.getHits()) {
            bulkRequestBuilder.add(ClientUtil.getTargetTransportClient()
                    .prepareIndex(hit.getIndex(), hit.getType(), hit.getId())
                    .setSource(hit.getSourceRef()));
        }
        if (bulkRequestBuilder.numberOfActions() > 0) {
            bulkRequestBuilder.execute().actionGet();
            bulkRequestBuilder = ClientUtil.getTargetTransportClient().prepareBulk();
        }
        if (searchResponse.getHits().hits().length == 0) {
            break;
        }
    }
Client code:

    static Settings defaultSettings = ImmutableSettings.settingsBuilder()
            .put("client.transport.sniff", true).build();

    private static TransportClient targetClient;
    private static TransportClient sourceClient;

    static {
        try {
            Class<?> clazz = Class.forName(TransportClient.class.getName());
            Constructor<?> constructor = clazz.getDeclaredConstructor(Settings.class);
            Settings finalSettings = ImmutableSettings.settingsBuilder()
                    .put(defaultSettings)
                    .build();
            targetClient = (TransportClient) constructor.newInstance(finalSettings);
            sourceClient = (TransportClient) constructor.newInstance(finalSettings);
            // host names redacted in the original post
            sourceClient.addTransportAddress(new InetSocketTransportAddress("", 9300));
            targetClient.addTransportAddress(new InetSocketTransportAddress("", 9300));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // get instance
    public static synchronized Client getSourceTransportClient() {
        return sourceClient;
    }

    // get instance
    public static synchronized Client getTargetTransportClient() {
        return targetClient;
    }
At first everything goes fine, and the speed is about 6 million records per hour. However, after about 2 hours I get the No node available exception. I'm pretty sure it's not a connectivity problem that causes it.
I'm wondering whether ES (or the ES client) has some timeout configuration params?

Can anyone help? Will be great appreciated.


(Jörg Prante) #2

You have overwhelmed the cluster, so it cannot respond within 5 seconds,
and you did not streamline your bulk indexing. On the cluster side, consider
looking into segment merging and how big your segments have grown.

In the indexing code, you do not take care of BulkResponses when just
executing bulkRequestBuilder.execute().actionGet(). This must sooner or
later go crazy. Check if you can add a listener and wait for the cluster to
respond properly before continuing.
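One way to attach such a listener in the 1.x Java API is a BulkProcessor. A hedged sketch, assuming `client` is the target-cluster client obtained from ClientUtil; the thresholds shown are illustrative, not recommendations:

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;

// Wrap bulk indexing so every response (and failure) is actually inspected,
// instead of fire-and-forget execute().actionGet() calls.
BulkProcessor bulkProcessor = BulkProcessor.builder(client,
        new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {
                // could log request.numberOfActions() here
            }
            @Override
            public void afterBulk(long executionId, BulkRequest request,
                                  BulkResponse response) {
                if (response.hasFailures()) {
                    System.err.println(response.buildFailureMessage());
                }
            }
            @Override
            public void afterBulk(long executionId, BulkRequest request,
                                  Throwable failure) {
                failure.printStackTrace();
            }
        })
        .setBulkActions(1000)      // flush every 1000 docs (illustrative)
        .setConcurrentRequests(1)  // limit in-flight bulks
        .build();
```

Each document from the scroll is then fed in via `bulkProcessor.add(...)`, and the listener sees every cluster response before more load piles up.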

Also, in the scan/scroll request, note that setSize() is per shard. Check
whether pageSize * number of shards is the right size for a bulk request;
it may get too large.
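The per-shard point is just arithmetic: each scroll round returns roughly size × shards documents. A minimal sketch (the `ScrollMath` class and its method are hypothetical, purely illustrative):

```java
// setSize() on a scan request applies per shard, so one scroll round
// returns about sizePerShard * shards documents.
public class ScrollMath {
    static long docsPerScroll(int sizePerShard, int shards) {
        return (long) sizePerShard * shards;
    }

    public static void main(String[] args) {
        // setSize(10000) against a 5-shard index, as in the code above
        System.out.println(docsPerScroll(10000, 5)); // 50000 docs per round
    }
}
```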


You received this message because you are subscribed to the Google Groups "elasticsearch" group.

(spancer ray) #3

Hi Jörg,

Thanks for the reply. I switched to BulkProcessor, which improved the indexing performance.
However, while pulling data from the source cluster, I got the same exception, this time
caused by searchScroll. Below is the exception message:

Exception in thread "main" org.elasticsearch.client.transport.NoNodeAvailableException: No node available
    at org.elasticsearch.client.transport.TransportClientNodesService$RetryListener.onFailure(...)
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(...)
    at org.elasticsearch.client.transport.TransportClient.searchScroll(...)
    at org.elasticsearch.action.ActionRequestBuilder.execute(...)
    at org.elasticsearch.action.ActionRequestBuilder.execute(...)

My fetch size was set to 2000, with 5 shards in my source cluster, which means each scroll request fetches 10,000 documents. I don't know whether this happened because the fetching overwhelmed the cluster; if so, how can I avoid it? Should I reduce my fetch size? Or should I increase it so as to keep up with the insertion rate? (I set my BulkProcessor concurrentRequests to 5, with the other params at their default values.)

I've got 70 million test records in my source cluster, and the exception happens after about 25 million have been migrated. Do you have any idea on this?


(Jörg Prante) #4

Do you use monitoring tools for watching the cluster nodes?

Then you can find out how resource usage develops until you reach 25
million. I predict you will notice the cluster entering a big segment merge
phase, plus the search load from your scan/scroll requests. Try to streamline
segment merging by either throttling it or reducing the maximum segment size
(default is 5 GB).

You should try using a smaller value for setSize(), maybe 200 instead of
2000, to let the scan/scroll generate more manageable bulk request sizes.

The lifetime for your scroll requests is very high, 2 minutes. During this
time the server must keep the found docs in memory, and this can easily pile
up. I would reduce it to 30 seconds or so. This will save resources on the
cluster nodes, but it must be balanced with the setSize() param to avoid
search timeouts.
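In the 1.x API the keep-alive is passed via setScroll() on both the initial scan request and every scroll continuation. A sketch with the 30-second value suggested above (index name and size are illustrative, taken from this thread):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.common.unit.TimeValue;

// shorter keep-alive: the server frees the scroll context sooner
SearchResponse resp = sourceClient.prepareSearch("myoldindex")
        .setSearchType(SearchType.SCAN)
        .setScroll(TimeValue.timeValueSeconds(30))  // was 2 minutes
        .setSize(200)                               // per shard
        .execute().actionGet();

// the same keep-alive must be repeated on each continuation,
// or the context expires mid-migration
resp = sourceClient.prepareSearchScroll(resp.getScrollId())
        .setScroll(TimeValue.timeValueSeconds(30))
        .execute().actionGet();
```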



(system) #5