BulkAll takes a long time to complete

Elasticsearch version : 5.6.5

C# Nest package: 5.5.0

JVM version ( java -version ): 9

OS version : Windows 10

Description of the problem including expected versus actual behavior :
Sometimes BulkAll takes a long time to complete.
I'm not getting any errors; it just waits for something, sometimes for 85 seconds or longer.
I tested on my local machine with a list containing a single item, so it's not a matter of too much data or too many requests.
The problem reproduces when BulkAll is called 2-3 times per second.

Steps to reproduce :

Client.BulkAll(itemArray, i => i
        .Index<T>()
        .MaxDegreeOfParallelism(Environment.ProcessorCount)) // 12
    .Wait(TimeSpan.FromSeconds(60), r => { });
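For what it's worth, a sketch of the same call that subscribes to the observable instead of blocking with `Wait`, so that errors and per-page progress surface explicitly (this assumes NEST 5.x's `BulkAllObserver`; `Client`, `itemArray` and `T` are taken from the snippet above):

```csharp
// Sketch: observe BulkAll with explicit callbacks rather than Wait,
// so a stalled or failed page shows up instead of silently blocking.
var waitHandle = new ManualResetEvent(false);
Exception failure = null;

var bulkAll = Client.BulkAll(itemArray, b => b
    .Index<T>()
    .MaxDegreeOfParallelism(Environment.ProcessorCount));

bulkAll.Subscribe(new BulkAllObserver(
    onNext: response => Console.WriteLine($"Indexed page {response.Page}"),
    onError: e => { failure = e; waitHandle.Set(); },
    onCompleted: () => waitHandle.Set()));

// Give up after 60 seconds instead of blocking indefinitely.
waitHandle.WaitOne(TimeSpan.FromSeconds(60));
if (failure != null) throw failure;
```

This doesn't change what BulkAll does on the wire; it only makes it easier to see which page (if any) the operation is stuck on.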

Is there a known problem that was fixed in the newer version of NEST/Elasticsearch?

Hi @CSharpBender ,

there are a few options to narrow this down:

  1. Check the log files for errors.
  2. Enable index slow logs: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/index-modules-slowlog.html
  3. This could be garbage-collection related; check that the GC logs do not show back-to-back full GCs.
  4. A node crashing or being killed by the OS could lead to prolonged indexing times (though I would not expect 85 s for a single index request).
  5. Network issues between the client and Elasticsearch.
  6. I notice you use Java 9; it could be worth trying Java 8 instead, since Java 9 is unsupported.
  7. Look at hot threads or similar while the issue is happening.
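For reference, point 2 can be done with a dynamic index settings update on 5.6; a sketch below (the index name and thresholds are placeholders, adjust them to your setup):

```shell
# Hypothetical index name and example thresholds -- tune for your workload.
curl -XPUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' -d '
{
  "index.indexing.slowlog.threshold.index.warn": "10s",
  "index.indexing.slowlog.threshold.index.info": "5s",
  "index.indexing.slowlog.level": "info"
}'
```
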

Before we can guess whether upgrading will help, we need to know what caused it. It might be easier to spin up a fresh Elasticsearch instance to see if the issue reproduces there.

Hi @HenningAndersen,
Thanks for responding, I will check the logs; maybe I'll find something useful.
Regarding the other points: there is no network involved; it's my local Elasticsearch instance and I'm the only one using it. Java is the only hot thread.


Hi @HenningAndersen,

I enabled slow log but the files are empty. The only issue I could find in the logs is this:

[2019-07-17T00:01:44,242][INFO ][o.e.i.IndexingMemoryController] [HostName] now throttling indexing for shard [[IndexName][2]]: segment writing can't keep up

After googling I found that it might be related to disk writes (although I have an SSD), so I've updated the config to "index.merge.scheduler.max_thread_count": 1.
Still, this hasn't fixed my problem; from time to time some threads freeze, and they don't time out even though I've set the timeout to 60 seconds.
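For completeness, this is how I applied the setting (the index name is a placeholder; in 5.x this is a dynamic index-level setting, so no restart is needed):

```shell
# Hypothetical index name; limits the merge scheduler to a single thread,
# which is the usual recommendation for spinning disks.
curl -XPUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{ "index.merge.scheduler.max_thread_count": 1 }'
```
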

I suspect a NEST 5.5 issue but I might be wrong.

LE: After upgrading to NEST v5.6.5, I couldn't reproduce the issue.
Can anyone tell me whether the NEST version should match the Elasticsearch one? I was using NEST v5.5 with Elasticsearch v5.6.5.

Thank you

Elasticsearch 5.x does not support Java 9, as per the support matrix.

Hi @Christian_Dahlqvist,
I've uninstalled Java 9 and installed OpenJDK 1.8.0_212, and the issue still reproduces with NEST 5.5.
It doesn't reproduce with NEST 5.6.5, so should the NEST version match the Elasticsearch one?