Direct buffer memory problems with RestHighLevelClient

Hello

We are currently running Elasticsearch 5.6.2 and connect to the nodes with the Java TransportClient. Now that the RestHighLevelClient has reached a state where a good amount of our use cases are covered, we tried switching from the TransportClient to the REST client. Unfortunately we ran into an issue.
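For context, this is roughly how we construct the client. It is only a minimal sketch: the host and port are placeholders, and it assumes the 5.6-era constructor that wraps a low-level RestClient.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ClientSetup {
    public static void main(String[] args) {
        // "es-node" and 9200 are placeholders for our actual node addresses.
        RestClient lowLevelClient = RestClient.builder(
                new HttpHost("es-node", 9200, "http")).build();
        // The 5.6-era high-level client wraps the low-level, NIO-based RestClient.
        RestHighLevelClient client = new RestHighLevelClient(lowLevelClient);
    }
}
```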

The exception is as follows: Exception in thread "I/O dispatcher xy" java.lang.OutOfMemoryError: Direct buffer memory

After some investigation, it seems to be the java.nio components, which use off-heap memory. The limit can be set via -XX:MaxDirectMemorySize=&lt;size&gt;. We use a Cloud Foundry java-buildpack for deployment, which had set this limit to 10M. Since our application makes some quite data-heavy requests, we tried a few different settings. Increasing the limit to 50M delayed the exception, but it still occurred later on.
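To illustrate what hitting that limit looks like (not our actual code, just a toy reproduction): allocating direct buffers past the configured limit, e.g. when run with -XX:MaxDirectMemorySize=10m, ends in exactly this error.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectMemoryDemo {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<>();
        while (true) {
            // Each direct allocation counts against -XX:MaxDirectMemorySize, not the heap,
            // so this loop eventually dies with "OutOfMemoryError: Direct buffer memory".
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MB per buffer
        }
    }
}
```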

This exception never happened with the TransportClient, which seems plausible if it never used java.nio. The problem we're facing right now is that this came up on our test environment, so we can't really know what settings would be needed for production, where many more requests are processed.

Personally, I would be happiest if we could choose to use classic blocking Java I/O, if that would actually solve the problem. The Apache HttpCore documentation states that this may be more appropriate for data-intensive scenarios.

Maybe for reference: the Jest library seems to use two different clients, a CloseableHttpAsyncClient (NIO) for all async calls and a CloseableHttpClient for all blocking calls.
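A rough sketch of what that looks like on the Apache HttpClient side, just to illustrate the two flavours (this is not how Jest actually wires them up):

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

public class TwoClientFlavours {
    public static void main(String[] args) throws Exception {
        // Classic blocking I/O client.
        CloseableHttpClient blockingClient = HttpClients.createDefault();

        // NIO-based async client: the java.nio channels underneath it
        // are what consume direct buffer memory.
        CloseableHttpAsyncClient asyncClient = HttpAsyncClients.createDefault();
        asyncClient.start();

        asyncClient.close();
        blockingClient.close();
    }
}
```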

Does anyone have any experience with this problem? If so, what is the usual solution? And is there any known issue or open task for the RestHighLevelClient regarding this topic?

Thank you,
Kevin


Hi @Slomo,

As you have mentioned, direct buffer memory is used for NIO. This memory is not part of the heap but rather part of native memory (i.e. you would see it in RSS but not in heap usage diagrams). By default, the JVM chooses the direct buffer memory limit ergonomically, and IMHO 50MB is way too little; I'd rather try something in the GB range.

Is there any reason why you are setting this value explicitly? If you want to let the JVM choose this value ergonomically again, just set -XX:MaxDirectMemorySize=0. This is probably also the simplest and safest option. You might now wonder what value the JVM chooses in that case, but unfortunately determining it is a bit involved. This article about MaxDirectMemorySize should help you determine that value.
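If it helps, one way to watch what is actually in use is the standard JMX buffer pool beans; the "direct" pool is the one that counts against MaxDirectMemorySize. A small sketch:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectBufferStats {
    public static void main(String[] args) {
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            // Lists the "direct" and "mapped" pools; for the direct pool,
            // "used" and "capacity" count against -XX:MaxDirectMemorySize.
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
```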

Daniel


Hi Daniel

Thanks for your response.

The setting was just provided this way by our buildpack. Since we never (consciously) used direct buffer memory, we never touched any of the memory settings, also because this is the most stable buildpack we have used in terms of heap space problems.

I was thinking that 50MB would be enough for our test use case if the memory were freed as soon as each I/O routine completes. I read a bit further, and it seems the memory is only freed when the referencing object on the heap is garbage-collected. In that case, I can imagine that we exceed the limit, so you might be right about it being too low.

For now, we will try to run it with the JVM default and see how it works.

Kevin

Hi Kevin,

Sounds good. Yes, direct buffers are not immediately cleaned up, since their cleanup piggybacks on the GC of the on-heap reference.

Another flag you should watch out for is -XX:+DisableExplicitGC. NIO has a hack to force a full GC when the process runs out of direct buffer memory. With -XX:+DisableExplicitGC this hack is effectively prevented, and you will see the OutOfMemoryErrors that you mentioned in your original post. For that reason, I'd recommend either leaving the JVM's default (explicit GC is allowed by default, so the hack keeps working) or explicitly setting -XX:-DisableExplicitGC (note the - instead of the + before DisableExplicitGC; setting binary JVM flags is very subtle...).
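If you want to double-check what your JVM is actually running with, here is a small sketch using the HotSpot diagnostic MXBean (HotSpot-specific, so treat it as illustrative):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class CheckVmFlags {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // "false" means explicit GC is allowed, so NIO's fallback System.gc() still works.
        System.out.println("DisableExplicitGC = "
                + hotspot.getVMOption("DisableExplicitGC").getValue());
        // Reports the flag value only (0 means "let the JVM decide"),
        // not the effective ergonomic limit.
        System.out.println("MaxDirectMemorySize = "
                + hotspot.getVMOption("MaxDirectMemorySize").getValue());
    }
}
```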

Daniel
