Are there any known issues with high shared and buff/cache memory usage with the Elasticsearch Java client?

I am using the Elasticsearch Java client below and see high shared and buff/cache memory usage with the application.

<dependency>
    <groupId>co.elastic.clients</groupId>
    <artifactId>elasticsearch-java</artifactId>
    <version>8.13.3</version>
</dependency>

Is there any known issue, or any configuration that can help me debug why it is consuming more memory?

Any help or direction for debugging will be very helpful.

Hello,
Could we get more details on this? If you could get a heap dump of your application, we could understand what exactly is consuming so much memory.
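If it's hard to attach a profiler where the app runs, you can also trigger a dump from inside the JVM. Here is a minimal sketch using the standard HotSpot diagnostic MBean (the output path is just an example; jmap -dump:live,format=b,file=... <pid> from the command line does the same):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live = true dumps only reachable objects, which also keeps the file smaller
        diag.dumpHeap("/tmp/app-heap.hprof", true);
    }
}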

@ltrotta the heap dump will be several GB in size, so is there any way I can share it here? I don't see an upload option.

@sibasish.palo just to be sure before starting an investigation, could you provide the information that makes you think this is caused specifically by the Elasticsearch client, and not by any other component in the application?

@ltrotta we recently upgraded the Elasticsearch version, and that is the only change made to the application in the recent past, so we are assuming the client upgrade is causing this problem.

Please provide version information: previous version and new version.

Also, please explain what you mean by "high shared and buff/cache memory usage". What did you measure?

In other words, please help us help you by providing context information that allows us to investigate the problem.

The earlier Elasticsearch version was 1.7.3; the current one is 8.13.3.

Old:

sh-4.2$ free -h
              total        used        free      shared  buff/cache   available
Mem:            30G         27G        3.2G        412K        386M        1.8G
Swap:            0B          0B          0B

New:

sh-4.2$ free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        1.9G        1.3G        7.9G         12G        2.6G
Swap:            0B          0B          0B

Please let me know if I can provide any additional information.

Post of the day for sure. I LOL-ed.

Sorry, can you double-check those version numbers please, @sibasish.palo?


@RainTown yes, we did migrate from v1.7.3 to v8.13.3.

Then well done, seriously, that was a great effort: a 10-year version jump.

The two systems you compared have different total memory, and the new one has less, so am I right in assuming these were different systems? And different operating systems, as well as wildly different Elasticsearch versions?

In any case, rather than dig into the details of free -h, can you tell us what problems you are having with the application / Elasticsearch? Ignore the free -h output for now; pretend you had never run that command. What problems do you have?

@RainTown yes, we decreased the memory of the machine. The OS version is the same on both machines, and both host the same application.

We don't see any issues as of now. We have alerts in place that trigger on memory usage, and we are investigating whether this will eventually have any adverse effect.

Thanks.

OK, that the buffer/cache number looks high is a good thing!! You want that to be high; if that's alerting, then IMO the alert is broken. You want the overall system to use as much of the available memory as possible. Don't be fooled into thinking you need a lot of "free" memory; that's just a waste.

The shared memory being high is a little less usual, but it's not a bad thing per se. Part of it might be tmpfs filesystems (df -a | grep tmpfs; these count towards shared); the rest will have been requested by an application.

You can see per-process usage with ps_mem (amongst other tools); I suggest using the -d flag.
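If you want to see where those free numbers come from, they are read from /proc/meminfo; if I remember the procps mapping correctly, "shared" is the Shmem line and "buff/cache" is Buffers + Cached + SReclaimable. Since your app is Java anyway, a quick sketch to print the relevant lines:

import java.nio.file.Files;
import java.nio.file.Paths;

public class MemInfo {
    public static void main(String[] args) throws Exception {
        // free(1) derives "shared" from Shmem, and "buff/cache" from
        // Buffers + Cached + SReclaimable (procps-ng; check your version)
        Files.lines(Paths.get("/proc/meminfo"))
             .filter(l -> l.startsWith("Shmem:") || l.startsWith("Buffers:")
                       || l.startsWith("Cached:") || l.startsWith("SReclaimable:"))
             .forEach(System.out::println);
    }
}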

As of now, I don't see anything I would be concerned about.


Thanks for the additional info. A lot has changed, obviously, since version 1.7.3, starting with the fact that in 1.7.3 there was no real client, and applications were a node in the cluster. Clients have used the HTTP API since version 6 (or maybe 5). You also certainly had to update your application.
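For context, in 8.x the Java client is a thin layer over the low-level REST client, so all communication with the cluster is plain HTTP. A minimal sketch of the 8.x setup (assuming a single node on localhost:9200, no security):

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class ClientSetup {
    public static void main(String[] args) {
        // Low-level REST client: owns the HTTP connection pool
        RestClient restClient = RestClient.builder(
                new HttpHost("localhost", 9200)).build();
        // Transport: pairs the REST client with a JSON mapper
        ElasticsearchTransport transport =
                new RestClientTransport(restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);
    }
}

Unlike the old 1.x node client, the application no longer joins the cluster, so its memory profile is completely different.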

So you definitely have to adjust your alerts to this new environment.


The issue was with the JDK I was using.


Thanks for updating the thread. Did you try using the same JDK for v1.7.3 and v8.13.3?

Elasticsearch is most often installed alongside its bundled JDK/JVM. The working assumption here will (almost always) be that you used that bundled JDK, unless you explicitly mention otherwise. So there was little chance anyone could guess that the JDK version would be a factor.

But it's great that you've reached a scenario where you are now happier.

It's the same JDK for both versions, but with some customizations on it; the customization was reporting the wrong metrics even though there was enough memory available.

So, to close the thread, please choose one of the replies as the resolution, maybe your own.

The only information about memory use you shared was the (limited) output of the Linux free command, which showed nothing abnormal. You never mentioned the customized JDK; had you done so, you might have reached a resolution quicker.
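For anyone finding this thread later: when OS-level numbers look suspicious, a cheap cross-check is to ask the JVM itself what it thinks it is using, via the standard management API (a minimal sketch, nothing Elasticsearch-specific):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmMemCheck {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // Compare these against what your monitoring / customized JDK reports
        System.out.printf("heap: used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        System.out.printf("non-heap: used=%dMB committed=%dMB%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}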

3 weeks later

So the issue was a customized JDK reporting the wrong metrics, which seems to mean user error: there was never a real problem to begin with!

This isn't to pick on you. But it is very very rare that someone here complains that a problem report has provided too much information, while it's commonly the case that the critical details are missed.
