Hello,
Could we get more details on this? If you could capture a heap dump of your application, we could see what exactly is consuming so much memory.
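For example, assuming the application runs on a JVM and the JDK tools are available on the host, something like this should capture one (replace <pid> with your application's process id; the file path is just an example):

# Dump the live (reachable) objects to a binary hprof file
jmap -dump:live,format=b,file=/tmp/app-heap.hprof <pid>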
@sibasish.palo just to be sure before starting an investigation, could you provide the information that makes you think this is caused specifically by the Elasticsearch client, and not by any other component in the application?
@ltrotta Recently we upgraded the Elasticsearch version, and that is the only change made to the application in the recent past, so we are assuming the upgrade of the client is causing this problem.
Then well done, seriously, that was a great effort. A 10-year version jump.
The 2 systems you compared had different total memory, the new one had less, so am I right in assuming they were different systems? And different operating systems, as well as wildly different Elasticsearch versions?
In any case, rather than digging into the details of "free -h", can you tell us what problems you are having with the application / Elasticsearch? Ignore the "free -h" output for now, pretend you had never run that command: what problems do you have?
@RainTown Yes, we have decreased the memory of the machine; the OS version is the same on both machines, and both host the same application.
We don't see any issues as of now. Since we have alerts in place that trigger on memory usage, we are investigating whether this will have any adverse effect eventually.
OK, the buffer/cache number looking high is a good thing!! You want that to be high; if that's what is alerting, then IMO the alert is broken. You want the overall system to use as much of the available memory as possible. Don't be fooled into thinking you need a lot of "free" memory, that's just a waste.
The shared memory being high is a little less usual, but it's not a bad thing per se. Part of it might be tmpfs filesystems (df -a | grep tmpfs, these count towards shared); the rest will have been requested by an application.
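For example, a quick sketch to see how much of the shared figure comes from tmpfs (output columns will vary by distribution):

# List tmpfs mounts and how much of each is in use
df -a -h | grep tmpfs

# Compare their "Used" sizes against the "shared" column here
free -h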
You can see this per process using ps_mem (amongst other tools). I suggest using the -d flag.
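Something like this, for instance (ps_mem usually needs root, and -d lists each PID separately instead of grouping processes by name):

# Per-PID breakdown of private + shared memory
sudo ps_mem -d

# Or restrict it to your application's JVM processes, assuming a Java app
sudo ps_mem -d -p $(pgrep -d, java)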
As of now, I don't see anything I would be concerned about.
Thanks for the additional info. A lot has changed, obviously, since version 1.7.3, starting with the fact that in 1.7.3 there was no real client and applications joined the cluster as a node. Clients have used the HTTP API since version 6 (or maybe 5). You also certainly had to update your application.
So you definitely have to adjust your alerts to this new environment.
Thanks for updating the thread. Did you try using the same JDK for v1.7.3 and v8.13.3?
Elasticsearch is most often installed alongside its bundled JDK/JVM. The working assumption here will (almost always) be that you used that bundled JDK, unless you explicitly mention otherwise. So there was little chance anyone could guess that the JDK version would be a factor.
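For reference, you can check which JVM a node is actually running with something like this (the path assumes a standard package install, and localhost:9200 is just a placeholder for your cluster address):

# Version of the bundled JDK on a deb/rpm install
/usr/share/elasticsearch/jdk/bin/java -version

# Or ask the running nodes which JVM they use
curl -s 'localhost:9200/_nodes/jvm?filter_path=nodes.*.jvm.version,nodes.*.jvm.vm_name'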
But it's great you reached a scenario where you are now happier.
It's the same JDK for both versions, but with some customizations on it. The customization was reporting the wrong metrics even though there was enough memory available.
So, to close the thread, please choose one of the replies as the resolution, maybe your own.
The only information about memory use you shared was (limited) output from the Linux free command, which showed nothing abnormal. You never mentioned the customized JDK or the metrics that were actually triggering your alerts. Had you done so, you might have reached a resolution quicker.
The customization was reporting the wrong metrics, which seems to mean user error: there was never a real problem to begin with!
This isn't to pick on you, but it is very, very rare that someone here complains that a problem report provided too much information, while it is commonly the case that the critical details are missing.