Reasons Behind HPROF Files - What is the Cause?

Hi All

We are on Elasticsearch 6.2.2, which has produced an HPROF file while indexing the content of documents for Micro Focus' Content Manager.

The file appears to point to massive memory leaks.

Is there a later version that handles memory leaks more efficiently?

The server has 16GB, and Elasticsearch has been assigned 8GB.

There is a fatal error in contentmanager.log referencing an error in a thread - see attached.

Any advice would be appreciated :slight_smile:

Please don't post pictures of text; they are difficult to read, impossible to search, and some people may not even be able to see them :slight_smile:

An HPROF file is a heap dump, which means the JVM crashed for some reason. If you can post the actual text logs it'd help us. Please make sure you use the </> button to format the logs as well.

Hi Mark

Apologies - here is the error in text:

[2020-03-05T22:46:06,470][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425904] overhead, spent [1.1s] collecting in the last [1.8s]
[2020-03-05T22:46:09,830][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425906] overhead, spent [1.6s] collecting in the last [2.3s]
[2020-03-05T22:46:10,861][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425907] overhead, spent [968ms] collecting in the last [1s]
[2020-03-05T22:46:11,971][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425908] overhead, spent [1s] collecting in the last [1.1s]
[2020-03-05T22:46:13,580][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425909] overhead, spent [1.5s] collecting in the last [1.6s]
[2020-03-05T22:46:14,799][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425910] overhead, spent [1.1s] collecting in the last [1.2s]
[2020-03-05T22:46:16,127][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425911] overhead, spent [1.3s] collecting in the last [1.3s]
[2020-03-05T22:46:17,737][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425912] overhead, spent [1.5s] collecting in the last [1.6s]
[2020-03-05T22:46:20,315][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425913] overhead, spent [2.5s] collecting in the last [2.5s]
[2020-03-05T22:46:21,487][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425914] overhead, spent [1.1s] collecting in the last [1.1s]
[2020-03-05T22:46:23,518][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425915] overhead, spent [2s] collecting in the last [2s]
[2020-03-05T22:46:42,239][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [CMCI1] fatal error in thread [elasticsearch[CMCI1][bulk][T#3]], exiting
java.lang.OutOfMemoryError: Java heap space
	at java.lang.AbstractStringBuilder.<init>(Unknown Source) ~[?:1.8.0_181]
	at java.lang.StringBuilder.<init>(Unknown Source) ~[?:1.8.0_181]
	at com.fasterxml.jackson.core.util.TextBuffer.contentsAsString(TextBuffer.java:346) ~[jackson-core-2.8.10.jar:2.8.10]
	at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._finishAndReturnString(UTF8StreamJsonParser.java:2470) ~[jackson-core-2.8.10.jar:2.8.10]
	at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.getText(UTF8StreamJsonParser.java:315) ~[jackson-core-2.8.10.jar:2.8.10]
	at org.elasticsearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:83) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.common.xcontent.support.AbstractXContentParser.textOrNull(AbstractXContentParser.java:237) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.TextFieldMapper.parseCreateField(TextFieldMapper.java:345) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:297) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:485) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:607) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:407) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:384) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:482) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:500) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:394) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:384) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:482) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:500) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseNonDynamicArray(DocumentParser.java:584) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseArray(DocumentParser.java:542) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:396) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:384) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:67) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:261) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:714) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:692) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:673) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:548) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeUpdateRequestOnce(TransportShardBulkAction.java:359) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeUpdateRequest(TransportShardBulkAction.java:406) ~[elasticsearch-6.2.2.jar:6.2.2]
[2020-03-05T22:46:44,411][WARN ][o.e.m.j.JvmGcMonitorService] [CMCI1] [gc][425917] overhead, spent [2.1s] collecting in the last [2.1s]
[2020-03-05T22:47:55,387][INFO ][o.e.n.Node               ] [CMCI1] initializing ...
[2020-03-05T22:47:55,512][INFO ][o.e.e.NodeEnvironment    ] [CMCI1] using [1] data paths, mounts [[CONTENT MANAGER (D:)]], net usable_space [464.5gb], net total_space [499.8gb], types [NTFS]
[2020-03-05T22:47:55,512][INFO ][o.e.e.NodeEnvironment    ] [CMCI1] heap size [7.9gb], compressed ordinary object pointers [true]
[2020-03-05T22:47:55,544][INFO ][o.e.n.Node               ] [CMCI1] node name [CMCI1], node ID [8rqDFoTZR2Om0vgvStKalA]
[2020-03-05T22:47:55,544][INFO ][o.e.n.Node               ] [CMCI1] version[6.2.2], pid[7092], build[10b1edd/2018-02-16T19:01:30.685723Z], OS[Windows Server 2016/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_181/25.181-b13]
[2020-03-05T22:47:55,544][INFO ][o.e.n.Node               ] [CMCI1] JVM arguments [-Xms8g, -Xmx8g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=C:\Users\am107\AppData\Local\Temp\2\elasticsearch, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Delasticsearch, -Des.path.home=D:\elasticsearch-6.2.2, -Des.path.conf=D:\elasticsearch-6.2.2\config, exit, -Xms8192m, -Xmx8192m, -Xss1024k]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [aggs-matrix-stats]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [analysis-common]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [ingest-common]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [lang-expression]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [lang-mustache]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [lang-painless]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [mapper-extras]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [parent-join]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [percolator]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [rank-eval]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [reindex]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [repository-url]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [transport-netty4]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] loaded module [tribe]
[2020-03-05T22:47:57,672][INFO ][o.e.p.PluginsService     ] [CMCI1] no plugins loaded

After opening the HPROF file in YourKit Java Profiler, we inspected the inconsistencies and found what appear to be massive memory leaks - is there a later version of Elasticsearch that handles memory leaks more efficiently?

Or is the only way forward to increase the RAM? We currently have 8GB allocated to Elasticsearch - is this not enough?

Cheers
Sarah

Newer versions of Elasticsearch are much more efficient with memory use and management, so it's always worth upgrading.

However, it looks like you are sending bulk requests to Elasticsearch? If that's the case, can you provide more information, like how large they are (count and size), how often they are sent, what the threadpool capacity looks like, what sort of client you are using, etc.

Hi Mark

Great to know!

That is correct - in this instance we are.

The number of records initially sent for content indexing was ~700,000.

We then received errors for part of that batch, so we had to index the remaining ~200,000 records, which is when this fatal error was produced.

This was a one-off situation, as we were having issues with Elasticsearch.

The threadpool capacity has been given 1024MB - does this need to be increased?

The client we are using in conjunction with Elasticsearch is Micro Focus' Content Manager.

Please let me know if you require any additional information.

Cheers
Sarah

Ok but how large are those records? What does the bulk threadpool capacity look like?

How did you do that?

Hi Mark

The records are 200MB maximum.

My apologies - I had confused that with the Thread Stack Size setting.

How do I check the threadpool capacity?

Cheers
Sarah

Do you mean your individual documents are that size? That's pretty massive and it's no surprise you are getting threadpool overload. Can you tweak how the software you are using talks to Elasticsearch, specifically how many documents it sends at once?
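For what it's worth, if you had direct control over the indexing code, the idea looks something like this - a minimal sketch using the official Python client, where the index name, field name and record list are all made up for illustration (Content Manager builds its own requests, so this is only to show the effect of sending smaller batches):

from elasticsearch import Elasticsearch, helpers  # assumes the elasticsearch-py 6.x client is installed

es = Elasticsearch(["http://localhost:9200"])  # adjust to your node

# Stand-in for the extracted document text the client would send
records = ["example document body %d" % i for i in range(1000)]

def actions(docs):
    # One bulk action per document; "cm_dataset" and "content" are made-up names
    for text in docs:
        yield {"_index": "cm_dataset", "_type": "doc", "_source": {"content": text}}

# chunk_size caps the number of documents per bulk request and max_chunk_bytes
# caps the request body size, so one huge batch becomes many smaller requests
# instead of a single massive one that has to sit in heap all at once
helpers.bulk(es, actions(records), chunk_size=100, max_chunk_bytes=50 * 1024 * 1024)

Whatever setting in your client controls how much data goes into a single request is the equivalent knob.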

Hi Mark

Yes - the maximum size any document can be for indexing is 200MB.

How can I check the threadpool capacity?

Here are the settings that Micro Focus' Content Manager has for Elasticsearch (screenshot attached), where:
• Elasticsearch Index Name - Name of the Elasticsearch index to use for this Content Manager dataset. The characters in the name must be alphanumeric characters or underscores.
• Elasticsearch Transaction Limit (MB) - Limits how much indexing data can be sent to Elasticsearch in a single request.
• Elasticsearch Request Timeout (s) - Limits how long CM will wait for Elasticsearch to process a single request.
• Provide extensive logging for index management - select for detailed log files.

At this point in time a record is only indexed when it is registered or manually requested.

The only way we can reproduce the HPROF file is when thousands of records are sent for indexing all at once - in this case it was ~700,000 records, of which ~200,000 failed. We then had to index those remaining ~200,000 records.

From the error posted on 19/03/2020, is there any indication of memory leaks?

Is 8GB of RAM enough for Elasticsearch? Do you have any documentation on newer releases of Elasticsearch that handle memory leaks more efficiently?

Cheers
Sarah

These are not memory leaks; you're overwhelming Elasticsearch.

Giving it more heap would be beneficial if you're going to be doing these huge bulk reindexes.
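For reference, the heap is set in config/jvm.options - your startup log shows -Xms8g and -Xmx8g. If you do add RAM to the server, raise both values together, keeping the heap at no more than about half of physical memory so the operating system still has room for the filesystem cache, and under ~32GB so compressed object pointers stay enabled. For example, on a 24GB box that might look like this (only a sketch - size it to your own hardware):

-Xms12g
-Xmx12g

Your log also shows -Xms8192m/-Xmx8192m appended after the jvm.options values, so if Elasticsearch runs as a Windows service the heap may additionally be set in the service manager - make sure the two agree.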

Hi Mark

No worries! Thank you!!

Regarding the threadpool capacity, would that be the screenshot I included in my previous post?

If not, how do I check the threadpool capacity? Via the Elasticsearch manager?

Cheers
Sarah

I am not sure what the software you are using shows.

But you can use the _cat APIs directly - https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-thread-pool.html
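For example, on your 6.2 node the relevant pool is named bulk (as in the stack trace above), and its size, queue and rejection counts can be read like this - a minimal sketch assuming a node on localhost:9200 and the Python requests library, so adjust the host to your install:

import requests  # assumption: the requests library is available

# GET _cat/thread_pool/bulk with a header row (v) and a few useful columns
resp = requests.get(
    "http://localhost:9200/_cat/thread_pool/bulk",
    params={"v": "true", "h": "node_name,name,size,queue_size,active,queue,rejected"},
)
print(resp.text)

A growing rejected count on the bulk pool is the usual sign that bulk requests are arriving faster than the node can index and queue them.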

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.