X-Pack ML job gives the following error

After 2 weeks and probably about 15,000,000 records, an ML job is giving me the following error:

[2017-03-24T11:38:07,458][WARN ][o.e.m.j.JvmGcMonitorService] [es154.162.client2] [gc][816] overhead, spent [925ms] collecting in the last [1s]
[2017-03-24T11:38:12,327][WARN ][o.e.m.j.JvmGcMonitorService] [es154.162.client2] [gc][820] overhead, spent [956ms] collecting in the last [1.8s]
[2017-03-24T11:38:14,482][WARN ][o.e.m.j.JvmGcMonitorService] [es154.162.client2] [gc][822] overhead, spent [948ms] collecting in the last [1.1s]
[2017-03-24T11:38:16,924][WARN ][o.e.m.j.JvmGcMonitorService] [es154.162.client2] [gc][823] overhead, spent [1.6s] collecting in the last [2.4s]

and then Elasticsearch shuts down. I'm still figuring out X-Pack, so there's probably something I'm doing wrong. It's a really beefy server.

There are a couple of other questions like this, but no answers.

Everything I'm reading says this is a memory error, but memory looks OK:
root@elkstack:/var/log/elasticsearch# free -mh
              total        used        free      shared  buff/cache   available
Mem:           251G        4.8G        133G        2.6G        113G        242G
Swap:          5.3G          0B        5.3G

Hi,

You can configure the amount of memory the JVM uses by editing the jvm.options file in the Elasticsearch config directory. Take a look at the documentation for more details.

The default out-of-the-box setting is 1 GB. You have a big machine, so a much larger value is fine, but don't give the JVM more than 26 GB of heap, because beyond that it can no longer use compressed oops. -Xms26g and -Xmx26g are the maximum values you should use.
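A minimal sketch of the change, assuming the default config location (e.g. /etc/elasticsearch/jvm.options on package installs; the exact path depends on how Elasticsearch was installed):

```
# jvm.options — set the minimum and maximum heap to the same value
# so the JVM doesn't resize the heap at runtime.
# Stay at or below ~26g so the JVM keeps using compressed oops.
-Xms26g
-Xmx26g
```

After restarting Elasticsearch, you can confirm the new heap size with GET _cat/nodes?h=name,heap.max.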

Solved, thanks David!!!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.