Node uses too much memory, I think

Hello guys.

I run ES on 3 nodes with 20 shards and 1 replica.
Each node runs on a server with an 8-core CPU and 16GB of memory, and I set
the heap size to 8GB in the JVM options.
I also set the -Dbootstrap.mlockall=true and -Des.index.cache.field.type=soft options.
There are 350 million docs in ES, 650GB in total.
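In case it helps, each node is launched roughly like this (just a sketch,
assuming the heap is set through the ES_HEAP_SIZE environment variable of
the stock startup script; exact paths may differ):

    # rough launch command for one node; ES_HEAP_SIZE makes the stock
    # startup script pass matching -Xms/-Xmx flags to the JVM
    ES_HEAP_SIZE=8g bin/elasticsearch \
        -Dbootstrap.mlockall=true \
        -Des.index.cache.field.type=soft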
When I start all the nodes, and after the cluster's health becomes green,
each node uses almost 6.5-7.9GB even though there are no actual search or
insert requests.
Is this normal? Should I use servers with more memory for this amount of data?
Please give me some advice.

Thanks.


The heap size dictates both the minimum (-Xms) and maximum (-Xmx) size the
Java process will take on startup. Your JVM process will allocate all 8GB
to itself if possible. What are the sizes of your caches (field/filter)?
They should be close to 0.
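You can also confirm which heap flags the running process actually got, for
example with the standard JDK tools (a sketch; the grep pattern is only
illustrative):

    # list running JVMs with their full arguments and find the ES node
    # (jps ships with the JDK; output format varies slightly by version)
    jps -lvm | grep -i elasticsearch
    # both -Xms8g and -Xmx8g should appear if the 8GB heap was applied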

Cheers,

Ivan


Thanks for the reply.

When I said each node used 6.5-7.9GB, that was from bigdesk's JVM chart.
The actual used heap size is now 7.1GB, and the committed size is 7.9GB,
which is the -Xmx size.

What I want to know is whether ES loads all the index information into
memory, so that the used heap is this big even when there are few client
requests.
If that's the case, I think I should use servers with more memory for this
amount of data. Do you have any idea about this?

Finally, I didn't set anything for the cache sizes, which means ES uses the
default settings - 20% for the filter cache. As for your question, how can
I check the actual cache sizes (field/filter)? Could you show me how?

Jay


Hi,

The fact that the JVM uses a lot of memory under very light load isn't
worrying: JVMs like memory and tend to keep it, instead of giving it back
to the system, so that they are prepared for sudden increases in load. If
requests succeed in reasonable time, there is probably no issue. In case of
issues, you should start looking at garbage collection activity: very
frequent collections, and memory usage remaining high after major
collections, mean that the garbage collector can't keep up with the rate of
memory allocation given the memory it has, and the JVM should be given more
memory.
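
For example, you can watch collection counts and old-generation occupancy
with the standard JDK tools (a sketch; <pid> stands for the Elasticsearch
process id):

    # print GC utilization and collection counts every 5 seconds
    # (jstat ships with the JDK)
    jstat -gcutil <pid> 5000
    # the bad sign described above: the O column (old generation
    # occupancy, in %) staying near 100 while FGC (full GC count)
    # keeps increasing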

The cluster nodes stats API can give you statistics about the memory usage
of field data; see the nodes stats page in the Elasticsearch reference
documentation.
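
For example (a sketch against a local node; the exact JSON field names vary
across Elasticsearch versions):

    # per-node memory statistics, including field data / field cache and
    # filter cache sizes, from the nodes stats API
    curl -s 'http://localhost:9200/_nodes/stats?pretty'
    # look under each node's indices section for the cache size entries,
    # and under jvm for heap used vs. committed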


--
Adrien Grand
