Memory usage increased to more than 90% while running the Elasticsearch service on a Windows machine

Hi Team,

Memory usage increases to more than 90% while running the Elasticsearch service on a Windows machine.
I am using Elasticsearch version 7.0.1 and my system RAM is 4 GB.

Can you please help me with how to handle this, and what configuration I need to use in the Elasticsearch config file?
In the Task Manager, the Java JVM is taking up a lot of memory.
I also tried to set these settings in the jvm.options config file:

You should always set the min and max JVM heap size to the same value. For example, to set the heap to 2 GB, set:

```
-Xms2g
-Xmx2g
```
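For reference, the Elasticsearch docs also advise giving the heap no more than about half of the physical RAM, since Elasticsearch needs off-heap memory and the OS filesystem cache as well. A minimal jvm.options sketch for a 4 GB machine that leaves more room for everything else might look like this (the 1g value is only an illustration, not a sizing recommendation for any particular workload):

```
# jvm.options (in the Elasticsearch config directory)
# Keep -Xms and -Xmx identical so the heap size is fixed at startup.
# A common guideline is to stay at or below ~50% of physical RAM.
-Xms1g
-Xmx1g
```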

What is the output of:

GET /_cat/health?v
GET /_cat/indices?v
GET /_cat/shards?v
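If it helps, one more request that shows the JVM heap next to the machine's RAM (these are standard `_cat/nodes` columns) is:

```
GET /_cat/nodes?v&h=name,heap.current,heap.max,ram.current,ram.max,ram.percent
```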

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is the icon to use if you are not using markdown format:

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

Hi David,
Sorry for the wrong format.

I am facing a network issue, so I couldn't connect to the Elasticsearch server right now.

Once it starts working again I will share all the details.
Meanwhile, can you share some general config settings we need to take care of in the Elasticsearch config to improve performance?

Defaults are good in general.
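To make that concrete: on a small single-node dev install, elasticsearch.yml can usually be left entirely at its defaults; the handful of values people typically touch are descriptive rather than performance-related. A sketch with purely illustrative values:

```
# elasticsearch.yml -- illustrative values only, the defaults are fine for a dev node
cluster.name: dev-logging     # optional: a descriptive cluster name
node.name: dev-app-node       # optional: a descriptive node name
network.host: 127.0.0.1       # bind to localhost only for a local dev setup
http.port: 9200               # the default HTTP port
```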

Hi @dadoonet,
I have just uninstalled and then re-installed Elasticsearch version 7.0.1 on my Windows machine.
When I run the service, the physical memory reserved by the Java(TM) Platform SE binary (jre-8u201) shown in the Task Manager is 93%. The Filebeat service is stopped at this moment.

I am not using any custom indices while verifying this issue.

How can I control this ?

I appreciate your help.
Result of GET /_cat/health?v:

```
epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1558352777 11:46:17  elasticsearch green           1         1      2   2    0    0        0             0                  -                100.0%
```

GET /_cat/indices?v:

```
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_task_manager Ld6OEiyxTmqlqlyAUKPCkA   1   0          2            4       45kb           45kb
green  open   .kibana_1            AALbkTVAQLa7XIAI87b5Yg   1   0          3            0     13.9kb         13.9kb
```

Output of GET /_cat/shards?v:

```
index                shard prirep state   docs  store ip        node
.kibana_1            0     p      STARTED    3 14.2kb 127.0.0.1 DEV-APP
.kibana_task_manager 0     p      STARTED    2 45.6kb 127.0.0.1 DEV-APP
```

Let me know if you require more information.

Can you share how much memory the JVM is using?
Also, I'd recommend using a more recent JVM. I believe that Elasticsearch now ships a default one with the distribution, so I'd use that one instead.
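If it helps, the JVM a node is actually running on can be checked from Elasticsearch itself, assuming it is reachable on the default localhost:9200:

```
GET /_nodes/jvm?filter_path=nodes.*.jvm.version,nodes.*.jvm.vm_name,nodes.*.jvm.vm_version
```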

The JVM is using 60% of the memory.

So ideally, which JVM version do we need to use with Elasticsearch 7.0.1?

But can you share what that means in MB or GB? Not something that you compute, but something that comes from Windows itself. Maybe a screen capture?

So ideally, which JVM version do we need to use with Elasticsearch 7.0.1?

As per the Support Matrix | Elastic, an Oracle/OpenJDK 12.

Thanks for the support.

Give me half an hour; I am traveling home now.
The memory used by the JVM is around 2 GB.

That sounds correct, as you set 2 GB in the configuration file.
What is the problem then?

[screen capture of the memory usage from Windows]

This is the actual one.

JVM version: jre1.8.0_211

So you asked it to use 2 GB. It is using 2 GB.
What's wrong?

It's taking a little bit more than 2 GB, if I am not wrong.
The issue is that while Elasticsearch is running, the machine's memory usage is at 97%.

Ok, got it. If I stop running ELK, the system uses 48%.

So if I use 1 GB, can we keep this to about 70-80%?

That's inexact. It's not Elasticsearch which is consuming 97% of the memory, but Elasticsearch plus all the other processes which are running on your machine.
Proof is:

Yeah. Half of the memory resources of your machine are already used by Windows and other programs.
My guess is that this is not a production machine, right?

So if I use 1 GB, can we keep this to about 70-80%?

If you use 1 GB, then Elasticsearch will use 1 GB. If other processes use 3 GB, then you will use 100% of the RAM.
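To put numbers on it: with 4 GB of RAM, a 2 GB Elasticsearch heap plus the roughly 48% (~2 GB) that Windows and the other programs already use accounts for nearly all of the machine's memory. You can see that split from Elasticsearch itself with the node stats API (assuming the default localhost:9200):

```
GET /_nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_used_in_bytes,nodes.*.jvm.mem.heap_max_in_bytes
GET /_nodes/stats/os?filter_path=nodes.*.os.mem
```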

Yes, it's not a production env.
If I use 1 GB, then the CPU usage increases.

I think 2gb is correct.

What else do we need to set in Elasticsearch?

So if it's a dev environment, why do you care so much?

Anyway, it's always better (and a best practice) to isolate Elasticsearch from any other processes, i.e., only Elasticsearch should be running on your machine.

If you have only 1 GB, then you should have a small number of indices/shards and not a big volume of data.
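For example, on a one-node dev cluster a small step in that direction is to create indices with a single primary shard and no replicas (a replica can never be allocated on a single node anyway). A sketch using a hypothetical index name:

```
PUT /app-logs-000001
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
```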

Thanks for your support, David.

Actually, I am new to ELK and it is my responsibility to implement centralized logging in our application, so it is good practice to know about Elasticsearch's pros and cons before we move to the production env. Before we move to production, we want to check all the areas and gaps.

That's a good idea to check, indeed.
It seems like a bad idea to draw conclusions from a system which is not the target, though.

To start quickly and easily, you can also think of using our cloud.elastic.co service.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.