Hi,
I am using a machine with 26 GB of RAM, and I have allocated 8 GB as the Elasticsearch heap size.
I have some data in MongoDB, and I am using the bulk API to move it from MongoDB into Elasticsearch.
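For context, this is roughly how I am framing the bulk requests: the bulk API takes an NDJSON body with one action line followed by one source line per document. A minimal sketch (the index name `my_index` is just a placeholder, and `_type` is set to `_doc` as is typical on 6.x):

```python
import json

def bulk_index_body(docs, index="my_index"):
    """Frame documents as an NDJSON bulk body: one action line,
    then one source line, per document. Index name is hypothetical."""
    lines = []
    for doc in docs:
        doc = dict(doc)
        _id = str(doc.pop("_id", ""))  # reuse the Mongo _id if present
        action = {"index": {"_index": index, "_type": "_doc"}}
        if _id:
            action["index"]["_id"] = _id
        lines.append(json.dumps(action))
        lines.append(json.dumps(doc))
    # the bulk endpoint requires the body to end with a newline
    return "\n".join(lines) + "\n"

body = bulk_index_body([{"_id": 1, "name": "a"}, {"_id": 2, "name": "b"}])
print(body)
```

The resulting body is what gets POSTed to `/_bulk` in batches.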
When I do this operation, I see that Elasticsearch uses up most of the RAM. The cluster stats show:
```
"nodes": {
  "count": {
    "total": 1,
    "data": 1,
    "coordinating_only": 0,
    "master": 1,
    "ingest": 1
  },
  "versions": [
    "6.2.4"
  ],
  "os": {
    "available_processors": 4,
    "allocated_processors": 4,
    "names": [
      {
        "name": "Linux",
        "count": 1
      }
    ],
    "mem": {
      "total": "25.5gb",
      "total_in_bytes": 27389636608,
      "free": "5.6gb",
      "free_in_bytes": 6097088512,
      "used": "19.8gb",
      "used_in_bytes": 21292548096,
      "free_percent": 22,
      "used_percent": 78
    }
  },
  "process": {
    "cpu": {
      "percent": 0
    },
    "open_file_descriptors": {
      "min": 263,
      "max": 263,
      "avg": 263
    }
  },
  "jvm": {
    "max_uptime": "15h",
    "max_uptime_in_millis": 54185581,
    "versions": [
      {
        "version": "1.8.0_151",
        "vm_name": "OpenJDK 64-Bit Server VM",
        "vm_version": "25.151-b12",
        "vm_vendor": "Oracle Corporation",
        "count": 1
      }
    ],
    "mem": {
      "heap_used": "2.4gb",
      "heap_used_in_bytes": 2661901360,
      "heap_max": "7.9gb",
      "heap_max_in_bytes": 8555069440
    },
    "threads": 56
  },
  "fs": {
    "total": "145.3gb",
    "total_in_bytes": 156067389440,
    "free": "134.8gb",
    "free_in_bytes": 144787374080,
    "available": "134.8gb",
    "available_in_bytes": 144770596864
  },
```
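If I read the stats right, the `os.mem` block describes the whole machine rather than just the JVM (the heap is only at 2.4 GB of 7.9 GB). A quick sanity check of the reported numbers, with the byte values copied from the output above:

```python
# Byte values copied from the os.mem block of the cluster stats above
total = 27389636608  # "total": "25.5gb"
used = 21292548096   # "used": "19.8gb"
free = 6097088512    # "free": "5.6gb"

# The three figures are internally consistent
assert total == used + free

# And the percentage matches the reported "used_percent": 78
used_percent = round(100 * used / total)
print(used_percent)  # → 78
```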
And I don't see the RAM being freed at all.
Is there any configuration I am missing?
I have set `bootstrap.mlockall: true` and `LimitMEMLOCK=infinity`.
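In case it matters, this is roughly how I have the memory-lock settings configured. (As I understand it, on 5.x+ the yml setting is spelled `bootstrap.memory_lock`; `bootstrap.mlockall` is the pre-5.0 name.)

```yaml
# elasticsearch.yml — 5.x+ spelling of the memory-lock setting
bootstrap.memory_lock: true
```

The `LimitMEMLOCK=infinity` part lives in the systemd unit (e.g. a `[Service]` override for `elasticsearch.service`), so the process is allowed to lock the heap.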
I was hoping the RAM would be freed once the bulk operation completed, but that's not the case here.
Please help with this.