I am sure this must be obvious, but I am missing it.
I have cranked the number of open files (nofile) in /etc/security/limits.conf up to
256000 for both the soft and hard limits, and Elasticsearch confirms on startup that
nofile=256000.
After an hour or so, one of the ES servers starts reporting too many open
files.
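For reference, the limits.conf entries in question are of this form (shown with a wildcard user; a specific account, e.g. whichever user runs Elasticsearch, works just as well):

*    soft    nofile    256000
*    hard    nofile    256000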
This is the setup:
6 Servers
4 Servers running ES with data
2 Servers running ES with no data (just routers)
There are two indexes:
Index 1 - 10 Shards, 1 Replica (i.e. each primary shard plus one replica copy) - 15GB index (30GB including replicas)
Index 2 - 1 Shard, 0 Replicas - 525MB
I have checked the number of open files for ES and it's only ever around 50-200, so we
should be OK. I have been cranking up the nofile setting for the past few weeks, but it
makes no difference.
Maybe you should check with:
cat /proc/<es pid>/limits | grep files
Setting the limit in limits.conf alone is not always enough; check what the running process actually got.
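Spelled out a bit more (a rough sketch; it assumes a single Elasticsearch java process that pgrep can find by name, so adjust the pattern to your install):

# find the Elasticsearch PID
ES_PID=$(pgrep -f elasticsearch | head -n 1)
# the limit the kernel is actually enforcing for that process
grep 'open files' /proc/$ES_PID/limits
# how many descriptors the process really has open right now
ls /proc/$ES_PID/fd | wc -l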
Sent from my Windows Phone
From: Marietta
Sent: 25/08/2012 2:12 AM
To: elasticsearch@googlegroups.com
Subject: Too many open files but nofile set to 256000
Can you double-check that the max open files setting is actually applied, using the nodes info API (with the process flag set)? If it is set, can you gist the lsof output?
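For example, something along these lines (a sketch, assuming the node listens on the default localhost:9200 and substituting the real process id for <es pid>):

# confirm the limit the node thinks it has
curl 'localhost:9200/_nodes?process&pretty'
# capture what is actually open, for the gist
lsof -p <es pid> > lsof-output.txt
wc -l lsof-output.txt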
Just looking at my deploy scripts, I have the following in /etc/sysctl.d/60-maxfiles.conf:
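Typically a file like that just raises the kernel-wide descriptor ceiling, along these lines (the value below is only an illustration, not the actual file):

fs.file-max = 500000

fs.file-max is the system-wide ceiling, while the nofile entry in limits.conf is the per-process limit, so both need to be high enough.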
For anyone with a problem like this, it may be worth confirming the numbers from within
Elasticsearch as well, using the nodes info API:
/_nodes?process
gives the max_file_descriptors, for example:
{
  refresh_interval: 1000,
  id: 13919,
  max_file_descriptors: 25000
}
and
/_nodes/process/stats
gives open_file_descriptors in its output, for example:
open_file_descriptors: 516
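A quick way to pull just those two numbers from the command line (a sketch, again assuming the default localhost:9200):

curl -s 'localhost:9200/_nodes?process&pretty' | grep max_file_descriptors
curl -s 'localhost:9200/_nodes/process/stats?pretty' | grep open_file_descriptors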
On Thursday, 15 November 2012 11:12:43 UTC, mohsin husen wrote:
Hello Marietta,
Did you get any resolution?