Elasticsearch Java Security Manager: somaxconn read permissions

Hi,

I am running into a Java Security Manager issue when running Elasticsearch 5.6.4 from the official Docker image with the log level set to DEBUG.

Elasticsearch throws an exception during start:

  Dec 06 09:14:53 server docker[14579]: [2017-12-06T09:14:53,671][DEBUG][i.n.u.NetUtil            ] Failed to get SOMAXCONN from sysctl and file /proc/sys/net/core/somaxconn. Default: 128
  Dec 06 09:14:53 server docker[14579]: java.security.AccessControlException: access denied ("java.io.FilePermission" "/proc/sys/net/core/somaxconn" "read")
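
The logger in that message is Netty's NetUtil, which resolves the value once at class load and caches it. To see what Netty ended up with, a quick check like the following works (a minimal sketch; it assumes the Netty jar that ships with Elasticsearch is on the classpath, and the class name is made up):

  import io.netty.util.NetUtil;

  public class SomaxconnCheck {
      public static void main(String[] args) {
          // NetUtil.SOMAXCONN holds whatever Netty resolved at class-load time:
          // the contents of /proc/sys/net/core/somaxconn if it was readable,
          // otherwise the 128 fallback from the DEBUG message above.
          System.out.println("Netty SOMAXCONN: " + NetUtil.SOMAXCONN);
      }
  }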

I also saw this closed issue related to exactly this problem:

It just says that the Security Manager has no permission to read that file. Is there a simple solution to "fix" this?

That’s surprising. Let me try to reproduce this and get back to you.

I can reproduce this; it requires X-Pack to be installed. I will open an internal PR to fix this, and the fix will be in 5.6.6.

The internal PR to fix this is opened. Thanks for the report.
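
For anyone curious what such a fix looks like, the grant the stack trace is asking for would resemble this Java policy entry (a sketch only; the actual entry in the Elasticsearch/X-Pack policy files may be scoped differently):

  grant {
      // Hypothetical illustration of the permission named in the
      // AccessControlException above; the real fix may restrict this
      // grant to a specific codebase instead of granting it globally.
      permission java.io.FilePermission "/proc/sys/net/core/somaxconn", "read";
  };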

Thank you for the fast reply. The error message says that if somaxconn cannot be read, the fallback of 128 is used.

With that in mind, a question comes up.

In my scenario there is a cluster with 1:1 relationships between Kibana, Logstash, and Elasticsearch:
1 Logstash with 16 output workers -> 1 Elasticsearch node
1 Kibana -> 1 Elasticsearch node

I assume this means that there are 128 default somaxconn minus 16 output workers = 112 available connections for Kibana.

Will there be an issue with unavailable Elasticsearch connections when more than 112 users are logged in to Kibana?

That is not what somaxconn means; it is not the maximum number of connections. Instead, it (roughly) refers to the maximum queue length, on a socket in the listen state, for completely established connections that are waiting to be accepted. It only needs to be increased if you have a listening socket that frequently sees huge bursts of new connections. This can be the case in a large cluster (e.g., in a 100-node cluster at startup, as the nodes all try to pairwise form 13 connections with each other).
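
To make the "backlog, not a connection limit" distinction concrete, here is a minimal Java sketch (the class name is made up): the backlog argument only sizes the queue of fully established connections waiting to be accepted, and the kernel caps its effective value at net.core.somaxconn.

  import java.net.ServerSocket;

  public class BacklogDemo {
      public static void main(String[] args) throws Exception {
          // The second argument is the requested backlog: how many completed
          // TCP handshakes may queue up before the application accepts them.
          // It is not a cap on concurrent connections; once accepted, a
          // connection no longer counts against it. The kernel caps the
          // effective value at net.core.somaxconn, which is why Netty tries
          // to read /proc/sys/net/core/somaxconn and falls back to 128.
          try (ServerSocket server = new ServerSocket(0, 128)) {
              System.out.println("Listening on port " + server.getLocalPort()
                  + " with a requested backlog of 128");
          }
      }
  }

So your 16 Logstash workers and the Kibana sessions do not occupy somaxconn slots once their connections are accepted; the value only matters for bursts of new, not-yet-accepted connections.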


Thanks for the clarification. Glad to hear we are on the safe side.

You are welcome. Thanks again for the report.