Cannot solve problem "max file descriptors [4096] for elasticsearch"

I am running Elasticsearch in a Docker container with the following command:

docker run -d --name elasticsearch -p 9200:9200 -e "bootstrap.system_call_filter=false" -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.3.0

and I am getting the following error:

[2017-04-15T01:05:17,504][INFO ][o.e.n.Node               ] [dg6FHE_] starting ...
[2017-04-15T01:05:19,002][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 53:ab:be:bf:e2:c0:97:80
[2017-04-15T01:05:19,426][INFO ][o.e.t.TransportService   ] [dg6FHE_] publish_address {xxx.xx.x.xx:9300}, bound_addresses {[::]:9300}
[2017-04-15T01:05:19,555][INFO ][o.e.b.BootstrapChecks    ] [dg6FHE_] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

I updated the /etc/security/limits.conf file on the host:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

I executed sysctl -w fs.file-max=65536 and verified the new value with cat /proc/sys/fs/file-max, but Elasticsearch still fails with the same error.

Output of sysctl -p:

# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.min_free_kbytes = 65536
fs.file-max = 65536
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_redirects = 0

For some reason this error did not occur a few hours earlier.
What else should I do to resolve this issue?

Update:

I mounted a host directory containing the following elasticsearch.yml into the Docker container:

discovery.zen.minimum_master_nodes: 1
bootstrap.system_call_filter: false
xpack.security.enabled: false

The previous error is still displayed, but only as a warning now. Elasticsearch started successfully, but I am not able to reach it on port 9200.

netstat -an:

bash-4.3$ netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 ::ffff:xxx.x.x.x:9200   :::*                    LISTEN
tcp        0      0 ::1:9200                :::*                    LISTEN
tcp        0      0 ::ffff:xxx.x.x.x:9300   :::*                    LISTEN
tcp        0      0 ::1:9300                :::*                    LISTEN
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node Path
unix  2      [ ]         STREAM     CONNECTED      94584
unix  2      [ ]         STREAM     CONNECTED      94334

Elasticsearch log:

[2017-04-15T01:55:14,698][INFO ][o.e.p.PluginsService     ] [x-FTlx2] loaded module [transport-netty4]
[2017-04-15T01:55:14,699][INFO ][o.e.p.PluginsService     ] [x-FTlx2] loaded plugin [x-pack]
[2017-04-15T01:55:32,746][INFO ][o.e.n.Node               ] initialized
[2017-04-15T01:55:32,746][INFO ][o.e.n.Node               ] [x-FTlx2] starting ...
[2017-04-15T01:55:34,594][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: f2:38:9a:10:86:b5:8b:dc
[2017-04-15T01:55:35,161][INFO ][o.e.t.TransportService   ] [x-FTlx2] publish_address {xxx.x.x.x:9300}, bound_addresses {[::1]:9300}, {xxx.x.x.x:9300}
[2017-04-15T01:55:35,348][WARN ][o.e.b.BootstrapChecks    ] [x-FTlx2] max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2017-04-15T01:55:35,807][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][2] overhead, spent [354ms] collecting in the last [1.1s]
[2017-04-15T01:55:36,842][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][3] overhead, spent [301ms] collecting in the last [1.1s]
[2017-04-15T01:55:39,873][INFO ][o.e.c.s.ClusterService   ] [x-FTlx2] new_master {x-FTlx2}{x-FTlx2SSVSv879Xgq9zTg}{2km9O-FRRhueFkvNzNkrrQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-15T01:55:40,423][INFO ][o.e.h.n.Netty4HttpServerTransport] [x-FTlx2] publish_address {xxx.x.x.x:9200}, bound_addresses {[::1]:9200}, {xxx.x.x.x:9200}
[2017-04-15T01:55:40,431][INFO ][o.e.n.Node               ] [x-FTlx2] started
[2017-04-15T01:55:42,313][INFO ][o.e.g.GatewayService     ] [x-FTlx2] recovered [0] indices into cluster_state
[2017-04-15T01:55:47,393][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][12] overhead, spent [502ms] collecting in the last [1.6s]
[2017-04-15T01:55:53,848][INFO ][o.e.l.LicenseService     ] [x-FTlx2] license [cd1d9ce1-eb90-4bf8-8480-ba40f4245656] mode [trial] - valid
[2017-04-15T01:56:04,749][INFO ][o.e.c.m.MetaDataCreateIndexService] [x-FTlx2] [.monitoring-data-2] creating index, cause [auto(bulk api)], templates [.monitoring-data-2], shards [1]/[1], mappings [logstash, _default_, node, kibana, cluster_info]
[2017-04-15T01:56:07,070][INFO ][o.e.c.m.MetaDataCreateIndexService] [x-FTlx2] [.monitoring-es-2-2017.04.15] creating index, cause [auto(bulk api)], templates [.monitoring-es-2], shards [1]/[1], mappings [node, shards, _default_, index_stats, index_recovery, cluster_state, cluster_stats, indices_stats, node_stats]
[2017-04-15T01:56:11,187][INFO ][o.e.c.m.MetaDataMappingService] [x-FTlx2] [.monitoring-es-2-2017.04.15/h6o8iTi6RbKIUCMVUqYQ7Q] update_mapping [cluster_stats]
[2017-04-15T01:56:13,192][INFO ][o.e.c.m.MetaDataMappingService] [x-FTlx2] [.monitoring-es-2-2017.04.15/h6o8iTi6RbKIUCMVUqYQ7Q] update_mapping [node_stats]
[2017-04-15T01:56:24,571][INFO ][o.e.c.m.MetaDataMappingService] [x-FTlx2] [.monitoring-es-2-2017.04.15/h6o8iTi6RbKIUCMVUqYQ7Q] update_mapping [cluster_stats]
[2017-04-15T01:56:25,330][INFO ][o.e.c.m.MetaDataMappingService] [x-FTlx2] [.monitoring-es-2-2017.04.15/h6o8iTi6RbKIUCMVUqYQ7Q] update_mapping [indices_stats]
[2017-04-15T01:56:26,378][INFO ][o.e.c.m.MetaDataMappingService] [x-FTlx2] [.monitoring-es-2-2017.04.15/h6o8iTi6RbKIUCMVUqYQ7Q] update_mapping [index_stats]
[2017-04-15T01:56:27,864][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][50] overhead, spent [479ms] collecting in the last [1.2s]
[2017-04-15T01:56:44,285][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][65] overhead, spent [482ms] collecting in the last [1.2s]
[2017-04-15T01:56:57,116][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][76] overhead, spent [417ms] collecting in the last [1.1s]
[2017-04-15T01:57:05,987][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][young][84][15] duration [720ms], collections [1]/[1s], total [720ms]/[4.7s], memory [134.1mb]->[78.6mb]/[1.9gb], all_pools {[young] [57.4mb]->[25.5kb]/[66.5mb]}{[survivor] [8.3mb]->[8.3mb]/[8.3mb]}{[old] [68.3mb]->[70.3mb]/[1.9gb]}
[2017-04-15T01:57:05,989][WARN ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][84] overhead, spent [720ms] collecting in the last [1s]
[2017-04-15T01:57:24,065][WARN ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][101] overhead, spent [502ms] collecting in the last [1s]
[2017-04-15T01:57:36,359][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][young][113][17] duration [963ms], collections [1]/[1s], total [963ms]/[6.2s], memory [128.6mb]->[74.7mb]/[1.9gb], all_pools {[young] [50.1mb]->[43.9kb]/[66.5mb]}{[survivor] [8.1mb]->[4.3mb]/[8.3mb]}{[old] [70.4mb]->[70.4mb]/[1.9gb]}
[2017-04-15T01:57:36,366][WARN ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][113] overhead, spent [963ms] collecting in the last [1s]
[2017-04-15T02:00:03,848][INFO ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][256] overhead, spent [444ms] collecting in the last [1s]
[2017-04-15T02:08:56,582][WARN ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][young][777][63] duration [1.1s], collections [1]/[1.3s], total [1.1s]/[14.8s], memory [158mb]->[101.1mb]/[1.9gb], all_pools {[young] [60.8mb]->[514kb]/[66.5mb]}{[survivor] [5.1mb]->[8.3mb]/[8.3mb]}{[old] [91.9mb]->[92.3mb]/[1.9gb]}
[2017-04-15T02:08:56,597][WARN ][o.e.m.j.JvmGcMonitorService] [x-FTlx2] [gc][777] overhead, spent [1.1s] collecting in the last [1.3s]

Update:
Elasticsearch tried to start but then stopped for some reason:

[2017-04-15T18:24:53,111][INFO ][o.e.n.Node               ] [] initializing ...
[2017-04-15T18:24:55,265][INFO ][o.e.e.NodeEnvironment    ] [ukwxwlM] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.6gb], net total_space [9.8gb], spins? [unknown], types [rootfs]
[2017-04-15T18:24:55,269][INFO ][o.e.e.NodeEnvironment    ] [ukwxwlM] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-04-15T18:24:56,636][INFO ][o.e.n.Node               ] node name [ukwxwlM] derived from node ID [ukwxwlMKS2-LDkwrp0H5Sw]; set [node.name] to override
[2017-04-15T18:24:56,647][INFO ][o.e.n.Node               ] version[5.3.0], pid[1], build[3adb13b/2017-03-23T03:31:50.652Z], OS[Linux/2.6.32-431.20.3.el6.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_92-internal/25.92-b14]
[2017-04-15T18:25:48,693][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [aggs-matrix-stats]
[2017-04-15T18:25:48,695][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [ingest-common]
[2017-04-15T18:25:48,695][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [lang-expression]
[2017-04-15T18:25:48,695][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [lang-groovy]
[2017-04-15T18:25:48,695][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [lang-mustache]
[2017-04-15T18:25:48,696][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [lang-painless]
[2017-04-15T18:25:48,696][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [percolator]
[2017-04-15T18:25:48,699][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [reindex]
[2017-04-15T18:25:48,699][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [transport-netty3]
[2017-04-15T18:25:48,699][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded module [transport-netty4]
[2017-04-15T18:25:48,700][INFO ][o.e.p.PluginsService     ] [ukwxwlM] loaded plugin [x-pack]
[2017-04-15T18:27:02,146][INFO ][o.e.n.Node               ] initialized
[2017-04-15T18:27:02,153][INFO ][o.e.n.Node               ] [ukwxwlM] starting ...
[2017-04-15T18:27:05,205][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 4b:af:ef:89:4e:ee:29:65
[2017-04-15T18:27:05,547][INFO ][o.e.t.TransportService   ] [ukwxwlM] publish_address {xxx.xx.x.xx:9300}, bound_addresses {[::]:9300}
[2017-04-15T18:27:05,563][INFO ][o.e.b.BootstrapChecks    ] [ukwxwlM] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-04-15T18:27:05,729][INFO ][o.e.n.Node               ] [ukwxwlM] stopping ...
[2017-04-15T18:27:05,831][INFO ][o.e.n.Node               ] [ukwxwlM] stopped
[2017-04-15T18:27:05,832][INFO ][o.e.n.Node               ] [ukwxwlM] closing ...
[2017-04-15T18:27:05,930][INFO ][o.e.n.Node               ] [ukwxwlM] closed

Does anybody have any ideas on how to fix this issue?

Since you are not binding only to a loopback or link-local interface, the bootstrap checks are enforced. You are most likely failing the file descriptor check.
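Note that /etc/security/limits.conf is applied by PAM at login on the host and does not carry over into containers, and fs.file-max is a system-wide total, not the per-process limit the bootstrap check reads. You can see what the process inside the container actually gets like this (a sketch, assuming the container name elasticsearch from your own docker run command):

```shell
# Print the soft and hard open-file limits as seen inside the running container.
# These are what the Elasticsearch bootstrap check compares against 65536.
docker exec elasticsearch sh -c 'echo "soft: $(ulimit -Sn)"; echo "hard: $(ulimit -Hn)"'
```

If this still prints 4096, the limit has to be raised on the container itself (Docker's ulimit settings), not via the host's limits.conf.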

Here's a quick Docker Compose example that should help you get started:

---
version: '2'
services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:$ELASTIC_VERSION
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

volumes:
  esdata1:
    driver: local
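To use it, set the ELASTIC_VERSION variable the Compose file references (for example in an .env file next to docker-compose.yml) and bring the service up. A sketch, assuming the docker-compose CLI and the 5.3.0 image from your original command:

```shell
# The Compose file above references $ELASTIC_VERSION; pin it to your version.
echo "ELASTIC_VERSION=5.3.0" > .env

# Start the service in the background and follow the logs to confirm
# the bootstrap checks now pass.
docker-compose up -d
docker-compose logs -f elasticsearch
```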

Thanks. But how can I do that if I don't use Docker Compose?

Sure, you can use plain docker run; you just need to translate the same settings into the corresponding command-line flags. The resulting command will be long, but it is entirely doable.
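For reference, a rough docker run equivalent of the Compose file above could look like this. The key part for your error is --ulimit nofile, which raises the per-process file descriptor limit inside the container; treat this as a sketch and adjust the name, volume, and version to your setup:

```shell
docker run -d --name elasticsearch \
  -p 9200:9200 \
  -e "bootstrap.memory_lock=true" \
  -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  --ulimit nofile=65536:65536 \
  --ulimit memlock=-1:-1 \
  --memory 1g \
  --cap-add IPC_LOCK \
  -v esdata1:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:5.3.0
```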

Is there any specific reason for not using Compose (or some other orchestration)? The Compose file describes the container in its running state, and probably its biggest advantage is that you can link containers together; otherwise you will have to track the right IP addresses yourself when combining containers.

PS: I think the combination of Elasticsearch and Docker creates quite a bit of confusion. I would first run Elasticsearch on its own (to become familiar with the bootstrap checks, for example) and only then combine it with Docker.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.