Nofile limit for open files under a Docker container for the elasticsearch user

Hi, I read this post regarding the nofile vs nproc construct.
Let me describe my case.
I've provisioned an Elasticsearch service via Docker Compose with these ulimits:

    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65535
        hard: 65535

But inside the Docker container I see open files (-n) 1048576 rather than 65535:

elasticsearch@441ecdf403aa:~$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1029415
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
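For reference, the effective limit can also be read straight from /proc, both for the current shell and for the container's PID 1, which is the limit Elasticsearch actually inherits. A minimal sketch, assuming a Linux host (the container name below is just the one from this thread):

```shell
# Effective open-files limit of the current process:
awk '/Max open files/ {print "soft:", $4, "hard:", $5}' /proc/self/limits

# For the container's main process (run inside the container):
#   awk '/Max open files/ {print $4, $5}' /proc/1/limits
# Or from the host:
#   docker exec es_coordination_1 cat /proc/1/limits
```

This sidesteps any shell-specific ulimit output formatting and shows exactly what the kernel enforces for that process.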

From Elasticsearch's point of view it also doesn't look as expected:

"wOMZvpx7QkW1IoDB9-hv5A" : {
  "timestamp" : 1647361899514,
  "name" : "es_coordination_1",
  "transport_address" : "",
  "host" : "",
  "ip" : "",
  "roles" : [ ... ],
  "attributes" : {
    "xpack.installed" : "true",
    "transform.node" : "true"
  },
  "process" : {
    "timestamp" : 1647361899516,
    "open_file_descriptors" : 1771,
    "max_file_descriptors" : 1048576,
    ...

On the system I also have this config in /etc/security/limits.conf:

elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
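Note that /etc/security/limits.conf is applied by PAM at login, so it does not affect processes started by the Docker daemon; a container's limits come from the daemon's defaults or from the compose ulimits: key. If you want a daemon-wide default instead of per-service settings, it can be set in /etc/docker/daemon.json (a sketch; restart the daemon afterwards):

```json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 65535,
      "Hard": 65535
    }
  }
}
```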

Have you ever come across this kind of case?

Doesn't Swarm support ulimits: yet?

Already found the solution

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.