X-pack-security warnings after upgrading to 9.3.3

Hello - I just upgraded the node in our single-node monitoring cluster from 9.2.4 to 9.3.3 and have started receiving warnings about x-pack-security being denied read access to some internal files in the Docker container. Nothing has changed in the configuration, and this warning is completely new to us. I can't find anything in the release notes that could have caused it. The cluster and node are green and logs are coming in, so as far as I can see it is not affected in any way.

This is one of them:

[2026-04-10T11:05:57,483][WARN ][o.e.e.r.p.P.x.o.e.s.o.e.x.security] [esmon] Not entitled: component [x-pack-security], module [org.elasticsearch.security], class [class org.elasticsearch.xpack.security.PrivilegedFileWatcher], entitlement [file], operation [read], path [/usr/share/elasticsearch/config/jvm.options] org.elasticsearch.entitlement.bridge.NotEntitledException: component [x-pack-security], module [org.elasticsearch.security], class [class org.elasticsearch.xpack.security.PrivilegedFileWatcher], entitlement [file], operation [read], path [/usr/share/elasticsearch/config/jvm.options]
at org.elasticsearch.entitlement@9.3.3/org.elasticsearch.entitlement.runtime.policy.PolicyCheckerImpl.notEntitled(PolicyCheckerImpl.java:452)
at org.elasticsearch.entitlement@9.3.3/org.elasticsearch.entitlement.runtime.policy.PolicyCheckerImpl.checkFileRead(PolicyCheckerImpl.java:235)
at org.elasticsearch.entitlement@9.3.3/org.elasticsearch.entitlement.runtime.policy.PolicyCheckerImpl.checkFileRead(PolicyCheckerImpl.java:207)
at org.elasticsearch.entitlement@9.3.3/org.elasticsearch.entitlement.rules.Policies.lambda$fileRead$13(Policies.java:164)
at org.elasticsearch.entitlement@9.3.3/org.elasticsearch.entitlement.runtime.registry.InstrumentationRegistryImpl.check$(InstrumentationRegistryImpl.java:46)
at java.base/java.nio.file.Files.exists(Files.java)
at org.elasticsearch.security@9.3.3/org.elasticsearch.xpack.security.PrivilegedFileWatcher.lambda$fileExists$0(PrivilegedFileWatcher.java:49)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:74)
at org.elasticsearch.security@9.3.3/org.elasticsearch.xpack.security.PrivilegedFileWatcher.fileExists(PrivilegedFileWatcher.java:49)
at org.elasticsearch.server@9.3.3/org.elasticsearch.watcher.FileWatcher$FileObserver.checkAndNotify(FileWatcher.java:157)
at org.elasticsearch.server@9.3.3/org.elasticsearch.watcher.FileWatcher$FileObserver.updateChildren(FileWatcher.java:308)
at org.elasticsearch.server@9.3.3/org.elasticsearch.watcher.FileWatcher$FileObserver.checkAndNotify(FileWatcher.java:181)
at org.elasticsearch.server@9.3.3/org.elasticsearch.watcher.FileWatcher.doCheckAndNotify(FileWatcher.java:84)
at org.elasticsearch.server@9.3.3/org.elasticsearch.watcher.AbstractResourceWatcher.checkAndNotify(AbstractResourceWatcher.java:34)
at org.elasticsearch.server@9.3.3/org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor.run(ResourceWatcherService.java:166)
at org.elasticsearch.server@9.3.3/org.elasticsearch.threadpool.Scheduler$ReschedulingRunnable.doRun(Scheduler.java:224)
at org.elasticsearch.server@9.3.3/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1113)
at org.elasticsearch.server@9.3.3/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1090)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614)
at java.base/java.lang.Thread.run(Thread.java:1474)

This is due to some recent changes related to entitlements. It should not have any effect on your cluster running correctly. Of course, if you do see any issues, please feel free to reply here. We are looking at ways to improve the logging for this type of entitlement warning.

Thank you for the response & support!
I’ve yet to encounter any issues - so I’ll let you know if they arise!

We are also seeing these exceptions (one each for the elasticsearch.yml and jvm.options files and the jvm.options.d subdirectory), but they are logged not just once but every five seconds, completely flooding our logs. We can modify the log4j2 config to suppress them (see the sketch below), but I wonder whether the constant throwing of these exceptions should actually be resolved, rather than just suppressing the logging of the exception.
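For anyone else who wants the workaround in the meantime, here is a minimal sketch of how we quiet these WARNs, assuming they are emitted by a logger under the org.elasticsearch.entitlement.runtime.policy package (the abbreviated logger name in the WARN line suggests this; substitute the full logger name from your own logs if it differs). It raises that logger's level to ERROR dynamically via the cluster settings API, so no restart or log4j2.properties edit is needed:

PUT _cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.entitlement.runtime.policy": "ERROR"
  }
}

Setting the same key back to null later removes the override and restores the default level, which is probably what you want once the underlying entitlement issue is fixed upstream.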

Yes - this is exactly the same for us, and we are getting flooded badly. In the past hour we have a total of 74690 log entries for our cluster; after filtering out these new exceptions, only 129 remain.

Thank you for the reply. I observe the excessive logging on version 8.19.14 too.

I understand that this has no real impact on cluster health, but to be honest it creates a lot of noise and can obscure real issues.