Filter curation, KQL, dashboards, codegen: am I doing something stupid?

Hi,

I get the feeling that I am doing something silly here, so I'm hoping someone who's been using ELK longer than I have might have some insight.

I have a deployment of LME from NCSC, which is an ELK stack fed by Sysmon on endpoints, with dashboards reporting security-related events.

The aim as I understand it, which could be fundamentally flawed, is to remove 'normal' from the dashboard so that you only see anomalous events.

To that end, I was initially tapping '-' next to, for example, DNS requests, which creates a 'NOT field: value' filter.

After adding rather a lot of these, the entire first page of Kibana became filters. So I refactored them into a single filter and tried to learn KQL. I now have one filter for 'DNS normal' which is negated, and it's effectively one massive boolean OR.
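For concreteness, the negated filter has roughly this shape (the field name `dns.question.name` and the hostnames here are illustrative placeholders, not my real data):

```
NOT dns.question.name : ("update.example.com" or "telemetry.example.net" or "corp.internal")
```

Every newly blessed hostname means splicing another `or "..."` into that one long line.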

Now, operationally, adding more 'normal' events to that filter from the dashboard is COSTLY, for a few reasons. The DSL editor is tiny and doesn't appear to be easily resizable, and I also want to reuse these filters across dashboards. On top of that the syntax is pretty hard to scan (maybe that's just my failing), so when hand-editing this massive KQL query it's not at all clear where I'm dropping quotes.

I did see that filters based on saved queries (which seem like the way to go) are a work in progress, which is good to hear, but here's what I'm doing for now.

I wrote a bash script (yes, I should probably use Python, but I'm going for the lowest-common-installed tool in the team) which:

  • parses the output copied directly from the Kibana dashboard

  • adds the DNS query to a text file in git, named 'normal dns', to persist state

  • sorts and uniques that file, effectively updating the 'state of acceptable dns queries'

  • generates the boolean KQL query on stdout, for copy-and-paste back into Kibana
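As a sketch, the script boils down to something like this (the state file name, the `dns.question.name` field, and the function names are my own illustrative choices, not necessarily what the real script uses):

```shell
#!/bin/sh
# Sketch of the state-file + codegen workflow described above.

# update_state FILE: append hostnames (one per line, pasted from the
# Kibana dashboard) from stdin to FILE, then sort and de-duplicate
# FILE in place — FILE is the 'state of acceptable DNS queries'.
update_state() {
  cat >> "$1"
  sort -u -o "$1" "$1"
}

# build_kql FILE: print a negated KQL query covering every hostname in
# FILE, ready to paste back into the Kibana filter editor.
build_kql() {
  awk 'BEGIN { printf "NOT dns.question.name : (" }
       { printf "%s\"%s\"", (NR > 1 ? " or " : ""), $0 }
       END { print ")" }' "$1"
}
```

Usage would look like `printf 'a.corp\nb.corp\n' | update_state normal_dns.txt` followed by `build_kql normal_dns.txt`; keeping the state file in git and regenerating the query from it is what makes the filter reproducible.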

This seems like a crazy thing to have to do. Have I missed something obvious?

Thanks for your time reading.

Maybe I don't have the full context here, but does this just pick out a hostname from the DNS values to hide it? I'm not sure that's the best way to find anomalous DNS queries. I would look for a way to pre-process incoming data by scanning for anomalous DNS entries and adding a field that gets indexed along with the original data. You can use tools such as Logstash or an Elasticsearch Ingest Node to do this.
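To make that concrete, an ingest pipeline along these lines could tag documents whose DNS query is on a known-normal list (the pipeline name, the `dns.normal` field, and the hostname list are illustrative, not part of LME):

```
PUT _ingest/pipeline/tag-normal-dns
{
  "description": "Flag DNS events whose query is on a known-normal list (illustrative)",
  "processors": [
    {
      "set": {
        "if": "ctx.dns?.question?.name != null && ['update.example.com', 'corp.internal'].contains(ctx.dns.question.name)",
        "field": "dns.normal",
        "value": true
      }
    }
  ]
}
```

Dashboards can then filter on `NOT dns.normal : true`, which stays one short filter no matter how long the allow-list grows, instead of a hand-maintained boolean OR in every dashboard.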

Broadly speaking, if you need to find anomalies in real time, you should look into using Elastic machine learning: https://www.elastic.co/what-is/elasticsearch-machine-learning

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.