Splunk vs. Elastic: Lookup Tables

Good Morning,

I have been working with a customer for a while, and they have chosen to migrate from Splunk to Elastic. One sticking point has been lookup tables. They had very specific lookup tables that gave them a high level of success and were efficient and easy for them to use. At the time they were using Splunk and Endgame, and thanks to these lookup tables they were detecting most incidents with equal success on both platforms.

At this point they are still having the same level of success with their Endgame appliance, but we have been unable to replicate the successes they were having in Splunk with Elastic, Elastic Security, and Kibana. I have spent a lot of time standing up the out-of-the-box SIEM detections and trying to bring the out-of-the-box ML jobs online, but they are not even close to the performance we are seeing from the Endgame appliance or saw with Splunk.

At this point we have been able to roughly translate the Splunk queries to Elastic (with varying degrees of success) and then build dashboards around them. Unfortunately, we have not been able to replicate the success that was seen using Splunk.

My question: has anyone tried to create lookup tables within Elastic? I can think of a few possibilities that might work using Logstash, but I am interested to know whether anyone has run into a similar situation and what (if anything) you were able to do about it.

Thanks,
Alex


Can you provide an example of what you're trying to do? This is probably the closest you will get: Set up an enrich processor | Elasticsearch Guide [8.2] | Elastic.
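In case it helps, here is a bare-bones sketch of that approach in Dev Tools. All of the index, policy, and field names below are made up for illustration, so treat it as a rough outline rather than something to paste in as-is:

# 1) Put the lookup data in a small index (this is the "lookup table")
PUT asset-lookup/_doc/1
{
  "host_name": "WIN-FIN-01",
  "owner": "finance",
  "criticality": "high"
}

# 2) Define an enrich policy over that index and execute it
PUT _enrich/policy/asset-lookup-policy
{
  "match": {
    "indices": "asset-lookup",
    "match_field": "host_name",
    "enrich_fields": ["owner", "criticality"]
  }
}

POST _enrich/policy/asset-lookup-policy/_execute

# 3) Reference the policy from an ingest pipeline that does the lookup at index time
PUT _ingest/pipeline/asset-enrich
{
  "processors": [
    {
      "enrich": {
        "policy_name": "asset-lookup-policy",
        "field": "host.name",
        "target_field": "asset_info",
        "ignore_missing": true
      }
    }
  ]
}

From there the pipeline can be set as the default_pipeline on the target indices or referenced from the pipeline option on the Logstash elasticsearch output. The one gotcha I'm aware of is that the policy has to be re-executed whenever the lookup data changes.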


They translated the following query from Splunk to Elastic:

event.category:process and event.type:(start or process_started) and process.executable:(*RECYCLER* or *SystemVolumeInformation* or *\\Tasks\\* or *\\debug\\* or *\:\\Temp\\* or *\\Downloads\\* or *\\Desktop\\* or *\\Documents\\* or *\:\\T3mp\\*) OR (file.path: *\\Downloads\\*.zip or *\\Downloads\\*.exe AND event.code: 15)

In Splunk, whenever this query returned results, I believe a results index was populated. The analytics and data pulls happened overnight, as I understand it. The analysts would come in the next morning and take a look at that results index; if it had entries, they knew the data was worth a closer look.

I think we are getting ready to head down the same path you mentioned above with the enrich processors. I was curious whether anyone had production experience with them. As we are set up right now, I am a little concerned about our Logstash instances from a performance perspective.

We have roughly 5,000 endpoints sending data to our stack, and we have six Logstash instances running fairly simple filters on the data right now to drop Windows events that we don't care about.

Do you have any idea what kind of performance hit we would take at each of the Logstash instances by running this query against all of our incoming data? In a perfect world we would do just that and then add a new boolean field that the analysts could search on to get to the data they care about more quickly.
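If it would help answer that, one way I can think of to put numbers on it is the Logstash monitoring API, which exposes per-pipeline and per-filter event counts and timings, so we could baseline before and after adding heavier conditionals. Something like this, assuming the default API port (the field names in the comments are from memory):

# Default Logstash API port is 9600; adjust host/port to your setup
GET http://localhost:9600/_node/stats/pipelines?pretty
# Compare pipelines.<name>.events.duration_in_millis against events.out for overall cost,
# and each entry under plugins.filters for per-filter event counts and timings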

The problem is that this is only one use case, and we have a dozen to twenty other queries that would need to run in order to replicate the capabilities they had in Splunk. To me this seems like it would be asking too much of our Logstash instances in their current configuration. I'm interested to hear from anyone with or without experience in these areas.

Thanks

It looks like I jumped the gun a little. I didn't even think about running the processors on the data within the stack itself. I think that is an interesting angle, and I am curious about the processing requirements there as well.
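To make sure I'm picturing the in-stack version correctly, I'm imagining something like a set processor with a Painless condition. The pipeline and field names below are placeholders, and the condition only covers a slice of the original query, so this is just a sketch of the pattern:

# Hypothetical pipeline: adds a boolean flag when the executable path looks suspicious
PUT _ingest/pipeline/suspicious-path-flag
{
  "description": "Illustrative only; covers part of the translated query above",
  "processors": [
    {
      "set": {
        "field": "custom.suspicious_path",
        "value": true,
        "if": "ctx?.process?.executable != null && (ctx.process.executable.toLowerCase().contains('recycler') || ctx.process.executable.toLowerCase().contains('\\\\downloads\\\\') || ctx.process.executable.toLowerCase().contains('\\\\tasks\\\\'))"
      }
    }
  ]
}

# Quick test without indexing anything
POST _ingest/pipeline/suspicious-path-flag/_simulate
{
  "docs": [
    { "_source": { "process": { "executable": "C:\\RECYCLER\\evil.exe" } } }
  ]
}

My understanding is that this moves the per-event cost from Logstash onto the Elasticsearch ingest nodes rather than eliminating it, so we would still want to measure it there (GET _nodes/stats/ingest shows per-pipeline and per-processor timings), but it keeps the Logstash filters simple.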

I'm still a little confused. So you have Windows events being ingested into Elastic, and you want to know whether a specific event matches that query? Why not use Detections and alerts | Elastic Security Solution [8.2] | Elastic to create an alert, or use an ingest processor/Logstash to just add the field if the event fields match what you're looking for?
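For the detection route, rules can also be created through the API rather than the UI if that fits your workflow better. Roughly (untested, with the index pattern, schedule, and scoring made up, and the query shortened for readability), something like:

# Create a scheduled query rule through the Kibana detection engine API
# (requires the kbn-xsrf: true header; <kibana-url> is a placeholder)
POST <kibana-url>/api/detection_engine/rules
{
  "rule_id": "suspicious-path-process-start",
  "name": "Process start from suspicious path",
  "description": "Translated from the Splunk lookup-driven search",
  "type": "query",
  "language": "kuery",
  "index": ["winlogbeat-*", "logs-*"],
  "query": "event.category:process and event.type:(start or process_started) and process.executable:(*RECYCLER* or *SystemVolumeInformation*)",
  "interval": "1h",
  "from": "now-65m",
  "risk_score": 47,
  "severity": "medium",
  "enabled": true
}
# The full translated query from earlier in the thread would go in the "query" field.

Each run that matches writes alerts into the security alerts index, which your analysts could review every morning, much like the Splunk results index you described.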
