Filebeat Cisco ASA Module failing to parse

I have a Filebeat 7.9.3 shipper that is confirmed to be sending logs using two modules, panw and cisco (specifically the ASA fileset).

I am only seeing entries for the panw module, but I know the cisco logs are there somewhere. After some searching I found someone with an unrelated problem who was getting similar results. I borrowed their search and it returns the following:

    POST filebeat-7.9.3/_search
    {
      "size": 1,
      "sort": { "@timestamp": "desc" },
      "query": {
        "match": { "message": "failed to find message" }
      }
    }
   "message" : "Teardown TCP connection 834104538 for SLight:13.65.40.138/443 to inside:172.28.62.102/57580 duration 24:00:04 bytes 170652 TCP Reset-I",
          "error" : {
            "message" : [
              "Provided Grok expressions do not match field value: [Teardown TCP connection 834104538 for SLight:13.65.40.138/443 to inside:172.28.62.102/57580 duration 24:00:04 bytes 170652 TCP Reset-I]"

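For reference, a quick way to gauge how many documents are failing like this (assuming the failure always lands in error.message, as in the hit above) is an exists query:

    POST filebeat-7.9.3/_search
    {
      "size": 0,
      "track_total_hits": true,
      "query": {
        "exists": { "field": "error.message" }
      }
    }
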
Any advice?

Hi, do you see any log entries indicating that events are not being parsed properly?

We have the same problem: some port-related fields contain both a port and a username, which causes those documents to fail indexing.

Unfortunately no, and honestly I am not even sure where to check. The Elasticsearch log files seem to focus on cluster and system status, and I don't know where to find any logging related to indexing on the cluster side. The only way I stumbled into this was knowing that the documents were being sent and that the index was so huge (100 GB per iteration) that I assumed something was being indexed. It was only when I stumbled across the query I posted above that I found any clue to what was happening.

Where to look depends on how logging is configured in your Beat. We use Logstash to forward Beats events, so we simply look into the Logstash logs.

In the meantime we added extra processors to the ingest pipelines to rewrite the problematic fields. The downside is that we may have to repeat this every time there's an update to Beats, since an update can overwrite the pipeline.
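
A rough sketch of the kind of thing we add (the field name and pattern here are illustrative, not our actual setup). The simulate API lets you test a candidate processor before touching the installed pipeline:

    POST _ingest/pipeline/_simulate
    {
      "pipeline": {
        "processors": [
          {
            "gsub": {
              "field": "source.port",
              "pattern": "^[^/]*/",
              "replacement": "",
              "ignore_missing": true
            }
          }
        ]
      },
      "docs": [
        { "_source": { "source": { "port": "jsmith/57580" } } }
      ]
    }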

(And personally, I much prefer Logstash over Elasticsearch ingest pipelines.)

Sorry, I should have explained. I am using the built-in ASA module in Filebeat, shipping directly to Elasticsearch; I am not currently using Logstash. Originally I was, with my own custom filter; however, that prevented me from using the ILM feature, which is desperately needed due to the volume of documents being processed.

It is issues like this where open source really falls short. I am trying to present this as a viable alternative to Splunk, but so far it is failing.

I only mentioned that I'm using Logstash to clarify that the two of us have to look into different log files.

Why did Logstash prevent you from using ILM? It was a bit cumbersome in versions before 7 but now it works quite well.

Well, "open source" doesn't mean it doesn't cost anything. You might be able to use it without a license fee, but you will have to invest time in learning, and maybe in support contracts or sponsoring. Often you can replace one with the other, more or less, but not always. Don't get me wrong, I think it's a great idea to replace Splunk with the Elastic Stack, and you're very welcome to ask for help here. I just want to make clear that even when Elastic is embracing hugops and tries to make your life as easy as possible, there's still some work to do. The good thing is that you can fix things yourself with a bit of help. Most of the time you can't even do that with proprietary software.

Fully aware of what you are using, and since I am using the ASA module you should know that I am shipping to Elasticsearch. The Elasticsearch log files show nothing. I'm not going to spend tons of time on this, but if the index name does not auto-increment with the -000x suffix, then ILM rollover does not work. There are several posts about it. I had better results with Curator in version 5.
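
For the record, the bootstrap step that rollover expects looks roughly like this (index and alias names are illustrative): the first index has to end in a number and the write alias has to point at it, otherwise ILM cannot generate the next -000002 name:

    PUT filebeat-asa-000001
    {
      "aliases": {
        "filebeat-asa": { "is_write_index": true }
      }
    }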

I have heard this about open source for over twenty years now and have worked with it forever. I am not new to working with open source projects, and most are abandoned for this very reason. I have put countless hours into this and other open source projects, but the bottom line is that I have a real job and actual results are expected. I didn't make the claim about it being a replacement for Splunk... Elastic did.

Ok, back to your initial problem: you can add extra processors to the Elasticsearch ingest pipeline. So if your input doesn't match the provided patterns, you have to add extra ones (remember, they are processed sequentially). Why your log doesn't match the patterns is beyond my knowledge. It could be that you have a custom log format on your ASA, or that the patterns shipped with the Beat are broken. Since both of us have problems with it, I guess there's a problem in the provided patterns. We (meaning not me personally, but the people who worked on our side) could work around it by introducing new processors into the pipeline.
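
As an illustration (these grok patterns are made up for the teardown message from above, not the actual module definitions): a grok processor tries its patterns in order, so appending a catch-all at the end keeps the document indexable even when the specific patterns miss:

    POST _ingest/pipeline/_simulate
    {
      "pipeline": {
        "processors": [
          {
            "grok": {
              "field": "message",
              "patterns": [
                "Teardown TCP connection %{NUMBER:connection_id} for %{NOTSPACE:src} to %{NOTSPACE:dst} duration %{NOTSPACE:duration} bytes %{NUMBER:bytes} %{GREEDYDATA:reason}",
                "%{GREEDYDATA:unparsed_message}"
              ]
            }
          }
        ]
      },
      "docs": [
        {
          "_source": {
            "message": "Teardown TCP connection 834104538 for SLight:13.65.40.138/443 to inside:172.28.62.102/57580 duration 24:00:04 bytes 170652 TCP Reset-I"
          }
        }
      ]
    }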

I can only encourage you to open another thread here about ILM with Logstash if you haven't already. It does work, but it's not as easy to get started with as Curator.

I don't know what the provided "filters" are that the developers have defined as part of the predefined, built-in ASA module. I am sure I could get on Git and dig them out, but again I am running into a time-versus-value issue. I have been assured multiple times that our ASA logs use the standard ASA output format, so I am not sure what else to troubleshoot.

You don't have to dig into the code to find out. Current versions of the Elastic Stack have a GUI in Kibana that shows the processors in ingest pipelines.
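
If you prefer the API over the GUI, you can also dump the installed pipelines directly (wildcards work; the exact name depends on your Filebeat version and module):

    GET _ingest/pipeline/filebeat-7.9.3-cisco-*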

Unfortunately, my organization has given up on ELK and returned to spending tens of thousands on Splunk, due to this one issue with the ASA module.

That's a pity.