I'm using the CEF module in Filebeat to receive CEF events from an ArcSight SmartConnector.
The SmartConnector is configured for port 5001, protocol UDP, and CEF version 0.1.
On the Filebeat side I listen on port 5001 using the CEF module.
Some CEF events come through without issues, but for many of them I get this error message in the Filebeat log: 2020-02-20T10:40:11.910+0100 ERROR [syslog] syslog/input.go:243 can't parse event as syslog rfc3164 {"message": "CEF:0|Atlassian|BITBUCKET|||Read|Unknown| eventId=5415 msg=- art=1582191610033 rt=1582191600857 src=192.168.128.65 sourceZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: 192.168.0.0-192.168.255.255 suid=jenkins destinationServiceName=BITBUCKET cs2=TESTCASE/ref-app-test cs4=Clone the contents of a repo cs5=@PT3JP7x640x10461x0 flexString1=SessionsId flexString2=t1zzr6 ahost=server.utv.corp.se agt=172.17.1.101 agentZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: 172.16.0.0-172.31.255.255 amac=00-50-56-B3-03-3B av=7.13.0.8178.0 atz=Europe/Stockholm at=sdkmultifolderreader dvchost=lx532859 dvc=172.16.1.1 deviceZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: 172.16.0.0-172.31.255.255 dtz=Europe/Stockholm geid=0 _cefVer=0.1 aid=3H+nSPXABABCqh6EN5f-+8A\\=\\="}
I've seen that before here: Filebeat CEF Module. You'd need to change your input from syslog to udp; the file you need to change is module/cef/log/config/input.yml under /usr/share/filebeat/... .
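For reference, a minimal sketch of what the edited input.yml could look like after switching from the syslog input to the plain udp input. The host/port values here are illustrative; the stock file uses template variables (the syslog_host/syslog_port module settings), so keep whatever your version has:

# module/cef/log/config/input.yml (sketch, values illustrative)
# The stock syslog input tries to parse each incoming line as
# RFC 3164 syslog before the CEF decoding runs, which is what
# produces the "can't parse event as syslog rfc3164" error.
# A plain udp input hands the raw CEF line to the pipeline as-is.
type: udp
host: "0.0.0.0:5001"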
OK, that seemed to work. It would be nice to know why the issue arises and whether it's fixable in another way. Does that mean I could change it to TCP instead? (I'd prefer TCP over UDP in this case.)
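If the SmartConnector were reconfigured to send over TCP, the same file could presumably use Filebeat's tcp input instead; a sketch under that assumption, again with an illustrative host/port:

# module/cef/log/config/input.yml (TCP variant, sketch)
type: tcp
host: "0.0.0.0:5001"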
OK, actually the issue was not solved properly. I just got another bunch of errors, this time in Logstash. Now it seems to be some kind of parsing/mapping issue:
Feb 24 16:05:07 lx229738.utv.m.se logstash[32367]: [2020-02-24T16:05:07,123][WARN ][logstash.outputs.elasticsearch][beats] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.5.2-cef-2020.02.23", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x50ed2a95>], :response=>{"index"=>{"_index"=>"filebeat-7.5.2-cef-2020.02.23", "_type"=>"_doc", "_id"=>"leC6d3ABlGu3p_9D-aEW", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [source.user.group] tried to parse field [group] as object, but found a concrete value"}}}}
Your current index mapping was created dynamically. You might have changed something (while troubleshooting), and now Filebeat is trying to ingest a concrete value into a field that already exists as an object field (look at the JSON structure). You could stop the Beat, change the index name (in the output), and see whether Filebeat can ingest data into a new index. It will create a new mapping for that new index; if that doesn't work, your field extraction is off somewhere.
The index that you're currently trying to ingest your data into has a bad mapping for the [source.user.group] field.
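A sketch of what changing the index name in the Logstash output might look like. The index name here is made up, and the hosts value should match your cluster:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Point at a fresh index so Elasticsearch builds a new dynamic mapping
    index => "filebeat-cef-test-%{+YYYY.MM.dd}"
  }
}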
I hear what you say, but I'm not really sure where to fix this. I extracted the index template from Filebeat once again and recreated it (Load the Elasticsearch index template | Filebeat Reference [8.11] | Elastic). I changed Logstash to write to another index (which gets its settings from the new index template), but the error still appears in the logs with the newly created index. Is the index template where this kind of issue should be fixed?
This is the part of the index template that leads to source.user.group:
I'm assuming this is just part of the JSON file and that there is source.user above group. If that's the case, you can easily see that group is an object: it has group.domain, group.name, and group.id underneath.
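For illustration, the relevant fragment of an ECS-style mapping would look roughly like this (a sketch based on the field names above, not copied from your template). Because group only declares sub-fields under properties, Elasticsearch treats it as an object, so any document carrying a plain string at source.user.group is rejected:

{
  "source": {
    "properties": {
      "user": {
        "properties": {
          "group": {
            "properties": {
              "domain": { "type": "keyword" },
              "id":     { "type": "keyword" },
              "name":   { "type": "keyword" }
            }
          }
        }
      }
    }
  }
}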
Your ingest is trying to push a concrete value, such as text, a keyword, or a number, into [source.user.group], and it can't, since the field is an object. What index template/mapping are you using for it? Did you change a field name somewhere, add a field, etc.?
And finally, would you be able to share a log example?
Hi, a late reply, but there are two log examples in the first two posts; they are inside the error messages. It's standard ArcSight CEF, so nothing special there, and we're using the CEF module in Filebeat, so everything should work, I guess. I have not set up my own index template; Filebeat configured that by itself on the first run.
I'm using the standard filebeat-7.6.0 index template. That should be able to handle the CEF module from Beats, I think?
I don't understand why the same log can be indexed differently, since some logs keep coming through.
Edit: I managed to fix the problem by changing the index template. But since this is all built into the standard Filebeat index templates, it feels like a bug or something. As soon as I had fixed this, all events kept streaming in, but I also got a few new errors, similar of course but in another part of the mapping: "object mapping for [destination.user.group] tried to parse field [group] as object, but found a concrete value"
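For anyone hitting the same conflict: an alternative to editing the template (a sketch, not what was done above) is to move the concrete value out of the object field with a Filebeat rename processor before the event is shipped. The target field name group_name is made up here:

# filebeat.yml (sketch): rename the concrete string so it no longer
# collides with the source.user.group / destination.user.group objects
processors:
  - rename:
      fields:
        - from: "source.user.group"
          to: "source.user.group_name"
        - from: "destination.user.group"
          to: "destination.user.group_name"
      ignore_missing: true
      fail_on_error: false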