Filebeat CEF module syslog parse errors

I'm using the cef module in Filebeat to receive CEF events from an ArcSight SmartConnector.
The SmartConnector is configured for port 5001, protocol UDP, and CEF version 0.1.
On the Filebeat side I listen on port 5001 using the cef module.
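For context, the module configuration is along these lines (a sketch; the var names follow the stock cef module, the host value is an assumption):

    # modules.d/cef.yml -- sketch, values illustrative
    - module: cef
      log:
        enabled: true
        var:
          syslog_host: 0.0.0.0   # listen on all interfaces (assumption)
          syslog_port: 5001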

Some CEF events come through without issues, but for a lot of them I get this error message in the Filebeat log:
2020-02-20T10:40:11.910+0100 ERROR [syslog] syslog/input.go:243 can't parse event as syslog rfc3164 {"message": "CEF:0|Atlassian|BITBUCKET|||Read|Unknown| eventId=5415 msg=- art=1582191610033 rt=1582191600857 src= sourceZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: suid=jenkins destinationServiceName=BITBUCKET cs2=TESTCASE/ref-app-test cs4=Klona innehållet i ett repo cs5=@PT3JP7x640x10461x0 flexString1=SessionsId flexString2=t1zzr6 agt= agentZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: amac=00-50-56-B3-03-3B av= atz=Europe/Stockholm at=sdkmultifolderreader dvchost=lx532859 dvc= deviceZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: dtz=Europe/Stockholm geid=0 _cefVer=0.1 aid=3H+nSPXABABCqh6EN5f-+8A\\=\\="}

Here is another example:
2020-02-20T10:58:01.951+0100 ERROR [syslog] syslog/input.go:243 can't parse event as syslog rfc3164 {"message": "CEF:0|Corp|System|1.3.2|Read1|Read|Unknown| eventId=7600 msg=N.SYS II QueryJob Person\\=Object[ObjectName\\=SearchRequest;Query\\=Object[ObjectName\\=PersonQuery;DateOfBirth\\=7803;SearchSystem\\=[SYS;];Gender\\=Object[ObjectName\\=ReferenceData;Dictionary\\=SYS_1;Category\\=GENDER;Code\\=M;];FamilyNames\\=TEST;AlertRequestingUser\\=[];Nationality\\=Object[ObjectName\\=ReferenceData;Dictionary\\=ISO_3166_1_ALPHA_3;Category\\=COUNTRY;Code\\=SWE;];];]; categorySignificance=/Normal categoryBehavior=/Access categoryDeviceGroup=/Application categoryOutcome=/Success categoryObject=/Host/Application/Service art=1582192681204 act=FV2 rt=1582192681012 suid=002021002163 spriv=1 dst= destinationZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: dmac=00-50-56-B3-85-A9 destinationServiceName=SYSTEM filePermission=1 cs1=ALERT cs2=No id found. cs3=TrackingId cs5=54844ff0-1111-2222-1111-48529f14007e flexNumber2=6981 deviceCustomDate1=1582192681013 cs1Label=Objekttyp cs2Label=ObjektID cs3Label=Objektidtyp cs4Label=Info cs5Label=LogMapID cs6Label=Reserv5 flexNumber2Label=LopNummer agt= agentZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: amac=00-55-56-B3-55-3B av= atz=Europe/Stockholm at=file dvchost=serverino dvc= deviceZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918: deviceNtDomain=F dtz=Europe/Stockholm deviceProcessName=MODE geid=0 _cefVer=0.1 aid=3H+nSPXABABCqh6EN5f-+8A\\=\\="}

I've seen that before here: Filebeat CEF Module. You'd need to change your input from syslog to udp; the file you need to change is module/cef/log/config/input.yml under /usr/share/filebeat/... .
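For reference, a minimal sketch of what that change in input.yml could look like (the stock file is templated and varies by Filebeat version, so treat this as illustrative only):

    # module/cef/log/config/input.yml -- sketch
    type: udp        # was: syslog -- bypasses the RFC3164 syslog parser
    host: "{{.syslog_host}}:{{.syslog_port}}"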

Ok, that seemed to work. It would be nice to know why the issue arises and whether it's fixable in another way. Does that mean I could change it to TCP instead? (I'd prefer TCP over UDP in this case.)

There are already some feature requests open for this module; maybe this could help.

Ok, actually the issue was not solved properly. I just got another batch of errors, this time in Logstash. Now it seems to be some kind of parsing/mapping issue:

Feb 24 16:05:07 logstash[32367]: [2020-02-24T16:05:07,123][WARN ][logstash.outputs.elasticsearch][beats] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.5.2-cef-2020.02.23", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x50ed2a95>], :response=>{"index"=>{"_index"=>"filebeat-7.5.2-cef-2020.02.23", "_type"=>"_doc", "_id"=>"leC6d3ABlGu3p_9D-aEW", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [] tried to parse field [group] as object, but found a concrete value"}}}}

This looks like your mapping is bad (object vs. concrete value).

I am using the cef module in Filebeat, so I am not sure how to fix that issue. I updated Filebeat to version 7.6, but that did not solve it.

Your current index mapping was created dynamically. You might have changed something (while troubleshooting), and now Filebeat tries to ingest a concrete value into a field that already exists as an object field (look at the JSON structure). You could stop the Beat, change the index name (in the output), and see whether Filebeat can ingest data into a new index. It will create a new mapping for that new index; if that doesn't work, your field extraction is off somewhere.
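A sketch of that test in the Logstash elasticsearch output (host and index name here are just examples):

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        # point at a fresh index so Elasticsearch builds a new dynamic mapping
        index => "filebeat-cef-test-%{+YYYY.MM.dd}"
      }
    }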

The index that you're currently trying to ingest your data into has a bad mapping for the [] field.

I hear what you say, but I am not really sure where to fix this. I extracted the index template from Filebeat once again and recreated it, and I changed Logstash to write to another index (which gets its settings from the new index template), but the error still shows up in the logs with the newly created index. Is the index template where this kind of issue should be fixed?

This is the part of the index template that the error leads to:

"group": {
  "properties": {
    "domain": {
      "ignore_above": 1024,
      "type": "keyword"
    },
    "name": {
      "ignore_above": 1024,
      "type": "keyword"
    },
    "id": {
      "ignore_above": 1024,
      "type": "keyword"
    }
  }
}

I'm assuming this is just a part of the JSON file, and that there is source.user above group. If that's the case, you can easily see that group is an object: you'll have group.domain, group.name, and group.id as its fields.

Your ingest is trying to push a concrete value, such as text, a keyword, or a number, into [], and it can't, since that field is an object. What index template and mapping are you using for it? Did you change the name of a field somewhere, add a field, etc.?
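To illustrate the conflict with a hypothetical example: once Elasticsearch has dynamically mapped group as an object from a document like

    { "group": { "name": "admins", "id": "42" } }

any later document that sends group as a concrete value,

    { "group": "admins" }

is rejected with a mapper_parsing_exception, because the same field cannot be both an object and a concrete value within one mapping.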

And finally, would you be able to share a log example?

Hi, a late reply, but there are two log examples in the first two posts. They are inside the error messages. It's standard ArcSight CEF, so nothing special there, and we're using the cef module in Filebeat, so everything should just work, I guess. I have not set up my own index template; Filebeat created one by itself on the first run.


I'm using the standard filebeat-7.6.0 index template. That should be able to handle the cef module from Beats, I think?
I don't understand why the same log can be indexed differently, since some logs keep coming through.

Edit: I managed to fix the problem by changing the index template. But since this is all built into Filebeat's standard index templates, it feels like a bug or something. As soon as I had fixed this, all events kept streaming in, but I also got a few new errors, similar of course but in another part of the mapping: "object mapping for [] tried to parse field [group] as object, but found a concrete value"
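In case anyone else hits this: the kind of template change I mean is forcing the conflicting field into one consistent form. Assuming your events send group as a concrete value (as the error message suggests), the object mapping above could be replaced with a plain keyword mapping, roughly:

    "group": {
      "ignore_above": 1024,
      "type": "keyword"
    }

Which form is right depends on which shape your events actually send; picking the wrong one just inverts the error.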