All our syslogs are pushed over UDP, not TCP, so I don't think that will work as expected, since I won't be able to see any syslogs coming through TCP. Or am I wrong?
In that case, try the udp plugin instead:
udp {
  codec => cef { delimiter => "\r\n" }
  port => 3014
}
I tried the code you suggested, but I stopped receiving any syslogs at all. Based on the syslogs I do have, the _grokparsefailure tag points at the syslog input: _grokparsefailure_sysloginput.
I was wondering, and maybe you can clarify this point for me: if grok is just a filtering pattern, is there a way to bypass it and accept all the syslogs without any filtering?
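For example, I was imagining something like a bare input with no filter block at all (just my guess at what "no filtering" would look like; the port is copied from the earlier snippet):

input {
  udp {
    port => 3014
  }
}
# no filter {} section at all, so no grok pattern would run on these events

Would that let everything through as-is?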
Another thing, could this error be related to the time zone issue?
Hi @RobBavey, I think I have some "good" news. I ran the command curl -X GET "localhost:9200/logstash_index/_search?pretty=true"
to see the data in my index, and the result is as follows:
{
  "_index" : "logstash_index",
  "_type" : "_doc",
  "_id" : "<Covered value>",
  "_score" : 1.0,
  "_source" : {
    "agentTimeZone" : <Covered value>,
    "severity_label" : "<Covered value>",
    "deviceVendor" : "<Covered value>",
    "facility" : 0,
    "severity" : 0,
    "categoryOutcome" : "<Covered value>",
    "agentReceiptTime" : "<Covered value>",
    "applicationProtocol" : "<Covered value>",
    "categoryObject" : "<Covered value>",
    "deviceEventCategory" : "<Covered value>",
    "host" : "<Covered value>",
    "agentHostName" : "<Covered value>",
    "requestClientApplication" : "<Covered value>",
    "categoryBehavior" : "<Covered value>",
    "geid" : "0",
    "@version" : "1",
    "categoryDeviceType" : "<Covered value>",
    "deviceEventClassId" : "<Covered value>",
    "cefVersion" : "<Covered value>",
    "name" : "<Covered value>",
    "deviceSeverity" : "<Covered value>",
    "bytesIn" : "<Covered value>",
    "sourceHostName" : "<Covered value>",
    "customerURI" : "<Covered value>",
    "requestUrl" : "<Covered value>",
    "deviceCustomNumber1Label" : "<Covered value>",
    "agentAddress" : "<Covered value>",
    "bytesOut" : "<Covered value>",
    "agentType" : "<Covered value>",
    "deviceAddress" : "<Covered value>",
    "agentId" : "<Covered value>",
    "sourceAddress" : "<Covered value>",
    "deviceCustomString2Label" : "<Covered value>",
    "destinationHostName" : "<Covered value>",
    "agentZoneURI" : "<Covered value>",
    "facility_label" : "<Covered value>",
    "deviceCustomNumber1" : "<Covered value>",
    "priority" : 0,
    "tags" : [
      "_grokparsefailure_sysloginput"
    ],
    "@timestamp" : "2021-04-30T10:20:56.376Z",
    "deviceProcessName" : "<Covered value>",
    "deviceZoneURI" : "<Covered value>",
    "categorySignificance" : "<Covered value>",
    "sourceZoneURI" : "<Covered value>",
    "deviceReceiptTime" : "<Covered value>",
    "requestMethod" : "<Covered value>",
    "deviceCustomString1Label" : "<Covered value>",
    "deviceProduct" : "<Covered value>",
    "baseEventCount" : "3",
    "deviceCustomString4Label" : "<Covered value>",
    "destinationTimeZone" : "<Covered value>",
    "agentMacAddress" : "<Covered value>",
    "type" : "<Covered value>",
    "eventId" : "<Covered value>",
    "deviceCustomString5Label" : "<Covered value>",
    "startTime" : "<Covered value>",
    "deviceCustomString6Label" : "<Covered value>",
    "categoryDeviceGroup" : "<Covered value>",
    "deviceHostName" : "1<Covered value>",
    "agentVersion" : "<Covered value>",
    "deviceCustomString3Label" : "<Covered value>",
    "deviceAction" : "<Covered value>",
    "deviceVersion" : ""
  }
},
My logstash.conf has a grok pattern based on the timestamp, and in the output pasted above, the first block doesn't have a timestamp, which I assume is why it gets tagged with _grokparsefailure. In fact, as you can see, the second block, which does contain a timestamp field, doesn't get tagged.
Could this be the reason?
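For reference, the grok part of my filter is roughly along these lines (a simplified sketch from memory; my real pattern and field names may differ):

filter {
  grok {
    # match only messages that start with a syslog-style timestamp;
    # events without that leading timestamp fail this match
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{GREEDYDATA:syslog_message}" }
  }
}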
@Hamza_El_Aouane What I see from the message you sent me is kinda what I expected - the CEF data is being parsed correctly, but the syslog data associated with it is not. What I was trying to get at is what format you are sending data to Logstash in - do you know what format your CEF exporter is sending?
My suspicion was that you are receiving the CEF data not in syslog RFC3164 format, which is what the syslog input expects. This can be solved by sending directly to the udp or tcp inputs and working from there. The configuration snippet that I posted earlier is taken from the ArcSight module, which receives data from ArcSight SmartConnectors, but we may want to try tweaking the delimiter in the cef codec definition - let's change our input definition to:
udp {
  codec => cef
  port => 3014
}
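If you still see nothing coming through after that change, one throwaway test (just a debugging sketch, not something to keep) would be to drop the cef codec entirely and dump whatever arrives to stdout, so we can see the raw format your exporter is sending:

input {
  udp {
    port => 3014
    codec => plain                  # no CEF/syslog parsing, keep the raw payload as a string
  }
}
output {
  stdout { codec => rubydebug }     # print each raw event to the Logstash console
}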
@RobBavey Thank you so much for your time, patience and help. You have no idea how much you are helping me, and I really appreciate it.
Unfortunately I don't know the format of the exporter, because it is a separate company that is sending us this data.
I did replace my logstash.conf, and now it looks like this:
input {
  udp {
    port => 3014
    codec => cef
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
}
But when I do this, I stop receiving any syslogs.
Is there any other test that I can try?
I did get this WARN in the terminal:
[2021-05-19T19:43:42,374][WARN ][logstash.outputs.elasticsearch][main][267db6ccda4b29b358e0e222b8a550200e4a90026f11363b75da5ddc4fe58dc0] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash_index", :routing=>nil}, #<LogStash::Event:0x779c73e9>], :response=>{"index"=>{"_index"=>"logstash_index", "_type"=>"_doc", "_id"=>"lsInhnkBYjljbMfFv9km", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [severity] of type [long] in document with id 'lsInhnkBYjljbMfFv9km'. Preview of field's value: 'Low'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"Low\""}}}}}
If this has only been happening since the last change to the plugin configuration, then we may be making progress: events are starting to get processed, but they are clashing with previously sent entries.
I don't know if you made your index mapping using dynamic mapping or explicit mapping, but from the error message, it looks like your index mapping is expecting the value of severity to be a long, and since the last change(?), severity is now a string. My guess is that you are using dynamic mapping, and from the data snippet you sent me, the previous plugin settings were putting severity: 0 in there. Can you try ingesting into a clean index?
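If you would rather not rely on dynamic mapping, something along these lines would create a fresh index with an explicit mapping where severity is always stored as a string (the index name here is just an example):

PUT /logstash_index_v2
{
  "mappings": {
    "properties": {
      "severity": { "type": "keyword" }
    }
  }
}

Then point the index setting of your elasticsearch output at the new index name.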
Mapping | Elasticsearch Guide [8.11] | Elastic has more information about index mappings in Elasticsearch.