Logs getting merged/clubbed together in some cases

Hello Dear ELKs,

I'm using Logstash 7.10 to forward logs to QRadar and Azure Sentinel. I have noticed some irregularities with some log source types.

Log flow: heterogeneous logs -> file -> Logstash (file input) -> QRadar on TCP 514 + Azure Sentinel using the Sentinel output plugin

Deviations:

  1. Fortigate logs merged with Check Point logs
  2. Timestamp missing in some logs
  3. Some logs truncated

Ask: is there any way to fine-tune the Logstash config to take care of the above issues?

Request for help!! Thank you in advance.

Are you using the pipelines.yml to configure multiple pipelines? What does your pipelines.yml look like?

If you didn't configure Logstash to use multiple pipelines with pipelines.yml, then you have only one pipeline, and unless you have conditionals in this pipeline, the data from all inputs will pass through all filters and go to all outputs.
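For illustration, here is a minimal sketch of what conditional routing looks like in a single pipeline; the paths, tags, and destinations below are hypothetical, not taken from your setup:

input {
  file {
    path => "/var/log/fortigate.log"    # hypothetical source
    tags => ["fortigate"]
  }
  file {
    path => "/var/log/checkpoint.log"   # hypothetical source
    tags => ["checkpoint"]
  }
}

output {
  # Without conditionals like these, every event would go to every output
  if "fortigate" in [tags] {
    tcp { host => "10.0.0.1" port => 514 }   # hypothetical destination
  } else if "checkpoint" in [tags] {
    tcp { host => "10.0.0.2" port => 514 }   # hypothetical destination
  }
}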

Also, share some evidence of these issues: share the logs, the output you are getting, and the expected output.

Thanks @leandrojmp for your quick response.

Yes, I have pipelines.yml:

Problematic logs:

[screenshots of the merged/truncated log lines, not reproduced here]
You need to share your pipeline configurations; it is impossible to know what your Logstash is doing without them.

Also, avoid sharing plain text as screenshots, as it is not possible to copy it to try to replicate your pipelines; share them as text using the preformatted text option, the </> button.

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

#- pipeline.id: main
#  path.config: "/etc/logstash/conf.d/*.conf"

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/main.conf"

- pipeline.id: sentinel
  path.config: "/etc/logstash/conf.d/sentinel.conf"

#- pipeline.id: sentinel-win-fortigate
#  path.config: "/etc/logstash/conf.d/sentinel-win-fortigate.conf"

You need to share your configuration: the content of the files main.conf and sentinel.conf.

cat /etc/logstash/conf.d/main.conf

input {
  file {
    path => "/logpath/mainlog.log"
    start_position => "beginning"
    sincedb_path => "/etc/logstash/sincedb/null"
  }
}

output {
  # Forward the raw line to QRadar over TCP
  tcp {
    host => "10.2.5.1"
    port => 9600
    codec => line { format => "%{message}" }
  }

  # Fan the same events out to the sentinel pipeline (pipeline-to-pipeline)
  pipeline { send_to => "sentinel" }
}


cat /etc/logstash/conf.d/sentinel.conf

input {
  pipeline { address => "sentinel" }
}

filter {
  if [message] =~ "10.1.2.3" and [message] =~ "[localhost] sudo: pam_unix" { drop { } }
  if [message] =~ "10.1.2.3" and [message] =~ "kernel: " { drop { } }
  if [message] =~ "diskUuid" { drop { } } # filter from Logstash 3/4
}

output {

  if [message] =~ "zpa-lss" {
    microsoft-sentinel-logstash-output-plugin {
      client_app_Id => "abcb"
      client_app_secret => "*********"
      tenant_id => "**************************"
      data_collection_endpoint => "https://abc"
      dcr_immutable_id => "*******************"
      dcr_stream_name => "Custom-ZPA_CL"
      #create_sample_file => true
      #sample_file_path => "/tmp/logstash_samplefile"
    }
  }

  else if [message] =~ "zscaler-nss" {
    microsoft-sentinel-logstash-output-plugin {
      client_app_Id => "abc"
      client_app_secret => "****************"
      tenant_id => "***"
      data_collection_endpoint => "https://abc"
      dcr_immutable_id => "****"
      dcr_stream_name => "Custom-ZScalar_NSSStream"
      #create_sample_file => true
      #sample_file_path => "/tmp/logstash_samplefile"
    }
  }

  else if [message] =~ "Check Point" {
    microsoft-sentinel-logstash-output-plugin {
      client_app_Id => "abc"
      client_app_secret => "**************"
      tenant_id => "*8"
      data_collection_endpoint => "https://abc"
      dcr_immutable_id => "****"
      dcr_stream_name => "Custom-CheckPointStream"
      #create_sample_file => true
      #sample_file_path => "/tmp/logstash_samplefile"
    }
  }

  else if ([message] =~ "devname=" and [message] =~ "devid=" and [message] =~ "date=" and [message] =~ "time=") {
    microsoft-sentinel-logstash-output-plugin {
      client_app_Id => "bbc"
      client_app_secret => "**************"
      tenant_id => "8"
      data_collection_endpoint => "https://abc"
      dcr_immutable_id => "****"
      dcr_stream_name => "Custom-FortigateStream"
      #create_sample_file => true
      #sample_file_path => "/tmp/logstaplefile"
    }
  }

  else {
    microsoft-sentinel-logstash-output-plugin {
      client_app_Id => ""
      client_app_secret => "***"
      tenant_id => "****88"
      data_collection_endpoint => "https://abc"
      dcr_immutable_id => "****"
      dcr_stream_name => "Custom-SyslogStream"
      #create_sample_file => true
      #sample_file_path => "/tmp/logstash_samplefile"
    }
  }
}

Well, I'm not sure what your issue is; you have only one source of logs, and your Check Point and Fortigate logs are coming from the same file.

If logs are merged in Logstash, it means they are merged in the source file; if something is missing from the logs in Logstash, it is missing in the source file as well. You need to check how you are creating this source file.
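One way to verify this is to temporarily dump the raw events with a debug output, as in this sketch (the stdout output is only for troubleshooting, not part of your normal pipeline); the file input emits exactly one event per newline-terminated line, so what you see here is what was actually read from the file:

output {
  # Prints each event exactly as the file input produced it; if two records
  # appear inside one event here, they share one physical line in the file.
  stdout { codec => rubydebug }
}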

Hello @leandrojmp,

Thanks for checking. I have validated the source file; the logs look OK to me. I suspect that, given the huge log volume, logs are getting merged when traffic is high. Do you suggest any performance tuning or any parameter for controlling the traffic?

It really depends on how this log is generated, but this is not an issue with Logstash; it is an issue with how you are creating this log. Logstash will consume the log as it is.
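For reference, per-pipeline throughput can be tuned in pipelines.yml with settings like the ones below (the values are illustrative); note that these control batching and parallelism, not event boundaries, so they will not un-merge lines:

- pipeline.id: sentinel
  path.config: "/etc/logstash/conf.d/sentinel.conf"
  pipeline.workers: 4        # number of parallel filter+output workers (illustrative)
  pipeline.batch.size: 250   # events collected per worker before filtering (illustrative)
  pipeline.batch.delay: 50   # ms to wait for a batch to fill (illustrative)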

I shall check the input file and get back.
Thanks Much!!!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.