Filebeat/Kafka vs File Input - Codec Used

I am attempting to collect messages from Windows servers using Filebeat, send them to Kafka, ingest and parse them in Logstash, and then forward them to a third-party destination (Log Insight) using the Logstash output plugin for Log Insight (https://github.com/alanjcastonguay/logstash-output-loginsight).

I have created grok filters to parse the application logs I care about, and I have a proof of concept working by doing the following:

  1. Ingesting logs into Logstash using the file input plugin
  2. Using grok filters to parse the logs and a date filter to parse timestamps
  3. Using the logstash-output-loginsight plugin to send the parsed logs to Log Insight

In Log Insight, I see properly parsed logs with appropriate fields and metadata available to me.
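
The date filter isn't included in the config below, so for completeness it looks roughly like this ("log_timestamp" and the format are placeholders, not my real field and pattern):

filter {
  # parse the timestamp captured by grok into @timestamp
  date {
    match => [ "log_timestamp", "ISO8601" ]
  }
}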

When I attempt the same process but use the Kafka input in Logstash to read in the Filebeat traffic, I cannot see logs passing through the pipeline, and no traffic arrives at the final third-party destination. I believe this is an issue with the codecs used in the two processes, but I am not 100% sure. In logstash-plain.log I see non-2XX HTTP output failures. I have tried researching those errors, but I don't have enough information to go on beyond a guess that Log Insight cannot recognize the incoming events.

Filebeat's Kafka output defaults to JSON encoding, while the Logstash file input defaults to the plain codec. Given my success reading log events in from a file (in Logstash), I am guessing I need to force Filebeat to publish events as plain text so the rest of the pipeline behaves the same way.
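
From what I can find in the Filebeat output docs there is no simple codec: plain option; the closest thing seems to be codec.format with a string template, which (if I understand it correctly) would publish just the raw log line instead of a JSON document:

output.kafka:
  hosts: ["KAFKA IP"]
  topic: 'filebeat'
  # publish only the original log line, not the full JSON event
  codec.format:
    string: '%{[message]}'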

Relevant sections of code:

Filebeat.yml -

output.kafka:
  hosts: ["KAFKA IP"]
  topic: 'filebeat'
  codec: [plain]   # <-- unsure if this is the correct format; I cannot find documentation for it (see the codec.format sketch above)

Logstash Conf -

input {
  kafka {
    bootstrap_servers => "kafka_ip:9092"
    client_id => "filebeat"
    id => "filebeat"
    topics => [ "filebeat" ]
    #consumer_threads => 4
    codec => "json"
  }

  file {
    path => "/data/Logs/."
    start_position => "beginning"
    sincedb_path => "/data/tmp/sincedb"
  }
}

filter {
  if "filebeat" in [tags] {
    grok {
      match => { "message" => [ "regex1", "regex2", "regex3" ] }
    }
  }
}
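
One thing I still need to double-check: the conditional above only fires if events actually carry a "filebeat" tag, and I don't see anywhere in my configs that adds one. If that is part of the problem, I believe Filebeat can tag events at the source:

# filebeat.yml - tag every event so the Logstash conditional matches
tags: ["filebeat"]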
output {
  loginsight {
    host => "LOG INSIGHT IP"
    port => 9000
    proto => "http"
  }
}
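
To see whether events are reaching the pipeline at all, my plan is to temporarily add a stdout output next to loginsight, which should dump every decoded event to the console:

# added inside the existing output block, temporarily
stdout { codec => rubydebug }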

Note: the file input path works correctly in my actual config; forum formatting is preventing the correct path from displaying here.

What am I missing between the kafka and file inputs that would let the filter and output work the same for both? Is there something I can change in Logstash, or do I need to change it at the Filebeat level with the codec setting?
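
If the answer is to switch Filebeat to plain output, I assume the kafka input codec would also have to change to match, something like:

kafka {
  bootstrap_servers => "kafka_ip:9092"
  topics => [ "filebeat" ]
  codec => "plain"   # match Filebeat's plain-text encoding
}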
