Is it possible to shorten this Logstash filter cascade?



I am using ELK GA 6.3.0. I read from Kafka, and I have the filter cascade below:

filter {

	dissect {
		mapping => {
			"message" => "<%{timestamp}> <%{f2}> <%{f3}> <%{f4}> <%{f5}> <%{f6}> %{rest}"
		}
	}

	grok {
		match => {
			"rest" => "<%{NOTSPACE:f7}>\n <%{NOTSPACE:f8}: %{GREEDYDATA:f9}>"
		}
	}

	date {
		# yyyy (calendar year), not YYYY (Joda week-year, which can be wrong around year boundaries)
		match => ["timestamp", "MMM dd, yyyy hh:mm:ss:SSS aa"]
		timezone => "UTC"
		target => "@timestamp"
	}

	mutate {
		remove_field => ["rest", "f4", "f5", "offset", "host", "@version", "input", "beat", "prospector", "fields", "timestamp"]
	}
}

What I am doing is removing some unwanted log fields, including the metadata fields that Filebeat adds. Is it possible to do the same with less CPU impact, or can I do something in Filebeat itself to achieve the same?
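
For the Filebeat side, I am wondering whether a `drop_fields` processor in `filebeat.yml` would let me remove the Filebeat metadata before it ever reaches Logstash (a sketch; `@version` is added by Logstash itself, so that one would still have to stay in `mutate`):

```
processors:
  - drop_fields:
      fields: ["offset", "host", "input", "beat", "prospector", "fields"]
```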

