Hello everyone!
I plan to use Logstash to transform syslog messages into log files.
To achieve this, I decided to use a multi-pipeline configuration:
pipelines.yml
- pipeline.id: ans_file_syslog
  path.config: "C:/Program Files/logstash/etc/pipelines/ans_file_syslog.conf"
- pipeline.id: asa_file_syslog
  path.config: "C:/Program Files/logstash/etc/pipelines/asa_file_syslog.conf"
- pipeline.id: asr_file_syslog
  path.config: "C:/Program Files/logstash/etc/pipelines/asr_file_syslog.conf"
For example, one of the *.conf files looks like this:
asa_file_syslog.conf
input {
  udp {
    # listen for syslog messages on UDP port 1114
    port => 1114
  }
}

filter {
  dns {
    # reverse-resolve the IP in [host][ip] and replace it with the hostname
    action => "replace"
    hit_cache_size => 1024
    reverse => [ "[host][ip]" ]
  }
}

output {
  file {
    codec => line { format => "%{message}" }
    # one directory per resolved host, one file per hour
    path => "E:/logstash/asa/asa_file_syslog/%{[host][ip]}/%{+YYYY-MM-dd-HH}.log"
  }
}
Because I need the same DNS filter in each *.conf file, I have a couple of questions:
- Is it possible to use a single filter that is defined only once? If yes, how? I read about pipeline-to-pipeline communication but didn't grasp how I can return the data flow to the original pipeline (see the first sketch below).
- Does it make sense to use one shared filter instead of defining it in each file, apart from avoiding code duplication?
- How can I define a global value for hit_cache_size that will be used across all pipelines (see the second sketch below)? I understand that I can use a small value for each pipeline, but it seems a little bit excessive.
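To make the first question concrete, here is a minimal sketch of the pipeline-to-pipeline wiring I have in mind, based on the pipeline input/output plugins from the docs (plus a matching dns_resolver entry in pipelines.yml). The dns_resolver pipeline ID and address are names I made up for illustration:

asa_file_syslog.conf (output section only)
output {
  # hand the event over to the shared DNS pipeline
  pipeline { send_to => ["dns_resolver"] }
}

dns_resolver.conf
input {
  # receives events from all per-device pipelines
  pipeline { address => "dns_resolver" }
}

filter {
  dns {
    action => "replace"
    hit_cache_size => 1024
    reverse => [ "[host][ip]" ]
  }
}

output {
  # This is the part I don't understand: events from every device arrive here,
  # so either the file outputs have to move into this pipeline and be routed
  # by a field or tag, or there must be a way to send events back to the
  # pipeline they came from.
  file {
    codec => line { format => "%{message}" }
    path => "E:/logstash/asa/asa_file_syslog/%{[host][ip]}/%{+YYYY-MM-dd-HH}.log"
  }
}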
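For the hit_cache_size question, the closest thing I found is Logstash's ${VAR} environment-variable substitution. Assuming I set a DNS_HIT_CACHE_SIZE variable (a name I invented) before starting Logstash, each pipeline could read the shared value like this:

filter {
  dns {
    action => "replace"
    # shared value from the environment, falling back to 1024 if it is unset
    hit_cache_size => "${DNS_HIT_CACHE_SIZE:1024}"
    reverse => [ "[host][ip]" ]
  }
}

As far as I understand, this only shares the setting; each pipeline would still keep its own cache instance, which is why I am asking whether there is a better way.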