Possible to use Filebeat > Logstash > Elasticsearch in this manner?

Is it possible to use Filebeat > Logstash > Elasticsearch in the following way?

Logstash is listening on three ports: 5045, 5046, and 5047.
I have a Filebeat config that picks up logs from three different locations and redirects each log to its own Logstash port for parsing.
Each Logstash "port" has its own grok expression tied to its own template, etc., feeding into Elasticsearch.
Is this possible?

We've been experimenting with this today, and it appears Filebeat only sends to the port of the last output.logstash entry listed in filebeat.yml.

filebeat.yml excerpt

- type: log
  enabled: true
  paths:
    - /var/logs/syslog/logs/*json
  fields: {log_type: syslog}

# ---- Logstash output -----
output.logstash:
  enabled: true
  hosts: ["localhost:5045"]

- type: log
  enabled: true
  paths:
    - /var/logs/apache/logs/*json
  fields: {log_type: apache}

# ---- Logstash output -----
output.logstash:
  enabled: true
  hosts: ["localhost:5046"]

- type: log
  enabled: true
  paths:
    - /var/logs/sec/logs/*json
  fields: {log_type: security}

# ---- Logstash output -----
output.logstash:
  enabled: true
  hosts: ["localhost:5047"]
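That last-entry-wins behaviour matches Filebeat's design: only one output can be defined per Filebeat instance, so only one of the three output.logstash sections above ever takes effect. For reference, a variant Filebeat will actually accept sends everything to a single port and relies on the log_type fields to tell the streams apart. A minimal sketch (the port 5044 is an assumption):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/logs/syslog/logs/*json
    fields: {log_type: syslog}
  - type: log
    enabled: true
    paths:
      - /var/logs/apache/logs/*json
    fields: {log_type: apache}
  - type: log
    enabled: true
    paths:
      - /var/logs/sec/logs/*json
    fields: {log_type: security}

# the single output Filebeat allows; 5044 is an assumed port
output.logstash:
  enabled: true
  hosts: ["localhost:5044"]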

logs1.yml from /etc/logstash/conf.d

input {
  beats {
    port => 5045
  }
}
filter {
  grok {
    match => { "message" => "some grok pattern" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
    # note: a fixed document_id makes every event overwrite the same document;
    # leave it out so Elasticsearch assigns unique ids
    # document_id => "syslog"
    manage_template => true
  }
  stdout {
    codec => rubydebug
  }
}

logs2.yml from /etc/logstash/conf.d

input {
  beats {
    port => 5046
  }
}
filter {
  grok {
    match => { "message" => "some grok pattern" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache-%{+YYYY.MM.dd}"
    # fixed document_id: see note in logs1.yml
    # document_id => "apache"
    manage_template => true
  }
  stdout {
    codec => rubydebug
  }
}

logs3.yml from /etc/logstash/conf.d

input {
  beats {
    port => 5047
  }
}
filter {
  grok {
    match => { "message" => "some grok pattern" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "security-%{+YYYY.MM.dd}"
    # fixed document_id: see note in logs1.yml
    # document_id => "security"
    manage_template => true
  }
  stdout {
    codec => rubydebug
  }
}
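One caveat worth noting with this layout: a single Logstash pipeline concatenates all the config files it loads, so if these three files run in one pipeline, events arriving on any of the three ports flow through all three filters and all three elasticsearch outputs. A hedged sketch of guarding each section with a per-input tag (the tag names here are made up):

input {
  beats {
    port => 5045
    # hypothetical tag marking events from this listener
    tags => ["syslog"]
  }
}
filter {
  if "syslog" in [tags] {
    grok {
      match => { "message" => "some grok pattern" }
    }
  }
}
output {
  if "syslog" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}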

I am not sure it is possible for Filebeat to send output to multiple destinations; as far as I know, it only supports a single output at a time.
Also, why not have conditionals on the Logstash side to apply patterns based on the log_type field?
You can use an if condition in the filter section of Logstash, for example:

input {
  beats {
    port => 5044
  }
}

filter {
  if "django" not in [path] {
    # pull the task log folder name out of the file path
    dissect {
      mapping => {
        "path" => "/%{}/%{}/%{}/%{}/%{}/%{task_log_folder}/%{}"
      }
    }
    # the folder name is underscore-separated: jobID_taskID_taskVersion_..._fileName
    mutate {
      split => { "task_log_folder" => "_" }
      add_field => { "jobID" => "%{[task_log_folder][0]}" }
      add_field => { "taskID" => "%{[task_log_folder][1]}" }
      add_field => { "taskVersion" => "%{[task_log_folder][2]}" }
      add_field => { "fileName" => "%{[task_log_folder][-1]}" }
    }
  }
}
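Applied to the original setup, the same idea can route on the log_type field set in filebeat.yml, with a single port and a single pipeline doing all three jobs. A rough sketch, with placeholder grok patterns as above (note that Filebeat puts custom fields under [fields] unless fields_under_root is set):

input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][log_type] == "syslog" {
    grok { match => { "message" => "some syslog grok pattern" } }
  } else if [fields][log_type] == "apache" {
    grok { match => { "message" => "some apache grok pattern" } }
  } else if [fields][log_type] == "security" {
    grok { match => { "message" => "some security grok pattern" } }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # derive the index name from the same field
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
}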

I had thought about that, but my thinking is along the lines of separation.
Eventually I'll have multiple Filebeat agents sending to a single Logstash endpoint on separate ports, so that in itself will work. But if I have multiple Filebeat inputs on a single box that need to be sent in, why can I not send them to different ports?

My Logstash endpoint has three separate ports running and can handle the multiple inbound connections, so it's Filebeat that can't handle multiple streams, even though it has provisions and the capability to send to different ip:port destinations. Is it a matter of worker streams, maybe? Can I not configure separate pipelines to send their data to different ports on the same host?
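For what it's worth, the separation described here is what Logstash's multiple-pipelines feature provides: pipelines.yml can give each port its own isolated pipeline (which also stops the conf.d files from being merged into one, as noted earlier). Filebeat itself still allows only one output per instance, so truly separate streams from one box would mean running separate Filebeat instances. A sketch, assuming the three config files above:

# /etc/logstash/pipelines.yml
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/logs1.yml"
- pipeline.id: apache
  path.config: "/etc/logstash/conf.d/logs2.yml"
- pipeline.id: security
  path.config: "/etc/logstash/conf.d/logs3.yml"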
