Logstash 6.0.0 multiple pipelines not working

I am trying to configure multiple pipelines on Logstash 6.0.0 and cannot get it to work. Here are my configurations. What am I missing? On top of that, all of the Beats data being sent is getting logged to syslog and isn't even making it to Elasticsearch. It works if I keep it the old 5.x way of putting everything in one config file.

-- current settings in logstash.yml
cat logstash.yml | grep -v "#"
node.name: d-gp2-es46-5.*****
path.data: /data/logstash
config.string: "input { beats { port => 5044 } }"
http.host: "d-gp2-es46-5."
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://d-gp2-es46-8.*****:9200
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: changeme
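
One thing I'm now wondering about (this is my reading of the 6.x docs, not something I've confirmed): when logstash.yml contains config.string or path.config, Logstash is supposed to ignore pipelines.yml entirely, the same way it does when you pass -e or -f on the command line. If that's true, my beats input in config.string may be forcing everything into the main pipeline. A sketch of logstash.yml without it (the beats input would then have to move into one of the pipeline config files):

-- hypothetical logstash.yml with config.string removed
node.name: d-gp2-es46-5.*****
path.data: /data/logstash
http.host: "d-gp2-es46-5."
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://d-gp2-es46-8.*****:9200
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: changeme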

-- pipelines.yml file
cat pipelines/pipelines.yml

- pipeline.id: postgresql_pipeline
  path.config: "/etc/logstash/conf.d/postgresql_pipeline.conf"
- pipeline.id: pgpool_pipeline
  path.config: "/etc/logstash/conf.d/pgpool_pipeline.conf"
- pipeline.id: mysql_pipeline
  path.config: "/etc/logstash/conf.d/mysql_pipeline.conf"
- pipeline.id: cassandra_pipeline
  path.config: "/etc/logstash/conf.d/cassandra_pipeline.conf"
- pipeline.id: syslog_pipeline
  path.config: "/etc/logstash/conf.d/syslog_pipeline.conf"

-- a sample of one of the config files; the others are similar, just with different grok patterns and different indexes in the outputs
cat pipelines/syslog_pipeline.conf
filter {

  if "sys_log" in [tags] {
    grok {
      match => { "message" => ["%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(\[%{NUMBER:pid}\])?: %{GREEDYDATA:statement}"] }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      locale => "en"
      remove_field => "timestamp"
    }

    if "_grokparsefailure" not in [tags] {
      mutate {
        remove_field => [ "message", "@version" ]
      }
    }

  }

}

output {

  if "sys_log" in [tags] {
    if "_grokparsefailure" in [tags] {
      file {
        path => "/var/log/logstash/_grokparsefailure/grokparsefailure_sys_log.log"
      }
    }

    elasticsearch {
      hosts => "d-gp2-es46-8.*****:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-sys-log-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }

  }

}
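
Also worth calling out: none of these pipeline files contain an input section, because the single beats input lives in config.string in logstash.yml. If each pipeline is supposed to read on its own, something like this would have to go at the top of each file (the port number here is made up for illustration; each pipeline would need a distinct one):

input {
  beats {
    port => 5045   # hypothetical port; one per pipeline
  }
}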

I set up an entirely new environment with 6.0.1 and I'm still having the same issues with multiple pipelines. Any help would be appreciated. All of my Filebeat data is being sent to /var/log/syslog on the Logstash server:

Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: {
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "@timestamp" => 2017-12-14T19:31:29.589Z,
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "offset" => 508084,
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "@version" => "1",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "beat" => {
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "name" => "d-gp2-dbp7-1",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "hostname" => "d-gp2-dbp7-1",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "version" => "6.0.1"
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: },
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "host" => "d-gp2-dbp7-1",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "prospector" => {
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "type" => "log"
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: },
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "source" => "/var/log/postgresql/pg_log/postgresql-2017-12-05_000000.log",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "message" => "2017-12-05 00:49:33 UTC:10.124.165.50(60024):dvrcloud@*****:[8914-5]: ERROR: column rsstuff_rsdvr.current_read_load does not exist at character 184",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: "tags" => [
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: [0] "pg_logs",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: [1] "d-gp2-dbp7-1",
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: [2] "beats_input_codec_plain_applied"
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: ]
Dec 14 19:31:31 p-gp2-es46-4 logstash[19082]: }
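
The format of those lines looks like the rubydebug codec to me. My guess (not confirmed) is that because the config.string pipeline defines only an input, Logstash falls back to its default output, and since it runs under systemd, its stdout ends up in /var/log/syslog. The implied output would be equivalent to:

output {
  stdout { codec => rubydebug }   # assumed default when no output is configured
}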

It works when I use a different port for each log, but I need all Filebeat logs to use one port, 5044. I don't know of a way to have Filebeat send different logs to different ports. If there is a way, please let me know.

I think I'm just now realizing that multiple pipelines only work with different Beats on different ports. There doesn't seem to be any way to do multiple pipelines within the same Beat, like Filebeat: it all still has to stay in one configuration file. So the same "conditional hell" the docs refer to still exists within a single Beat, as in the sketch below.
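
For completeness, this is roughly the single-file shape I'm stuck with on one port (a minimal sketch; the tag names follow my setup above, but the filter bodies and index names are placeholders):

input {
  beats {
    port => 5044
  }
}

filter {
  if "sys_log" in [tags] {
    # syslog grok/date filters from above go here
  } else if "pg_logs" in [tags] {
    # postgresql filters go here
  }
  # ... one branch per log type
}

output {
  if "sys_log" in [tags] {
    elasticsearch {
      hosts => "d-gp2-es46-8.*****:9200"
      index => "%{[@metadata][beat]}-sys-log-%{+YYYY.MM.dd}"
    }
  } else if "pg_logs" in [tags] {
    elasticsearch {
      hosts => "d-gp2-es46-8.*****:9200"
      index => "%{[@metadata][beat]}-pg-log-%{+YYYY.MM.dd}"
    }
  }
}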
