Pipeline config error

Maybe I have been staring at this too long and need another set of eyes to point out something silly.

Using Logstash 7.3.1, I have just started using pipelines.

I have two pipelines defined but only one running, and I get an error in the debug logs:
[logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/etc/logstash/conf.d/pipeline.conf/Cerberus.conf"]}

I have run a config test from the command line on this specific file and it passes.
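For reference, the test was along these lines (paths as in my install; --config.test_and_exit just parses the config and exits, and --path.settings points Logstash at /etc/logstash so it picks up my settings):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/pipeline.conf/Cerberus.conf --config.test_and_exit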

Here is the Cerberus.conf file:
#Input from CORP-DC03 using redis on TS-REDIS01
input {
  redis {
    host => "10.100.100.37"
    data_type => "list"
    codec => json
    key => "CerberusLog"
  }
}
#Filter section
filter {
  grok {
    break_on_match => true
    patterns_dir => "./patterns"
    # Match user messages from the Cerberus
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] %{GREEDYDATA:ftpmessage} request %{DATA:Action} from %{IPV4:c_ip}" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] %{GREEDYDATA:ftpmessage} at %{IPV4:s_ip}" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] %{GREEDYDATA:ftpmessage} from %{IPV4:c_ip}" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] Kex:%{GREEDYDATA:ftpmessage}" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] %{DATA} '%{USER:user}' %{GREEDYDATA:ftpmessage}" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] [%{USER:user}] Successfully stored file at %{QS:File} (%{NUMBER:rcvd_bytes:int} B received)" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] [%{USER:user}] Successfully sent file %{QS:File} (%{NUMBER:sent_bytes:int} B sent)" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] [%{USER:user}] %{GREEDYDATA:ftpmessage}" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] SSL %{GREEDYDATA:ftpmessage}" ]
    match => [ "message", "%{TIMESTAMP_ISO8601:datestamp},%{DATA:facility},%{IPV4:syslogsvr},%{DATA:messagetype} %{GREEDYDATA} [%{INT:sessionid}] %{GREEDYDATA:ftpmessage}" ]
  }
  #Add geoIP info
  geoip {
    source => "c_ip"
  }
  date {
    match => [ "datestamp", "YYYY-MM-dd HH:mm:ss" ]
    #timezone => "UTC"
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
  }
}
#output section
output {
  elasticsearch {
    hosts => "10.100.100.34"
    index => "logstash-ftplog-%{+YYYY.MM.dd}"
  }
}

And here is the pipelines.yml file:
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
  pipeline.id: cerberus
  path.config: "/etc/logstash/conf.d/Cerberus/Cerberus.conf"
  pipeline.id: Connect_iis
  path.config: "/etc/logstash/conf.d/pipeline.conf/Connect_iis.conf"

That's not an error. Your Connect_iis pipeline uses a file under /etc/logstash/conf.d/pipeline.conf/. The message is just telling you that there is another file in that directory that it is ignoring.
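path.config is treated as a glob, so when it points at one specific file, every other file sitting in the same directory gets reported in that DEBUG message. Roughly, with a layout like this (hypothetical listing):

/etc/logstash/conf.d/pipeline.conf/Cerberus.conf      <- does not match the glob, so it is skipped
/etc/logstash/conf.d/pipeline.conf/Connect_iis.conf   <- matches, so it is read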

Interesting, but there is only one pipeline running and I expect to have two. What am I missing?

Is there something in the config I have missed that is needed to run more than one pipeline?

What makes you think there is only one pipeline running? Are you hitting this issue?
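One quick way to check is the node info API (on port 9600 by default), which lists every pipeline the node is actually running:

curl -s 'http://localhost:9600/_node/pipelines?pretty'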

The logs show one pipeline running and none stopped, and the data that should flow through the Cerberus pipeline is not being pulled from Redis.

What do you get in the logstash logs when you start up?

Here you go. I only see one pipeline (the last one listed in pipelines.yml) being loaded.

I think I should have three: main (the default pipeline), Cerberus (the one with data waiting in Redis), and Connect_iis (the one that works and is feeding data).

Thanks

[2019-08-30T13:22:51,790][DEBUG][logstash.runner          ] metric.collect: true
[2019-08-30T13:22:51,791][DEBUG][logstash.runner          ] pipeline.id: "main"
[2019-08-30T13:22:51,792][DEBUG][logstash.runner          ] pipeline.system: false
[2019-08-30T13:22:51,792][DEBUG][logstash.runner          ] pipeline.workers: 2
[2019-08-30T13:22:51,793][DEBUG][logstash.runner          ] pipeline.batch.size: 125
[2019-08-30T13:22:51,794][DEBUG][logstash.runner          ] pipeline.batch.delay: 50
[2019-08-30T13:22:51,794][DEBUG][logstash.runner          ] pipeline.unsafe_shutdown: false
[2019-08-30T13:22:51,795][DEBUG][logstash.runner          ] pipeline.java_execution: true
[2019-08-30T13:22:51,796][DEBUG][logstash.runner          ] pipeline.reloadable: true
[2019-08-30T13:22:51,796][DEBUG][logstash.runner          ] pipeline.plugin_classloaders: false
  ~~~~~ (too many lines to post)
[2019-08-30T13:22:51,818][DEBUG][logstash.runner          ] *path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue" (default: "/usr/share/logstash/data/dead_letter_queue")
[2019-08-30T13:22:51,819][DEBUG][logstash.runner          ] *path.settings: "/etc/logstash" (default: "/usr/share/logstash/config")
[2019-08-30T13:22:51,819][DEBUG][logstash.runner          ] *path.logs: "/var/log/logstash" (default: "/usr/share/logstash/logs")
[2019-08-30T13:22:51,820][DEBUG][logstash.runner          ] xpack.management.enabled: false
[2019-08-30T13:22:51,821][DEBUG][logstash.runner          ] xpack.management.logstash.poll_interval: 5000000000
[2019-08-30T13:22:51,821][DEBUG][logstash.runner          ] xpack.management.pipeline.id: ["main"]
[2019-08-30T13:22:51,822][DEBUG][logstash.runner          ] xpack.management.elasticsearch.username: "logstash_system"
[2019-08-30T13:22:51,823][DEBUG][logstash.runner          ] xpack.management.elasticsearch.hosts: ["https://localhost:9200"]
[2019-08-30T13:22:51,823][DEBUG][logstash.runner          ] xpack.management.elasticsearch.ssl.verification_mode: "certificate"
[2019-08-30T13:22:51,824][DEBUG][logstash.runner          ] xpack.management.elasticsearch.sniffing: false
[2019-08-30T13:22:51,825][DEBUG][logstash.runner          ] xpack.monitoring.enabled: false
[2019-08-30T13:22:51,826][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[2019-08-30T13:22:51,826][DEBUG][logstash.runner          ] xpack.monitoring.collection.interval: 10000000000
[2019-08-30T13:22:51,827][DEBUG][logstash.runner          ] xpack.monitoring.collection.timeout_interval: 600000000000
[2019-08-30T13:22:51,828][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.username: "logstash_system"
[2019-08-30T13:22:51,828][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.ssl.verification_mode: "certificate"
[2019-08-30T13:22:51,829][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.sniffing: false
[2019-08-30T13:22:51,829][DEBUG][logstash.runner          ] xpack.monitoring.collection.pipeline.details.enabled: true
[2019-08-30T13:22:51,830][DEBUG][logstash.runner          ] xpack.monitoring.collection.config.enabled: true
[2019-08-30T13:22:51,830][DEBUG][logstash.runner          ] node.uuid: ""
[2019-08-30T13:22:51,831][DEBUG][logstash.runner          ] --------------- Logstash Settings -------------------
[2019-08-30T13:22:51,874][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2019-08-30T13:22:51,923][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.1"}
[2019-08-30T13:22:51,991][DEBUG][logstash.agent           ] Setting up metric collection
[2019-08-30T13:22:52,047][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-08-30T13:22:52,629][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-08-30T13:22:53,016][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-30T13:22:53,100][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-30T13:22:53,117][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-08-30T13:22:53,126][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-08-30T13:22:53,326][DEBUG][logstash.agent           ] Starting agent
[2019-08-30T13:22:53,745][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2019-08-30T13:22:54,077][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/etc/logstash/conf.d/pipeline/pipe_cerberus.conf"]}
[2019-08-30T13:22:54,079][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/pipeline/pipe_connect_iis.conf"}
[2019-08-30T13:22:54,205][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2019-08-30T13:22:54,353][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:Connect_iis}

OK, it may be an issue with my pipelines.yml; the startup log only shows one converge action (:actions_count=>1), so the agent only ever sees a single pipeline to create.

It only runs the last pipeline in the list; if I swap the order, then the other pipeline runs.

Thoughts? Have I missed something silly? (Note: I swapped Cerberus and Connect_iis around, and now Cerberus is the only pipeline that runs.)

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
  pipeline.id: Connect_iis
  path.config: "/etc/logstash/conf.d/pipeline/pipe_connect_iis.conf"
  pipeline.id: Cerberus
  path.config: "/etc/logstash/conf.d/pipeline/pipe_cerberus.conf"

Yep, I was missing the "-" at the beginning of each pipeline definition in the pipelines.yml file.
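Without the leading dashes, the whole file parses as a single YAML list item with duplicate keys, and the parser keeps the last pipeline.id/path.config pair it sees, which is exactly the "only the last pipeline runs" behaviour above. For anyone landing here later, the corrected file (same paths as in my last post) looks like this:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: Connect_iis
  path.config: "/etc/logstash/conf.d/pipeline/pipe_connect_iis.conf"
- pipeline.id: Cerberus
  path.config: "/etc/logstash/conf.d/pipeline/pipe_cerberus.conf"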

I knew it was probably something stupid

Thanks so much @Badger for your help and patience
