Logstash multiple configuration

Hello everyone,
I'm trying to set up Logstash with multiple pipelines.
I get an error when running the system.

The error:

D:\logstash\bin>logstash --path.settings d:/logstash/config
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/D:/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to field java.io.Console.cs
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to D:/logstash/logs which is now configured via log4j2.properties
[2020-10-06T07:13:26,210][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.1"}
[2020-10-06T07:13:27,151][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2020-10-06T07:13:27,652][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-10-06T07:13:32,549][INFO ][logstash.runner ] Logstash shut down.

The pipelines.yml:

```
# List of pipelines to be loaded by Logstash
#
# This document must be a list of dictionaries/hashes, where the keys/values are pipeline settings.
# Default values for omitted settings are read from the logstash.yml file.
# When declaring multiple pipelines, each MUST have its own pipeline.id.
#
# Example of two pipelines:
#
# - pipeline.id: test
# pipeline.workers: 1
# pipeline.batch.size: 1
# config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
# - pipeline.id: another_test
# queue.type: persisted
# path.config: "/tmp/logstash/*.config"
#
# Available options:
#
# # name of the pipeline
- pipeline.id: logstash1
path.config: "d:/logstash/bin/p1/logstash1.conf"
pipeline.workers: 1
pipeline.batch.size: 5
- pipeline.id: logstash2
path.config: "d:/logstash/bin/p2/logstash2.conf"
pipeline.workers: 1
queue.type: persisted
pipeline.batch.size: 5
- pipeline.id: logstash3
path.config: "d:/logstash/bin/p3/logstash3.conf"
pipeline.workers: 1
queue.type: persisted
pipeline.batch.size: 5

#   # The configuration string to be used by this pipeline
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
#
#   # The path from where to read the configuration text
#   path.config: "c:/kibana/logstash-7.6.1/logstash-7.6.1/bin/"
#
#   # How many worker threads execute the Filters+Outputs stage of the pipeline
#  pipeline.workers: 1 (actually defaults to number of CPUs)
#
#   # How many events to retrieve from inputs before sending to filters+workers
#   pipeline.batch.size: 125
#
#   # How long to wait in milliseconds while polling for the next event
#   # before dispatching an undersized batch to filters+outputs
#   pipeline.batch.delay: 50
#
#   # Internal queuing model, "memory" for legacy in-memory based queuing and
#   # "persisted" for disk-based acked queueing. Defaults is memory
#   queue.type: memory
#
#   # If using queue.type: persisted, the page data files size. The queue data consists of
#   # append-only data files separated into pages. Default is 64mb
#   queue.page_capacity: 64mb
#
#   # If using queue.type: persisted, the maximum number of unread events in the queue.
#   # Default is 0 (unlimited)
#   queue.max_events: 0
#
#   # If using queue.type: persisted, the total capacity of the queue in number of bytes.
#   # Default is 1024mb or 1gb
#   queue.max_bytes: 1024mb
#
#   # If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.acks: 1024
#
#   # If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.writes: 1024
#
#   # If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
#   # Default is 1000, 0 for no periodic checkpoint.
#   queue.checkpoint.interval: 1000
#
#   # Enable Dead Letter Queueing for this pipeline.
#   dead_letter_queue.enable: false
#
#   If using dead_letter_queue.enable: true, the maximum size of dead letter queue for this pipeline. Entries
#   will be dropped if they would increase the size of the dead letter queue beyond this setting.
#   Default is 1024mb
#   dead_letter_queue.max_bytes: 1024mb
#
#   If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
#   Default is path.data/dead_letter_queue
#
#   path.dead_letter_queue:

```

The conf files are good (they work on another, parallel system).

Please format your code and errors using preformatted text </> or backticks (```), as they are really hard to read otherwise.

I got tangled up with that too.
Hope it's better now.

It is still not as well formatted as it could be.

Anyway, this line does say that the conf files could not be found.

Where are all your .conf files located? Did you point to them correctly in your pipelines.yml?

And set the log level to debug to see if any more information can be found:
--log.level debug --config.debug
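
For example, the full command from your bin directory would look something like this (a sketch reusing the --path.settings value from your first post):

```
logstash --path.settings d:/logstash/config --log.level debug --config.debug
```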

Also, since this is a Windows environment, please try using double backslashes (\\) instead of forward slashes (/) in your paths to see if that works.
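
For example, a single entry in pipelines.yml might look like this (a sketch based on your first pipeline; the path is your own, only the slashes change):

```
- pipeline.id: logstash1
  path.config: "d:\\logstash\\bin\\p1\\logstash1.conf"
```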

Hope this can help!

Where are all your .conf files located? Did you point to them correctly in your pipelines.yml?

yes

And set the log level to debug to see if any more information can be found:
--log.level debug --config.debug

attached:

D:\logstash\bin>logstash --log.level debug --config.debug
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/D:/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to field java.io.Console.cs
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to D:/logstash/logs which is now configured via log4j2.properties
[2020-10-06T12:05:48,791][DEBUG][logstash.modules.scaffold] Found module {:module_name=>"fb_apache", :directory=>"D:/logstash/modules/fb_apache/configuration"}
[2020-10-06T12:05:48,916][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x7db8560 @directory="D:/logstash/modules/fb_apache/configuration", @module_name="fb_apache", @kibana_version_parts=["6", "0", "0"]>}
[2020-10-06T12:05:48,916][DEBUG][logstash.modules.scaffold] Found module {:module_name=>"netflow", :directory=>"D:/logstash/modules/netflow/configuration"}
[2020-10-06T12:05:48,916][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"netflow", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0xfbf59af @directory="D:/logstash/modules/netflow/configuration", @module_name="netflow", @kibana_version_parts=["6", "0", "0"]>}
[2020-10-06T12:05:49,010][DEBUG][logstash.runner ] -------- Logstash Settings (* means modified) ---------
[2020-10-06T12:05:49,010][DEBUG][logstash.runner ] node.name: "Kibsrv1"
[2020-10-06T12:05:49,010][DEBUG][logstash.runner ] path.data: "D:/logstash/data"
[2020-10-06T12:05:49,010][DEBUG][logstash.runner ] modules.cli:
[2020-10-06T12:05:49,010][DEBUG][logstash.runner ] modules:
[2020-10-06T12:05:49,010][DEBUG][logstash.runner ] modules_list:
[2020-10-06T12:05:49,025][DEBUG][logstash.runner ] modules_variable_list:
[2020-10-06T12:05:49,025][DEBUG][logstash.runner ] modules_setup: false
[2020-10-06T12:05:49,025][DEBUG][logstash.runner ] config.test_and_exit: false
[2020-10-06T12:05:49,025][DEBUG][logstash.runner ] config.reload.automatic: false
[2020-10-06T12:05:49,025][DEBUG][logstash.runner ] config.reload.interval: 3000000000
[2020-10-06T12:05:49,025][DEBUG][logstash.runner ] config.support_escapes: false
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] config.field_reference.parser: "STRICT"
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] metric.collect: true
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.id: "main"
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.system: false
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.workers: 4
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.batch.size: 125
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.batch.delay: 50
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.unsafe_shutdown: false
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.java_execution: true
[2020-10-06T12:05:49,041][DEBUG][logstash.runner ] pipeline.reloadable: true
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] pipeline.plugin_classloaders: false
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] pipeline.separate_logs: false
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] path.plugins:
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] *config.debug: true (default: false)
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] *log.level: "debug" (default: "info")
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] version: false
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] help: false
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] log.format: "plain"
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] http.host: "127.0.0.1"
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] http.port: 9600..9700
[2020-10-06T12:05:49,057][DEBUG][logstash.runner ] http.environment: "production"
[2020-10-06T12:05:49,072][DEBUG][logstash.runner ] queue.type: "memory"
[2020-10-06T12:05:49,072][DEBUG][logstash.runner ] queue.drain: false
[2020-10-06T12:05:49,072][DEBUG][logstash.runner ] queue.page_capacity: 67108864
[2020-10-06T12:05:49,072][DEBUG][logstash.runner ] queue.max_bytes: 1073741824
[2020-10-06T12:05:49,088][DEBUG][logstash.runner ] queue.max_events: 0
[2020-10-06T12:05:49,088][DEBUG][logstash.runner ] queue.checkpoint.acks: 1024
[2020-10-06T12:05:49,088][DEBUG][logstash.runner ] queue.checkpoint.writes: 1024
[2020-10-06T12:05:49,088][DEBUG][logstash.runner ] queue.checkpoint.interval: 1000
[2020-10-06T12:05:49,088][DEBUG][logstash.runner ] queue.checkpoint.retry: false
[2020-10-06T12:05:49,103][DEBUG][logstash.runner ] dead_letter_queue.enable: false
[2020-10-06T12:05:49,103][DEBUG][logstash.runner ] dead_letter_queue.max_bytes: 1073741824
[2020-10-06T12:05:49,103][DEBUG][logstash.runner ] slowlog.threshold.warn: -1
[2020-10-06T12:05:49,103][DEBUG][logstash.runner ] slowlog.threshold.info: -1
[2020-10-06T12:05:49,103][DEBUG][logstash.runner ] slowlog.threshold.debug: -1
[2020-10-06T12:05:49,103][DEBUG][logstash.runner ] slowlog.threshold.trace: -1
[2020-10-06T12:05:49,103][DEBUG][logstash.runner ] keystore.classname: "org.logstash.secret.store.backend.JavaKeyStore"
[2020-10-06T12:05:49,119][DEBUG][logstash.runner ] keystore.file: "D:/logstash/config/logstash.keystore"
[2020-10-06T12:05:49,119][DEBUG][logstash.runner ] path.queue: "D:/logstash/data/queue"
[2020-10-06T12:05:49,119][DEBUG][logstash.runner ] path.dead_letter_queue: "D:/logstash/data/dead_letter_queue"
[2020-10-06T12:05:49,119][DEBUG][logstash.runner ] path.settings: "D:/logstash/config"
[2020-10-06T12:05:49,119][DEBUG][logstash.runner ] path.logs: "D:/logstash/logs"
[2020-10-06T12:05:49,119][DEBUG][logstash.runner ] xpack.management.enabled: false
[2020-10-06T12:05:49,119][DEBUG][logstash.runner ] xpack.management.logstash.poll_interval: 5000000000
[2020-10-06T12:05:49,135][DEBUG][logstash.runner ] xpack.management.pipeline.id: ["main"]
[2020-10-06T12:05:49,135][DEBUG][logstash.runner ] xpack.management.elasticsearch.username: "logstash_system"
[2020-10-06T12:05:49,135][DEBUG][logstash.runner ] xpack.management.elasticsearch.hosts: ["https://localhost:9200"]
[2020-10-06T12:05:49,135][DEBUG][logstash.runner ] xpack.management.elasticsearch.ssl.verification_mode: "certificate"
[2020-10-06T12:05:49,150][DEBUG][logstash.runner ] xpack.management.elasticsearch.sniffing: false
[2020-10-06T12:05:49,150][DEBUG][logstash.runner ] xpack.monitoring.enabled: false
[2020-10-06T12:05:49,150][DEBUG][logstash.runner ] xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[2020-10-06T12:05:49,150][DEBUG][logstash.runner ] xpack.monitoring.collection.interval: 10000000000
[2020-10-06T12:05:49,150][DEBUG][logstash.runner ] xpack.monitoring.collection.timeout_interval: 600000000000
[2020-10-06T12:05:49,150][DEBUG][logstash.runner ] xpack.monitoring.elasticsearch.username: "logstash_system"
[2020-10-06T12:05:49,150][DEBUG][logstash.runner ] xpack.monitoring.elasticsearch.ssl.verification_mode: "certificate"
[2020-10-06T12:05:49,166][DEBUG][logstash.runner ] xpack.monitoring.elasticsearch.sniffing: false
[2020-10-06T12:05:49,166][DEBUG][logstash.runner ] xpack.monitoring.collection.pipeline.details.enabled: true
[2020-10-06T12:05:49,166][DEBUG][logstash.runner ] xpack.monitoring.collection.config.enabled: true
[2020-10-06T12:05:49,166][DEBUG][logstash.runner ] node.uuid: ""
[2020-10-06T12:05:49,166][DEBUG][logstash.runner ] --------------- Logstash Settings -------------------
[2020-10-06T12:05:49,213][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"D:/logstash/config/pipelines.yml"}
[2020-10-06T12:05:49,275][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.1"}
[2020-10-06T12:05:49,322][DEBUG][logstash.agent ] Setting up metric collection
[2020-10-06T12:05:49,400][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2020-10-06T12:05:49,416][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2020-10-06T12:05:49,588][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2020-10-06T12:05:49,713][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-10-06T12:05:49,713][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-10-06T12:05:49,728][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2020-10-06T12:05:49,744][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2020-10-06T12:05:49,807][DEBUG][logstash.agent ] Starting agent
[2020-10-06T12:05:49,838][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"D:/logstash/config/pipelines.yml"}
[2020-10-06T12:05:49,916][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2020-10-06T12:05:49,947][DEBUG][logstash.agent ] Converging pipelines state {:actions_count=>0}
[2020-10-06T12:05:49,978][DEBUG][logstash.agent ] Starting puma
[2020-10-06T12:05:49,978][DEBUG][logstash.instrument.periodicpoller.os] Stopping
[2020-10-06T12:05:50,010][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
[2020-10-06T12:05:50,010][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2020-10-06T12:05:50,010][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
[2020-10-06T12:05:50,025][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
[2020-10-06T12:05:50,025][DEBUG][logstash.agent ] Shutting down all pipelines {:pipelines_count=>0}
[2020-10-06T12:05:50,041][DEBUG][logstash.agent ] Converging pipelines state {:actions_count=>0}
[2020-10-06T12:05:50,057][DEBUG][logstash.api.service ] [api-service] start
[2020-10-06T12:05:50,322][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-10-06T12:05:55,298][INFO ][logstash.runner ] Logstash shut down.

Also, since this is a Windows environment, please try using double backslashes (\\) instead of forward slashes (/) in your paths to see if that works.

I entered the path.config value "d:/logstash/bin/p1/logstash1.conf" (from the pipelines.yml) into Explorer and the conf file opened.

I was able to reproduce your issue by following your settings on my Windows Logstash installation, which otherwise works.

cmd
D:\ELK\logstash-7.8.0\bin>logstash --path.settings d:/ELK/logstash-7.8.0/config

.conf
D:\ELK\logstash-7.8.0\bin\p1\aida.conf

pipelines.yml
path.config: "d:/ELK/logstash-7.8.0/bin/p1/aida.conf"

Logs
[2020-10-06T18:38:56,902][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.

And this is my existing configuration that works.

.conf
D:\ELK\logstash-7.8.0\pipeline\aida.conf

D:\ELK\logstash-7.8.0\config\pipelines.yml

- pipeline.id: aida
  pipeline.workers: 4
  path.config: "D:\\ELK\\logstash-7.8.0\\pipeline\\aida.conf"

I am not sure whether some syntax went wrong or whether the drive letter is case sensitive.
But I hope this can resolve your problem.

You are great.
Thank you very much.

The syntax had gone wrong.

Glad that I could help!
