I'm going crazy! logstash -f /etc/logstash/conf.d/01-wazuh.conf ---> works fine!
In /etc/logstash/conf.d there is only 01-wazuh.conf. Then logstash -f /etc/logstash/pipelines.yml --->
[Converge PipelineAction::Create] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError"
pipelines.yml is not a pipeline config file, it's a pipeline settings file. When you use -f, Logstash expects a pipeline configuration file or a directory containing pipeline configurations, which is why the command with 01-wazuh.conf works.
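For reference, a minimal sketch of the two ways to point Logstash at the same pipeline (paths taken from this thread):
# run one pipeline config directly, bypassing pipelines.yml
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/01-wazuh.conf
# or pass only the settings directory and let Logstash read pipelines.yml from it
/usr/share/logstash/bin/logstash --path.settings /etc/logstash
where /etc/logstash/pipelines.yml then points at the pipeline configs, e.g.:
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"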
Ok, thank you.
Next step:
/usr/share/logstash/bin/logstash --config.test_and_exit --path.settings /etc/logstash --config.debug
[2020-06-15T07:50:05,897][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2020-06-15T07:50:05,980][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2020-06-15T07:50:06,090][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>}
[2020-06-15T07:50:06,095][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/01-wazuh.conf"}
[2020-06-15T07:50:06,155][DEBUG][logstash.config.pipelineconfig] -------- Logstash Config ---------
[2020-06-15T07:50:06,161][DEBUG][logstash.config.pipelineconfig] Config from source {:source=>LogStash::Config::Source::MultiLocal, :pipeline_id=>:main}
[2020-06-15T07:50:06,167][DEBUG][logstash.config.pipelineconfig] Config string {:protocol=>"file", :id=>"/etc/logstash/conf.d/01-wazuh.conf"}...
... Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
It reads /etc/logstash/conf.d/01-wazuh.conf
When I start
/usr/share/logstash/bin/logstash --path.settings /etc/logstash
[2020-06-15T07:54:56,555][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2020-06-15T07:54:56,598][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.1"}
[2020-06-15T07:54:56,668][DEBUG][logstash.agent ] Setting up metric collection
[2020-06-15T07:54:56,764][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2020-06-15T07:54:57,216][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2020-06-15T07:54:57,434][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-06-15T07:54:57,444][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
When I start /usr/share/logstash/bin/logstash --path.settings /etc/logstash or systemctl start logstash, it doesn't read /etc/logstash/conf.d/01-wazuh.conf as the main pipeline.
In pipelines.yml:
- pipeline.id: main
path.config: "/etc/logstash/conf.d/*.conf"
It's really difficult to troubleshoot a problem without the complete log messages. Logstash behaves differently depending on how you start it.
It looks like it is reading the pipeline configuration. Are you sure your pipelines.yml is formatted properly? What happens after this error? Did Logstash shut down?
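For comparison, a correctly formatted pipelines.yml for this setup would be something like the sketch below; note that path.config has to be indented under the same list item as pipeline.id, since YAML is indentation sensitive:
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"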
Hi @ptamba, thanks
After this error, Logstash runs at 99% CPU and the log shows:
[2020-06-18T07:40:36,398][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-06-18T07:40:36,414][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-06-18T07:40:41,419][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-06-18T07:40:41,422][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
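Since the process is clearly still running, one way to see which pipelines it actually loaded is the node info API (a sketch, assuming the HTTP API is enabled on its default port 9600):
# lists each loaded pipeline id with its workers, batch size and batch delay
curl -XGET 'http://localhost:9600/_node/pipelines?pretty'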
If you use pipeline.id: main, it's better to comment out everything in pipelines.yml, go back to logstash.yml, and check the default config:
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
pipeline.workers: 8
#
# How many events to retrieve from inputs before sending to filters+workers
#
pipeline.batch.size: 10000
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
pipeline.batch.delay: 25
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
In my configuration, I use pipelines.yml only if I have several folders, and I use pipeline names instead of "main".
And it works flawlessly.
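As an illustration of that multi-folder setup, pipelines.yml might look like the sketch below (the pipeline names and paths are only placeholders):
- pipeline.id: wazuh
  path.config: "/etc/logstash/conf.d/wazuh/*.conf"
- pipeline.id: apache
  path.config: "/etc/logstash/conf.d/apache/*.conf"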