I am trying to execute multiple .conf files in Logstash with logstash -f "path*.conf", but Logstash processes each event as many times as I have .conf files. If I have 5 .conf files, I get output for one document as loglevel:[INFO,INFO,INFO,INFO,INFO]. How can I prevent the duplicates?
This is the expected behavior. When you run Logstash this way it will merge all the *.conf
files into one single pipeline, and all inputs will be sent to all outputs unless you use conditionals in your outputs.
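For example, if you do keep running with -f and a glob, each conf file can tag its own events and route them in the output with a conditional. This is only a minimal sketch; the tag names, file paths, and index names here are hypothetical, not taken from your setup:

```conf
# logstash_a.conf (hypothetical): tag events from this input
input {
  file {
    path => "D:/app_a.log"
    tags => ["app_a"]
  }
}
# Route only events carrying this tag to this output
output {
  if "app_a" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "app_a"
    }
  }
}
```

With a matching conditional in every merged conf file, each event is written once, to the output belonging to its own input.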
When you want to run multiple pipelines (*.conf files) in a way that they are independent from each other, you need to use the pipelines.yml
file and configure the multiple pipelines there. After that you need to run Logstash as a service, or without the -f
option.
The documentation for multiple pipelines can be found here.
I tried that, but I don't know why only the first pipeline was executed. Logstash started and ended both pipelines, but only the logs of the first pipeline were indexed.
You would need to share your pipelines.yml and the Logstash logs from when you start it.
- pipeline.id: my_first_pipeline
  path.config: "D:\\logstash-8.8.1\\config\\logstash_1.conf"
- pipeline.id: my_second_pipeline
  path.config: "D:\\logstash-8.8.1\\config\\logstash_2.conf"
[2023-08-16T12:15:22,511][INFO ][logstash.runner ] Log4j configuration path used is: D:\logstash-8.8.1\config\log4j2.properties
[2023-08-16T12:15:22,584][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.8.1", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.7+7 on 17.0.7+7 +indy +jit [x86_64-mswin32]"}
[2023-08-16T12:15:22,615][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2023-08-16T12:15:34,138][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-08-16T12:15:34,986][INFO ][org.reflections.Reflections] Reflections took 906 ms to scan 1 urls, producing 132 keys and 464 values
[2023-08-16T12:15:38,098][INFO ][logstash.javapipeline ] Pipeline my_first_pipeline is configured with pipeline.ecs_compatibility: v8 setting. All plugins in this pipeline will default to ecs_compatibility => v8 unless explicitly configured otherwise.
[2023-08-16T12:15:38,101][INFO ][logstash.javapipeline ] Pipeline my_second_pipeline is configured with pipeline.ecs_compatibility: v8 setting. All plugins in this pipeline will default to ecs_compatibility => v8 unless explicitly configured otherwise.
[2023-08-16T12:15:38,272][INFO ][logstash.outputs.elasticsearch][my_first_pipeline] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2023-08-16T12:15:38,289][INFO ][logstash.outputs.elasticsearch][my_second_pipeline] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2023-08-16T12:15:39,137][INFO ][logstash.outputs.elasticsearch][my_first_pipeline] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2023-08-16T12:15:39,186][INFO ][logstash.outputs.elasticsearch][my_second_pipeline] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2023-08-16T12:15:39,414][WARN ][logstash.outputs.elasticsearch][my_second_pipeline] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2023-08-16T12:15:39,424][INFO ][logstash.outputs.elasticsearch][my_second_pipeline] Elasticsearch version determined (8.8.1) {:es_version=>8}
[2023-08-16T12:15:39,416][WARN ][logstash.outputs.elasticsearch][my_first_pipeline] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2023-08-16T12:15:39,464][WARN ][logstash.outputs.elasticsearch][my_second_pipeline] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>8}
[2023-08-16T12:15:39,487][INFO ][logstash.outputs.elasticsearch][my_first_pipeline] Elasticsearch version determined (8.8.1) {:es_version=>8}
[2023-08-16T12:15:39,680][INFO ][logstash.outputs.elasticsearch][my_second_pipeline] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"forex"}
[2023-08-16T12:15:39,681][WARN ][logstash.outputs.elasticsearch][my_first_pipeline] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>8}
[2023-08-16T12:15:39,745][INFO ][logstash.outputs.elasticsearch][my_second_pipeline] Data streams auto configuration (data_stream => auto or unset) resolved to false
[2023-08-16T12:15:39,825][INFO ][logstash.outputs.elasticsearch][my_first_pipeline] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"holiday"}
[2023-08-16T12:15:40,006][WARN ][logstash.filters.grok ][my_second_pipeline] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2023-08-16T12:15:40,008][INFO ][logstash.outputs.elasticsearch][my_first_pipeline] Data streams auto configuration (data_stream => auto or unset) resolved to false
[2023-08-16T12:15:40,106][INFO ][logstash.outputs.elasticsearch][my_second_pipeline] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2023-08-16T12:15:43,376][INFO ][logstash.outputs.elasticsearch][my_first_pipeline] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2023-08-16T12:15:43,368][WARN ][logstash.filters.grok ][my_first_pipeline] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2023-08-16T12:15:43,675][INFO ][logstash.javapipeline ][my_first_pipeline] Starting pipeline {:pipeline_id=>"my_first_pipeline", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["D:/logstash-8.8.1/config/logstash_2.conf"], :thread=>"#<Thread:0x7c5e2ae3@D:/logstash-8.8.1/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2023-08-16T12:15:43,630][INFO ][logstash.javapipeline ][my_second_pipeline] Starting pipeline {:pipeline_id=>"my_second_pipeline", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["D:/logstash-8.8.1/config/logstash_1.conf"], :thread=>"#<Thread:0x1053e000@D:/logstash-8.8.1/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2023-08-16T12:15:45,812][INFO ][logstash.javapipeline ][my_second_pipeline] Pipeline Java execution initialization time {"seconds"=>2.11}
[2023-08-16T12:15:45,818][INFO ][logstash.javapipeline ][my_first_pipeline] Pipeline Java execution initialization time {"seconds"=>2.12}
[2023-08-16T12:15:46,175][INFO ][logstash.inputs.file ][my_second_pipeline] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"D:/logstash-8.8.1/data/plugins/inputs/file/.sincedb_9457514873289fb1eaf23c9dcf288064", :path=>["C:/Users/SoniAns/Documents/tcilforex.log"]}
[2023-08-16T12:15:46,224][INFO ][logstash.inputs.file ][my_first_pipeline] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"D:/logstash-8.8.1/data/plugins/inputs/file/.sincedb_618e495e1a65e7878af0fe49b655ec67", :path=>["D:/tcilholiday.log"]}
[2023-08-16T12:15:46,276][INFO ][logstash.javapipeline ][my_second_pipeline] Pipeline started {"pipeline.id"=>"my_second_pipeline"}
[2023-08-16T12:15:46,327][INFO ][logstash.javapipeline ][my_first_pipeline] Pipeline started {"pipeline.id"=>"my_first_pipeline"}
[2023-08-16T12:15:46,324][INFO ][filewatch.observingtail ][my_second_pipeline][1103a5f15e480902d0349b8b5ca23721d069f4b5dd8081f859f3f44ae8b482c9] START, creating Discoverer, Watch with file and sincedb collections
[2023-08-16T12:15:46,343][INFO ][filewatch.observingtail ][my_first_pipeline][69e4728f18e1f0f92cb26647c04f6fda8a809fee27b84e828b607d9d44703dcb] START, creating Discoverer, Watch with file and sincedb collections
[2023-08-16T12:15:46,450][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:my_first_pipeline, :my_second_pipeline], :non_running_pipelines=>[]}
[2023-08-16T12:15:46,450][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:my_first_pipeline, :my_second_pipeline], :non_running_pipelines=>[]}
Both of your pipelines are running, but there is something a little confusing:
[2023-08-16T12:15:43,675][INFO ][logstash.javapipeline ][my_first_pipeline] Starting pipeline {:pipeline_id=>"my_first_pipeline", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["D:/logstash-8.8.1/config/logstash_2.conf"]
And
[2023-08-16T12:15:43,630][INFO ][logstash.javapipeline ][my_second_pipeline] Starting pipeline {:pipeline_id=>"my_second_pipeline", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["D:/logstash-8.8.1/config/logstash_1.conf"]
Did you edit anything in your log before sharing? It seems that the log was edited, which can lead to confusion.
To understand why you are receiving data from just one pipeline, you would need to share the conf file for each of your pipelines.
Share them using the preformatted text option (the </> button): paste the configuration, select it, and click the button to properly format it as code.
I didn't edit the log files, which is why I was confused too; I couldn't troubleshoot.
Here is the conf file:
input {
  file {
    path => ["D:/forexsample.log"]
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:loglevel} \(%{GREEDYDATA:class}\:%{GREEDYDATA:method}\:%{NUMBER:line}\) - %{GREEDYDATA:error_message}"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    user => "elastic"
    index => "forex"
    password => "VcBQRCx+Db9c2ATbLMFe"
  }
  stdout {
    codec => rubydebug
  }
}
Then something may be wrong, because your pipelines.yml looks like this:
- pipeline.id: my_first_pipeline
  path.config: "D:\\logstash-8.8.1\\config\\logstash_1.conf"
- pipeline.id: my_second_pipeline
  path.config: "D:\\logstash-8.8.1\\config\\logstash_2.conf"
So my_first_pipeline should run logstash_1.conf, and my_second_pipeline should run logstash_2.conf.
But your logs say that they are inverted. Double-check the pipelines.yml to confirm; to troubleshoot, you may remove or comment out one of the pipelines and see if Logstash is indeed running this pipelines.yml.
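For instance, temporarily leaving only one entry active makes it obvious whether this file is the one Logstash is reading. A minimal sketch, based on the pipelines.yml you shared:

```yaml
# Comment out one pipeline; if the remaining one is the only one that
# starts, Logstash is reading this pipelines.yml.
# - pipeline.id: my_first_pipeline
#   path.config: "D:\\logstash-8.8.1\\config\\logstash_1.conf"
- pipeline.id: my_second_pipeline
  path.config: "D:\\logstash-8.8.1\\config\\logstash_2.conf"
```

If the logs then still show the other conf file being loaded, a different pipelines.yml (or a stale copy) is being picked up.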
You have 2 conf files; please share both, and indicate which pipeline each one belongs to by the name of the conf file.
Both conf files are the same; only the path and index have been changed.
Yeah, I had interchanged logstash_1 and logstash_2 just to see if they would work the other way round.
Can you please help me troubleshoot whether there are any issues in the conf file?
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.