Pipeline Worker Loop Initialization Error

I've been running into an issue with Logstash failing to initialize due to a pipeline worker loop error.

Containerized version being used: Logstash 8.8.2.

Error:

Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2023-07-21T22:44:15,293][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2023-07-21T22:44:15,298][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.8.2", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.7+7 on 17.0.7+7 +indy +jit [x86_64-linux]"}
[2023-07-21T22:44:15,301][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Xmx1g, -Xms1g, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2023-07-21T22:44:15,311][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2023-07-21T22:44:15,315][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2023-07-21T22:44:15,680][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"ebc73fba-c1fa-4877-abd5-0eeb8524da29", :path=>"/usr/share/logstash/data/uuid"}
[2023-07-21T22:44:16,689][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-07-21T22:44:16,890][INFO ][org.reflections.Reflections] Reflections took 210 ms to scan 1 urls, producing 132 keys and 464 values
[2023-07-21T22:44:16,974][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2023-07-21T22:44:16,992][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3bc58f26@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2023-07-21T22:44:16,998][ERROR][logstash.javapipeline    ][main] Worker loop initialization error {:pipeline_id=>"main", :error=>"Missing Filter End Vertex", :exception=>Java::JavaLang::IllegalStateException, :stacktrace=>"org.logstash.config.ir.CompiledPipeline$CompiledExecution.lambda$compileFilters$1(org/logstash/config/ir/CompiledPipeline.java:401)\njava.util.Optional.orElseThrow(java/util/Optional.java:403)\norg.logstash.config.ir.CompiledPipeline$CompiledExecution.compileFilters(org/logstash/config/ir/CompiledPipeline.java:401)\norg.logstash.config.ir.CompiledPipeline$CompiledExecution.<init>(org/logstash/config/ir/CompiledPipeline.java:386)\norg.logstash.config.ir.CompiledPipeline$CompiledUnorderedExecution.<init>(org/logstash/config/ir/CompiledPipeline.java:337)\norg.logstash.config.ir.CompiledPipeline.buildExecution(org/logstash/config/ir/CompiledPipeline.java:156)\norg.logstash.execution.WorkerLoop.<init>(org/logstash/execution/WorkerLoop.java:65)\njdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)\njdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(jdk/internal/reflect/NativeConstructorAccessorImpl.java:77)\njdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(jdk/internal/reflect/DelegatingConstructorAccessorImpl.java:45)\njava.lang.reflect.Constructor.newInstanceWithCaller(java/lang/reflect/Constructor.java:499)\njava.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:480)\norg.jruby.javasupport.JavaConstructor.newInstanceDirect(org/jruby/javasupport/JavaConstructor.java:237)\norg.jruby.RubyClass.new(org/jruby/RubyClass.java:911)\norg.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)\nRUBY.init_worker_loop(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:583)\nRUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:289)\norg.jruby.RubyProc.call(org/jruby/RubyProc.java:309)\njava.lang.Thread.run(java/lang/Thread.java:833)", :thread=>"#<Thread:0x3bc58f26@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 sleep>"}
[2023-07-21T22:44:17,001][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<RuntimeError: Some worker(s) were not correctly initialized>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:293:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:194:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:146:in `block in start'"], "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3bc58f26@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2023-07-21T22:44:17,001][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2023-07-21T22:44:17,008][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2023-07-21T22:44:17,074][INFO ][logstash.runner          ] Logstash shut down.
[2023-07-21T22:44:17,079][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit

Relevant Helm Chart section:

logstash:
  enabled: true
  logstashPipeline:
    logstash.conf: |-
      {{ .Files.Get "logstash-pipelines/my_apps.conf" }}
  logstashConfig:
    logstash.yml: |
      http.host: 0.0.0.0
      monitoring.elasticsearch.hosts: http://elasticsearch-master.elk.svc:9200
    pipelines.yml: |
      - pipeline.id: myapps
        path.config: "/usr/share/logstash/pipeline/logstash.conf"
  extraEnvs:
  - name: "ELASTICSEARCH_USERNAME"
    valueFrom:
      secretKeyRef:
        name: elasticsearch-master-credentials
        key: username
  - name: "ELASTICSEARCH_PASSWORD"
    valueFrom:
      secretKeyRef:
        name: elasticsearch-master-credentials
        key: password
  secretMounts: 
  - name: elasticsearch-master-certs
    secretName: elasticsearch-master-certs
    path: /usr/share/logstash/config/certs    
  service:
    annotations: {}
    type: ClusterIP
    loadBalancerIP: ""
    ports:
    - name: beats
      port: 5045
      protocol: TCP
      targetPort: 5045
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080

The pipeline is intentionally simple, just to test functionality:

input {
  beats {
    port => 5044
    codec => json {
        target => "[message]"
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch-master.elk.svc:9200"]
    user => '${ELASTICSEARCH_USERNAME}'
    password => '${ELASTICSEARCH_PASSWORD}'
    manage_template => false
    ssl => true
    cacert => '/usr/share/logstash/config/certs/ca.crt'
    index => "logstash-%{[fields][index_name]}-%{+yyyy.MM.dd}"
  }
}

The pipeline seems very simple, so I'm not sure why I am getting error=>"Missing Filter End Vertex". I've read the docs to make sure the pipeline overwrites the default logstash.conf, which is what I did in the Helm chart.

Any ideas on how to troubleshoot this issue?

This error normally happens when the pipeline is missing the input block.

I would double-check your my_apps.conf file and verify that it is actually replacing the /usr/share/logstash/pipeline/logstash.conf file in your pod.
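A quick way to check this is to inspect what Logstash actually sees inside the pod and then ask Logstash to parse it without starting the pipeline (the pod name and namespace below are assumptions based on your chart; adjust to match your deployment):

```shell
# Show the pipeline file as mounted inside the pod
# (replace logstash-0 and -n elk with your actual pod name and namespace)
kubectl exec -n elk logstash-0 -- cat /usr/share/logstash/pipeline/logstash.conf

# Validate the config and exit without starting the pipeline
kubectl exec -n elk logstash-0 -- \
  bin/logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf
```

If the first command prints an empty file or something other than your my_apps.conf content, the Helm templating is the problem, not the pipeline itself.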

After your comment I decided to take a little break and came back. Indeed, you are correct.

I had a typo in the path I passed to .Files.Get, which meant the pipeline file never got copied over, and whatever ended up overriding the default logstash.conf produced the error.
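For anyone who hits the same thing: .Files.Get silently returns an empty string when the path doesn't match a file in the chart, so nothing fails at install time. A rough way to catch this before deploying is to render the chart locally and look at what logstash.conf would actually contain (release name and chart path below are placeholders):

```shell
# Render the chart without installing and inspect the logstash.conf block;
# an empty value after "logstash.conf:" means .Files.Get found nothing.
helm template my-release ./my-chart | grep -A 10 'logstash.conf'
```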

Thanks for the help!

Please close topic.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.