Logstash stuck

I need your help, I'm desperate.
I'm trying to configure pipelines on a Logstash server, but it doesn't work. Please take a look; I'd be grateful for any advice on how to make it work.
Here is my configuration, followed by what I get in the log file when I start Logstash.

Input conf file, 01-beats-input.conf:

input {
  beats {
    port => 5044
    client_inactivity_timeout => 600
    ssl => true
    ssl_certificate => "/etc/pki/logstash//il-infra-ls-ssa-stg1.crt"
    ssl_key => "/etc/pki/logstash//il-infra-ls-ssa-stg1.pkcs8.key"
    ssl_verify_mode => "peer"
    ssl_certificate_authorities => "/etc/pki/logstash//il-infra-ls-ssa-stg1.crt"
  }
}


output {
  if [log_type] == "secure" {
    pipeline { send_to => secure }
  }
  else if [log_type] == "syslog" {
    pipeline { send_to => syslog }
  }
}
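
One thing to note in this distributor output: an event whose log_type matches neither condition reaches no output and is silently dropped. A catch-all branch avoids losing those events (sketch only; the fallback address is hypothetical and would need its own pipeline listening on it):

```conf
output {
  if [log_type] == "secure" {
    pipeline { send_to => secure }
  } else if [log_type] == "syslog" {
    pipeline { send_to => syslog }
  } else {
    # hypothetical catch-all; requires a pipeline with
    # input { pipeline { address => fallback } }
    pipeline { send_to => fallback }
  }
}
```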

Syslog conf file, 02-local-syslog-input.conf:

input {
  pipeline { address => syslog }
}

filter {
  mutate {
    replace => { "[@metadata][index]" => "syslog" }
  }

  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
    remove_field => "message"
  }

  date {
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}

output {
  pipeline { send_to => elastic_out }
}


And the secure conf file, 03-secure.conf:

input {
  pipeline { address => secure }
}

filter {
  mutate {
    replace => { "[@metadata][index]" => "secure" }
  }

  grok {
    match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:[system][auth][groupadd][name]}, GID=%{NUMBER:[system][auth][groupadd][gid]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][useradd][name]}, UID=%{NUMBER:[system][auth][useradd][uid]}, GID=%{NUMBER:[system][auth][useradd][gid]}, home=%{DATA:[system][auth][useradd][home]}, shell=%{DATA:[system][auth][useradd][shell]}$",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
    pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
    remove_field => "message"
  }

  date {
    match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }

  geoip {
    source => "[system][auth][ssh][ip]"
    target => "[system][auth][ssh][geoip]"
  }
}

output {
  pipeline { send_to => elastic_out }
}

pipelines.yml:

- pipeline.id: 01-beats-input
  path.config: /etc/logstash/conf.d/01-beats-input.conf
- pipeline.id: 02-local-syslog-input
  path.config: /etc/logstash/conf.d/02-local-syslog-input.conf
- pipeline.id: 03-secure
  path.config: /etc/logstash/conf.d/03-secure.conf
- pipeline.id: 30-elasticsearch-output
  path.config: /etc/logstash/conf.d/30-elasticsearch-output.conf
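
The 30-elasticsearch-output.conf file itself isn't shown in the post. For the elastic_out address used by the syslog and secure pipelines to be consumed, it presumably looks roughly like this (a sketch: the hosts are taken from the startup log below, and the index naming via [@metadata][index] is an assumption based on the upstream mutate filters):

```conf
input {
  pipeline { address => elastic_out }
}

output {
  elasticsearch {
    hosts => ["https://il-infra-es-stg1:9200", "https://il-infra-es-stg2:9200", "https://il-infra-es-stg3:9200"]
    # assumption: derive the index from the field set by the upstream mutate filters
    index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
}
```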


When I start it up, this is what I get in the log:

[2020-04-23T17:40:11,997][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-04-23T17:40:18,426][INFO ][org.reflections.Reflections] Reflections took 719 ms to scan 1 urls, producing 20 keys and 40 values
[2020-04-23T17:40:19,851][INFO ][logstash.filters.geoip   ][03-secure] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"}
[2020-04-23T17:40:20,582][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][01-beats-input] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-04-23T17:40:20,626][INFO ][logstash.javapipeline    ][01-beats-input] Starting pipeline {:pipeline_id=>"01-beats-input", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/01-beats-input.conf"], :thread=>"#<Thread:0x2ab6d347@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:37 run>"}
[2020-04-23T17:40:21,724][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][02-local-syslog-input] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-04-23T17:40:21,757][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][03-secure] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-04-23T17:40:21,857][INFO ][logstash.javapipeline    ][03-secure] Starting pipeline {:pipeline_id=>"03-secure", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/03-secure.conf"], :thread=>"#<Thread:0x6409860 run>"}
[2020-04-23T17:40:21,855][INFO ][logstash.javapipeline    ][02-local-syslog-input] Starting pipeline {:pipeline_id=>"02-local-syslog-input", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/02-local-syslog-input.conf"], :thread=>"#<Thread:0x56f4b17d@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38 run>"}
[2020-04-23T17:40:23,294][INFO ][logstash.outputs.elasticsearch][30-elasticsearch-output] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://il-infra-es-stg1:9200/, https://il-infra-es-stg2:9200/, https://il-infra-es-stg3:9200/]}}
[2020-04-23T17:40:23,970][INFO ][logstash.javapipeline    ][03-secure] Pipeline started {"pipeline.id"=>"03-secure"}
[2020-04-23T17:40:24,015][INFO ][logstash.javapipeline    ][02-local-syslog-input] Pipeline started {"pipeline.id"=>"02-local-syslog-input"}
[2020-04-23T17:40:24,039][INFO ][logstash.inputs.beats    ][01-beats-input] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-04-23T17:40:24,272][WARN ][logstash.outputs.elasticsearch][30-elasticsearch-output] Restored connection to ES instance {:url=>"https://il-infra-es-stg1:9200/"}
[2020-04-23T17:40:24,354][INFO ][logstash.outputs.elasticsearch][30-elasticsearch-output] ES Output version determined {:es_version=>7}
[2020-04-23T17:40:24,371][WARN ][logstash.outputs.elasticsearch][30-elasticsearch-output] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-04-23T17:40:24,467][WARN ][logstash.outputs.elasticsearch][30-elasticsearch-output] Restored connection to ES instance {:url=>"https://il-infra-es-stg2:9200/"}
[2020-04-23T17:40:24,548][WARN ][logstash.outputs.elasticsearch][30-elasticsearch-output] Restored connection to ES instance {:url=>"https://il-infra-es-stg3:9200/"}
[2020-04-23T17:40:24,571][INFO ][logstash.javapipeline    ][01-beats-input] Pipeline started {"pipeline.id"=>"01-beats-input"}
[2020-04-23T17:40:24,620][INFO ][logstash.outputs.elasticsearch][30-elasticsearch-output] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//il-infra-es-stg1:9200", "//il-infra-es-stg2:9200", "//il-infra-es-stg3:9200"]}
[2020-04-23T17:40:24,677][INFO ][org.logstash.beats.Server][01-beats-input] Starting server on port: 5044
[2020-04-23T17:40:24,820][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][30-elasticsearch-output] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-04-23T17:40:24,823][INFO ][logstash.javapipeline    ][30-elasticsearch-output] Starting pipeline {:pipeline_id=>"30-elasticsearch-output", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/30-elasticsearch-output.conf"], :thread=>"#<Thread:0x3908cf73@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:37 run>"}
[2020-04-23T17:40:24,818][INFO ][logstash.outputs.elasticsearch][30-elasticsearch-output] Using default mapping template
[2020-04-23T17:40:24,897][INFO ][logstash.javapipeline    ][30-elasticsearch-output] Pipeline started {"pipeline.id"=>"30-elasticsearch-output"}
[2020-04-23T17:40:24,949][INFO ][logstash.outputs.elasticsearch][30-elasticsearch-output] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-04-23T17:40:24,950][INFO ][logstash.agent           ] Pipelines running {:count=>4, :running_pipelines=>[:"30-elasticsearch-output", :"03-secure", :"01-beats-input", :"02-local-syslog-input"], :non_running_pipelines=>[]}
[2020-04-23T17:40:25,216][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

When I shut it down, I get:

[2020-04-23T17:44:23,428][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2020-04-23T17:44:28,637][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>55, "name"=>"[02-local-syslog-input]<pipeline", "current_call"=>"[...]/logstash-core/lib/logstash/plugins/builtin/pipeline/input.rb:26:in `sleep'"}], ["LogStash::Filters::Grok", {"match"=>{"message"=>"%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\\[%{POSINT:syslog_pid}\\])?: %{GREEDYDATA:syslog_message}"}, "id"=>"53f199b8b62701aff987cb6774a744a4d817400884bbb77131181a98b7214fc5", "pattern_definitions"=>{"GREEDYMULTILINE"=>"(.|\\n)*"}, "remove_field"=>"message"}]=>[{"thread_id"=>42, "name"=>"[02-local-syslog-input]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>43, "name"=>"[02-local-syslog-input]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>44, "name"=>"[02-local-syslog-input]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>46, "name"=>"[02-local-syslog-input]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}]}}
[2020-04-23T17:44:28,650][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2020-04-23T17:44:28,676][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>72, "name"=>"[30-elasticsearch-output]<pipeline", "current_call"=>"[...]/logstash-core/lib/logstash/plugins/builtin/pipeline/input.rb:26:in `sleep'"}, {"thread_id"=>68, "name"=>"[30-elasticsearch-output]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>69, "name"=>"[30-elasticsearch-output]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>70, "name"=>"[30-elasticsearch-output]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>71, "name"=>"[30-elasticsearch-output]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}]}}
[2020-04-23T17:44:28,680][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>52, "name"=>"[03-secure]<pipeline", "current_call"=>"[...]/logstash-core/lib/logstash/plugins/builtin/pipeline/input.rb:26:in `sleep'"}], ["LogStash::Filters::GeoIP", {"source"=>"[system][auth][ssh][ip]", "target"=>"[system][auth][ssh][geoip]", "id"=>"b8a759950e37876f3486ca577996003b539fbe1386da618389faf20d7e1feade"}]=>[{"thread_id"=>45, "name"=>"[03-secure]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>47, "name"=>"[03-secure]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>48, "name"=>"[03-secure]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>49, "name"=>"[03-secure]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}]}}
[2020-04-23T17:44:28,697][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2020-04-23T17:44:28,686][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>58, "name"=>"[01-beats-input]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}, {"thread_id"=>50, "name"=>"[01-beats-input]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>51, "name"=>"[01-beats-input]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>53, "name"=>"[01-beats-input]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}, {"thread_id"=>54, "name"=>"[01-beats-input]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}]}}
[2020-04-23T17:44:28,700][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2020-04-23T17:44:28,696][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2020-04-23T17:44:29,965][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"01-beats-input"}
[2020-04-23T17:44:30,035][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"02-local-syslog-input"}
[2020-04-23T17:44:30,074][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"03-secure"}
[2020-04-23T17:44:30,257][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"30-elasticsearch-output"}
[2020-04-23T17:44:30,929][INFO ][logstash.runner          ] Logstash shut down.

Any update on this, please?

Why do you say it is not running? The log file says it is running.

Yes, it's running, but it's stuck: Logstash doesn't receive any logs. I disabled all the pipelines except the Beats one, and it still doesn't write events into the file:

input {
  beats {
    port => 5044
    client_inactivity_timeout => 600
    ssl => true
    ssl_certificate => "/etc/pki/logstash//il-infra-ls-ssa-stg1.crt"
    ssl_key => "/etc/pki/logstash//il-infra-ls-ssa-stg1.pkcs8.key"
    ssl_verify_mode => "peer"
    ssl_certificate_authorities => "/etc/pki/logstash//il-infra-ls-ssa-stg1.crt"
  }
}


output {

    file {
      path => "/var/log/logstash/fallback.log"
      codec => rubydebug { metadata => "true" }
    }
    #stdout { codec => rubydebug { metadata => "true" } }

#    if [log_type] == "secure"
#     {
#      pipeline { send_to => secure }
#     }
#
#    else if [log_type] == "syslog"
#     {
#      pipeline {send_to => syslog }
#     }
#
#    else if [log_type] == "yum"
#     {
#      pipeline {send_to => yum }
#     }
#
#    else if [log_type] == "audit"
#     {
#      pipeline {send_to => audit }
#     }
#
#    else if [log_type] == "iis_log"
#     {
#      pipeline {send_to => iis }
#     }
#    else {
#          pipeline { send_to => fallback }
#        }

}
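
Since not a single event reaches even this minimal pipeline, the problem is most likely on the Beats connection itself (for example a TLS handshake failure, or the Filebeat output configuration) rather than in the pipeline-to-pipeline wiring. One way to isolate TLS would be to temporarily run the input without SSL (sketch only; Filebeat's output.logstash ssl settings would have to be disabled to match) and watch the Filebeat logs for connection errors at the same time:

```conf
input {
  beats {
    port => 5044
    ssl => false   # temporarily, to rule out TLS handshake failures
  }
}

output {
  stdout { codec => rubydebug { metadata => true } }
}
```

If events appear with SSL off, the certificate setup (including the ssl_verify_mode => "peer" client-certificate verification) is the place to look.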


This is the trace I get when running with only the file output:

[2020-04-23T23:09:43,509][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-04-23T23:09:45,321][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:09:47,323][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:09:47,389][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-04-23T23:09:47,395][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-04-23T23:09:48,063][DEBUG][org.logstash.execution.PeriodicFlush][beats] Pushing flush onto pipeline.
[2020-04-23T23:09:49,326][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:09:51,327][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:09:51,332][DEBUG][logstash.outputs.file    ][beats] Starting stale files cleanup cycle {:files=>{}}
[2020-04-23T23:09:51,342][DEBUG][logstash.outputs.file    ][beats] 0 stale files found {:inactive_files=>{}}
[2020-04-23T23:09:52,411][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-04-23T23:09:52,412][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-04-23T23:09:53,063][DEBUG][org.logstash.execution.PeriodicFlush][beats] Pushing flush onto pipeline.
[2020-04-23T23:09:53,329][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:09:55,331][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:09:57,333][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:09:57,422][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-04-23T23:09:57,424][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-04-23T23:09:58,063][DEBUG][org.logstash.execution.PeriodicFlush][beats] Pushing flush onto pipeline.
[2020-04-23T23:09:59,335][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:10:01,337][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:10:01,345][DEBUG][logstash.outputs.file    ][beats] Starting stale files cleanup cycle {:files=>{}}
[2020-04-23T23:10:01,353][DEBUG][logstash.outputs.file    ][beats] 0 stale files found {:inactive_files=>{}}
[2020-04-23T23:10:02,435][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-04-23T23:10:02,437][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-04-23T23:10:03,063][DEBUG][org.logstash.execution.PeriodicFlush][beats] Pushing flush onto pipeline.
[2020-04-23T23:10:03,339][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:10:05,341][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:10:07,343][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle
[2020-04-23T23:10:07,446][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-04-23T23:10:07,448][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-04-23T23:10:08,063][DEBUG][org.logstash.execution.PeriodicFlush][beats] Pushing flush onto pipeline.
[2020-04-23T23:10:09,345][DEBUG][logstash.outputs.file    ][beats] Starting flush cycle

which keeps repeating.
