Logstash shuts down within a few seconds of starting up

Hi all, I am trying to ingest a log file, but Logstash suddenly stops shortly after starting.
Logstash 7.2.0
Error log:

```
[2019-07-29T11:59:28,765][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-07-29T11:59:28,789][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-29T11:59:31,003][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 3, column 1 (byte 76) after ", :backtrace=>["C:/Users/DESK/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "C:/Users/DESK/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "C:/Users/DESK/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "C:/Users/DESK/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "C:/Users/DESK/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:24:in `initialize'", "C:/Users/DESK/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "C:/Users/DESK/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-07-29T11:59:31,625][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-07-29T11:59:36,393][INFO ][logstash.runner          ] Logstash shut down.
```

Looks like there is a syntax error in your pipeline configuration. Can you post your pipeline config?
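The error points at line 3, column 1 of the config (after the first 76 bytes). For reference, a pipeline file needs at least an `input` and an `output` section at the top level; a minimal sketch (the plugins here are just placeholders, not a recommendation):

```
input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}
```

You can also check a config file for syntax errors without starting the pipeline: `bin/logstash -f <path-to-config> --config.test_and_exit`.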

```
# List of pipelines to be loaded by Logstash
#
# This document must be a list of dictionaries/hashes, where the keys/values are pipeline settings.
# Default values for omitted settings are read from the `logstash.yml` file.
# When declaring multiple pipelines, each MUST have its own `pipeline.id`.
#
# Example of two pipelines:
#
# - pipeline.id: test
#   pipeline.workers: 1
#   pipeline.batch.size: 1
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
# - pipeline.id: another_test
#   queue.type: persisted
#   path.config: "/tmp/logstash/*.config"
#
# Available options:
#
#   # name of the pipeline
#   pipeline.id: mylogs
#
#   # The configuration string to be used by this pipeline
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
#
#   # The path from where to read the configuration text
#   path.config: "/etc/conf.d/logstash/myconfig.cfg"
#
#   # How many worker threads execute the Filters+Outputs stage of the pipeline
#   pipeline.workers: 1 (actually defaults to number of CPUs)
#
#   # How many events to retrieve from inputs before sending to filters+workers
#   pipeline.batch.size: 125
#
#   # How long to wait in milliseconds while polling for the next event
#   # before dispatching an undersized batch to filters+outputs
#   pipeline.batch.delay: 50
#
#   # Internal queuing model, "memory" for legacy in-memory based queuing and
#   # "persisted" for disk-based acked queueing. Default is memory
#   queue.type: memory
#
#   # If using queue.type: persisted, the page data files size. The queue data consists of
#   # append-only data files separated into pages. Default is 64mb
#   queue.page_capacity: 64mb
#
#   # If using queue.type: persisted, the maximum number of unread events in the queue.
#   # Default is 0 (unlimited)
#   queue.max_events: 0
#
#   # If using queue.type: persisted, the total capacity of the queue in number of bytes.
#   # Default is 1024mb or 1gb
#   queue.max_bytes: 1024mb
#
#   # If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.acks: 1024
#
#   # If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.writes: 1024
#
#   # If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
#   # Default is 1000, 0 for no periodic checkpoint.
#   queue.checkpoint.interval: 1000
#
#   # Enable Dead Letter Queueing for this pipeline.
#   dead_letter_queue.enable: false
#
#   If using dead_letter_queue.enable: true, the maximum size of dead letter queue for this pipeline. Entries
#   will be dropped if they would increase the size of the dead letter queue beyond this setting.
#   Default is 1024mb
#   dead_letter_queue.max_bytes: 1024mb
#
#   If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
#   Default is path.data/dead_letter_queue
#
#   path.dead_letter_queue:
```

This is logstash.yml. There is an issue with the .conf file in the conf.d directory.

```
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
```

This is logstash-sample.conf.

No, this is not logstash.yml. It is pipelines.yml.

This is the sample configuration. Are you running this configuration as-is, or have you appended your own configuration options to it?

```
input {
  file {
    path => "‪C:\Apache24\logs\error.log"
    type => "apache-error"
  }
}
output {
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    index => "logs_apache"
    document_type => "logs"
  }
}
```

This is the configuration I am trying to use for indexing.

You may have to replace

`"‪C:\Apache24\logs\error.log"`

with

`"‪C:\\Apache24\\logs\\error.log"`

It may not be working in this case:

```
[2019-07-29T15:12:40,882][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-07-29T15:12:40,913][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-29T15:12:57,169][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2019-07-29T15:12:57,808][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2019-07-29T15:12:57,939][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-07-29T15:12:57,959][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-29T15:12:58,065][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://127.0.0.1:9200"]}
[2019-07-29T15:12:58,239][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-07-29T15:12:58,791][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-07-29T15:12:58,750][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-07-29T15:12:58,838][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x1cfc0be7 run>"}
[2019-07-29T15:13:04,944][ERROR][logstash.javapipeline    ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<ArgumentError: File paths must be absolute, relative path specified: ‪C:\\Apache24\\logs\\error.log>, :backtrace=>["C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-file-4.1.10/lib/logstash/inputs/file.rb:269:in `block in register'", "org/jruby/RubyArray.java:1792:in `each'", "C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-file-4.1.10/lib/logstash/inputs/file.rb:267:in `register'", "C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:192:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:191:in `register_plugins'", "C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:292:in `start_inputs'", "C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:248:in `start_workers'", "C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:146:in `run'", "C:/Users/Megha/Desktop/Downloads/logstash-7.2.0/logstash-7.2.0/logstash-core/lib/logstash/java_pipeline.rb:105:in `block in start'"], :thread=>"#<Thread:0x1cfc0be7 run>"}
[2019-07-29T15:13:04,983][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2019-07-29T15:13:05,859][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-07-29T15:13:10,694][INFO ][logstash.runner          ] Logstash shut down.
```

Looks like an issue with the way the path is specified. I'm doing it on Linux.
Can you try this:

`"‪C:/Apache24/logs/error.log"`

If this doesn't work, try escaping each "/" with "//".
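For what it's worth, a sketch of the file input with forward slashes. On Windows it is also common to set `sincedb_path` (e.g. to `"NUL"`, which disables sincedb persistence) and, to read an existing file from the top, `start_position => "beginning"` (the default only tails new lines). These extra options are suggestions to adapt, not a confirmed fix:

```
input {
  file {
    path => "C:/Apache24/logs/error.log"
    type => "apache-error"
    start_position => "beginning"   # read the existing file from the top
    sincedb_path => "NUL"           # common on Windows; disables sincedb persistence
  }
}
```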

I tried, but I get the same set of errors again.
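One more thing that may be worth checking, since the error says "relative path specified" even though the path looks absolute: a path copied from a file manager can carry an invisible directional character (such as U+202A, LEFT-TO-RIGHT EMBEDDING) at the front, which would make the absolute-path check fail. This is only a hypothesis for this thread; a quick Python sketch to inspect a path string for such hidden characters:

```python
import unicodedata

def hidden_chars(path):
    """Return (index, codepoint, name) for every character that is not printable ASCII."""
    return [
        (i, hex(ord(c)), unicodedata.name(c, "UNKNOWN"))
        for i, c in enumerate(path)
        if not (32 <= ord(c) <= 126)
    ]

# A path copied from Windows Explorer sometimes starts with an invisible
# LEFT-TO-RIGHT EMBEDDING character (U+202A):
suspect = "\u202aC:\\Apache24\\logs\\error.log"
print(hidden_chars(suspect))  # [(0, '0x202a', 'LEFT-TO-RIGHT EMBEDDING')]
print(hidden_chars("C:\\Apache24\\logs\\error.log"))  # []
```

If the list is non-empty, retyping the path by hand (instead of pasting it) removes the stray character.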

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.