Logstash's Pipeline has terminated

I'm running Logstash on Docker, but I'm running into the following issue:

logstash_1  | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1  | [2018-02-28T04:02:06,811][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
logstash_1  | [2018-02-28T04:02:06,833][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
logstash_1  | [2018-02-28T04:02:07,991][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.1-java/modules/arcsight/configuration"}
logstash_1  | [2018-02-28T04:02:08,214][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash_1  | [2018-02-28T04:02:08,219][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash_1  | [2018-02-28T04:02:08,877][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"880eedb5-4fea-4989-801c-01157e5891ef", :path=>"/usr/share/logstash/data/uuid"}
logstash_1  | [2018-02-28T04:02:09,576][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.1"}
logstash_1  | [2018-02-28T04:02:10,281][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
logstash_1  | [2018-02-28T04:02:14,322][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
logstash_1  | [2018-02-28T04:02:15,218][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x19d31955 run>"}
logstash_1  | [2018-02-28T04:02:15,381][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
logstash_1  | [2018-02-28T04:02:18,052][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x19d31955 run>"}
logstash_logstash_1 exited with code 0
# 

I re-ran it with LOG_LEVEL=debug and noticed the following error:

logstash_1  | [2018-02-28T04:03:29,398][DEBUG][logstash.instrument.periodicpoller.cgroup] Error, cannot retrieve cgroups information {:exception=>"Errno::ENOENT", :message=>"No such file or directory - /sys/fs/cgroup/cpuacct/docker/9f2a8ce98ade7aa55bf99896ba8e088d1b028db9fcd332a3d56737e322bb5a31/cpuacct.usage"}
logstash_1  | [2018-02-28T04:03:29,471][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
logstash_1  | [2018-02-28T04:03:29,481][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
logstash_1  | [2018-02-28T04:03:29,543][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
logstash_1  | [2018-02-28T04:03:30,255][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x78c63da run>"}
logstash_1  | [2018-02-28T04:03:30,449][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
logstash_1  | [2018-02-28T04:03:33,195][DEBUG][logstash.inputs.elasticsearch] Closing {:plugin=>"LogStash::Inputs::Elasticsearch"}
logstash_1  | [2018-02-28T04:03:33,239][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x78c63da sleep>"}
logstash_1  | [2018-02-28T04:03:33,246][DEBUG][logstash.pipeline        ] Shutting down filter/output workers {:pipeline_id=>"main", :thread=>"#<Thread:0x78c63da run>"}
logstash_1  | [2018-02-28T04:03:33,255][DEBUG][logstash.pipeline        ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x49633726@[main]>worker0 run>"}
logstash_1  | [2018-02-28T04:03:33,257][DEBUG][logstash.pipeline        ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x13013948@[main]>worker1 run>"}
logstash_1  | [2018-02-28T04:03:33,266][DEBUG][logstash.pipeline        ] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x49633726@[main]>worker0 run>"}
logstash_1  | [2018-02-28T04:03:33,436][DEBUG][logstash.pipeline        ] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x13013948@[main]>worker1 dead>"}
logstash_1  | [2018-02-28T04:03:33,442][DEBUG][logstash.filters.mutate  ] Closing {:plugin=>"LogStash::Filters::Mutate"}
logstash_1  | [2018-02-28T04:03:33,455][DEBUG][logstash.outputs.csv     ] Closing {:plugin=>"LogStash::Outputs::CSV"}
logstash_1  | [2018-02-28T04:03:33,465][DEBUG][logstash.outputs.csv     ] Close: closing files
logstash_1  | [2018-02-28T04:03:33,490][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x78c63da run>"}
logstash_1  | [2018-02-28T04:03:33,544][DEBUG][logstash.instrument.periodicpoller.os] Stopping
logstash_1  | [2018-02-28T04:03:33,585][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
logstash_1  | [2018-02-28T04:03:33,587][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
logstash_1  | [2018-02-28T04:03:33,596][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
logstash_1  | [2018-02-28T04:03:33,710][DEBUG][logstash.agent           ] Shutting down all pipelines {:pipelines_count=>1}
logstash_1  | [2018-02-28T04:03:33,718][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
logstash_1  | [2018-02-28T04:03:33,720][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Stop/pipeline_id:main}
logstash_1  | [2018-02-28T04:03:33,744][DEBUG][logstash.pipeline        ] Stopping inputs {:pipeline_id=>"main", :thread=>"#<Thread:0x78c63da dead>"}
logstash_1  | [2018-02-28T04:03:33,749][DEBUG][logstash.inputs.elasticsearch] Stopping {:plugin=>"LogStash::Inputs::Elasticsearch"}
logstash_1  | [2018-02-28T04:03:33,762][DEBUG][logstash.pipeline        ] Stopped inputs {:pipeline_id=>"main", :thread=>"#<Thread:0x78c63da dead>"}
logstash_1  | [2018-02-28T04:03:33,780][DEBUG][logstash.pipeline        ] Worker terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x49633726@[main]>worker0 dead>"}
logstash_1  | [2018-02-28T04:03:33,785][DEBUG][logstash.pipeline        ] Worker terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x13013948@[main]>worker1 dead>"}
logstash_logstash_1 exited with code 0

Please advise.

First, check the Logstash configuration using a command like the one below:

$ logstash --config.test_and_exit -f /etc/logstash/conf.d/test.conf
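Since you're running it in Docker, the same check can be done in a throwaway container. A sketch, assuming a Compose service named logstash with the pipeline mounted at /usr/share/logstash/pipeline (the default location in the official image) — adjust the names to your setup:

$ docker-compose run --rm logstash logstash --config.test_and_exit -f /usr/share/logstash/pipeline/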

Unless you've enabled scheduled execution, I believe the elasticsearch input shuts down Logstash after it has run the configured query once.
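If you do want it to keep running and re-issue the query periodically, the input takes a cron-style schedule option. A minimal sketch (the hosts and query values here are placeholders, not taken from your config):

input {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    query => '{ "query": { "match_all": {} } }'
    # cron syntax: re-run the query at the start of every minute
    schedule => "* * * * *"
  }
}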

Lines such as "Pipeline started succesfully" and "Pipelines running" tell me config.test_and_exit would be OK.

There is no scheduled execution... my output is a csv, and at the end no .csv file is generated... I ran the same query through Kibana's Dev Tools and it gives the desired results, but not through Logstash :(

Can you post the config you use and the logstash.yml file?

# cat pipeline/*
input {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "x*"
    password => "x"
    user => "x"
    query => '
{
  "query": {
    "bool": {
      "should": [
        {"wildcard":{"x":"x*"}}
      ],
      "minimum_should_match": 1,
      "filter": [{
        "exists": {
          "field": "x"
        }
      }, {
        "exists": {
          "field": "x"
        }
      }, {
        "exists": {
          "field": "x"
        }
      }, {
        "range": {
          "@timestamp": {
            "gte": "2017-07-01T00:00:00.000Z",
            "lte": "2018-01-01T23:59:59.999Z"
          }
        }
      }]
    }
  }
}
'
    size => 500
    scroll => "5m"
  }
}
filter {
  mutate {
    add_field => { "x" => "x" }
  }
}
output {
  csv {
    fields => ["x","_id","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x","x"]
    path => "/tmp/x.csv"
  }
}
#

Looks easy enough... You're sure you can connect to the ES instance?
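A quick way to check is to run curl from a throwaway Logstash container (assuming curl is present in the image; user:password are placeholders for your real credentials):

$ docker-compose run --rm logstash curl -u user:password http://elasticsearch:9200/

If you get back the JSON blob with the cluster name and version, connectivity and credentials are fine.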

I'm pretty sure, or at least I don't see any reason why not... I have a few other pipeline configurations, and they connect to Elasticsearch without any issues; the only thing I changed is the pipeline configuration (a different query).

n/m, I figured out where I made a mistake, and as soon as I corrected it, it started to work :)

What was your mistake?

I was playing with Security and had locked myself down to certain docs while trying to query others... it was working, just with no data, since my query was returning nothing... hence nothing was written to the file...
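In hindsight, an easy way to spot this kind of silently empty result is to temporarily add a stdout output next to the csv one; if the query matches nothing, no events get printed, and the empty result is immediately visible in the container logs:

output {
  # temporary debug output: prints every event the query returns
  stdout { codec => rubydebug }
}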
