Problem with S3 Input

This is all new to me, but I'm trying to pull gzip logs from a Cisco-managed S3 bucket using the input below. The error message I'm seeing in /var/log/logstash/logstash.log is "Logstash S3 input, stop reading in the middle of the file, we will read it again when logstash is started". Could an incorrect codec and charset cause this issue? I've been struggling with this for a while. Any tips would be very much appreciated.

s3 {
  access_key_id => ""
  bucket => "cisco-managed-us-west-1"
  region => "us-west-1"
  secret_access_key => ""
  prefix => "/"
  type => "s3"
}
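Side note on the codec question: as far as I understand, the s3 input transparently decompresses objects whose keys end in .gz, so no gzip codec should be needed; an explicit plain codec with a charset only matters if the log bytes aren't UTF-8. A sketch of a fuller input (the sincedb_path value here is illustrative, not from the thread) that persists read position across restarts:

s3 {
  access_key_id => ""
  secret_access_key => ""
  bucket => "cisco-managed-us-west-1"
  region => "us-west-1"
  prefix => "/"
  type => "s3"
  # Persist how far each object has been read, so a restart
  # resumes instead of re-reading files from the beginning.
  # Path is an assumption; any writable location works.
  sincedb_path => "/var/lib/logstash/s3_sincedb"
}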

Hm. That message could be more clear.

From looking at the code, it looks like that warning is emitted when the input plugin is stopped while it is in the middle of reading a file. It is a warning, not an error message. The next time the input is started up, it will start with the beginning of the file that it had been in the middle of reading. Some or all of the events in the file may be emitted again.

The plugin could be stopped as part of a pipeline reload (e.g., if you're changing the pipeline configuration while Logstash is running), because you've initiated a shutdown, or because of a separate issue with the pipeline (in which case there should be a relevant ERROR message in the logs).

Hi Yauuie- Thanks for taking the time to respond. You might be right that these warnings are being generated by me as I test changes to the S3 input- I hadn't considered that. I'll pause testing for a period today to see if that's the case. I've found no errors in logstash.log regarding the pipeline. What I know for sure is that no log data is showing up in Kibana (i.e. the Nagios LMS GUI). I just read about the debug mode option and will give it a try. I've pasted the combined list of log alerts below, which seems to support your observations. Thanks again.

{:timestamp=>"2018-07-18T09:21:03.722000-0700", :message=>"stopping pipeline", :id=>"main"}
{:timestamp=>"2018-07-18T09:21:38.390000-0700", :message=>"Pipeline main started"}
{:timestamp=>"2018-07-18T10:37:37.748000-0700", :message=>"SIGTERM received. Shutting down the agent.", :level=>:warn}
{:timestamp=>"2018-07-18T10:37:37.749000-0700", :message=>"stopping pipeline", :id=>"main"}
{:timestamp=>"2018-07-18T10:37:37.809000-0700", :message=>"Logstash S3 input, stop reading in the middle of the file, we will read it again when logstash is started", :level=>:warn}
{:timestamp=>"2018-07-18T10:38:15.728000-0700", :message=>"Pipeline main started"}

One follow-up... While running Logstash in debug mode I did observe "The shutdown process appears to be stalled due to busy or blocked plugins", presumably because the S3 input plugin is receiving data at a slow pace. I'm not sure how to remedy this, but hopefully this might be useful to someone else down the road.
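For anyone else who hits the stalled-shutdown warning: if I'm reading the docs right, Logstash 5.x and later expose an unsafe-shutdown setting that forces termination even when plugins are busy, at the risk of losing in-flight events. A sketch, assuming a logstash.yml-style settings file:

# logstash.yml -- force shutdown even if plugins are busy or blocked
# (may drop in-flight events; default is false)
pipeline.unsafe_shutdown: true

The same setting can be passed on the command line as --pipeline.unsafe_shutdown.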

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.