S3 output debug


#1

Hi, I am quite new to Logstash. I run Logstash 5 on AWS Linux with the s3 and elasticsearch outputs configured as in the snippet from logstash.conf below.
The problem I face is that data is not written to S3. The S3 test file does appear in the bucket, and occasionally data is uploaded too, but most of the time it does not work.
The log file does not show anything critical.
Now I want to switch the logs to debug. I am new to Linux, and 'initctl start logstash --log.level error' won't work...
Can anybody provide some guidance?

Here is the output section of my logstash.conf:

output {
  s3 {
    region => "us-east-1"
    bucket => "aws-xxx"
    codec => "json_lines"
  }
  elasticsearch {
    hosts => "search-xxx:443"
    ssl => true
    index => "dummy"
  }
}
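
One thing worth knowing about this output: the s3 plugin buffers events to a local temporary file and only uploads that file to the bucket when it rotates, so with the defaults data can sit on disk for quite a while before anything shows up in S3. A sketch of the same output with explicit rotation settings (the size_file and time_file values here are illustrative, not taken from the original post):

```
output {
  s3 {
    region    => "us-east-1"
    bucket    => "aws-xxx"
    codec     => "json_lines"
    size_file => 2048   # rotate (and upload) once the temp file reaches this many bytes
    time_file => 5      # ...or after this many minutes, whichever comes first
  }
}
```

With settings like these you should see files land in the bucket within a few minutes even at low event rates, which makes it easier to tell a buffering delay apart from a real upload failure.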

Another thing I noted: although the test file is put in S3, the log says nothing about it:

[ec2-user@ip-10-213-72-21 ~]$ more /var/log/logstash/log*
[2018-05-09T10:53:43,376][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-05-09T10:53:43,380][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-05-09T10:53:43,391][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2018-05-09T10:53:43,392][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2018-05-09T10:53:43,414][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"44672d84-c9a0-461b-91f5-a071888ef75e", :path=>"/var/lib/logstash/uuid"}
[2018-05-09T10:54:35,893][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>xxx:443/]}}
[2018-05-09T10:54:35,896][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>xxx:443/, :path=>"/"}
[2018-05-09T10:54:36,061][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"xxx:443/"}
[2018-05-09T10:54:36,121][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-05-09T10:54:36,137][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-05-09T10:54:36,148][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//xxx:443"]}
[2018-05-09T10:54:36,152][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2018-05-09T10:54:36,814][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-05-09T10:54:36,817][INFO ][logstash.pipeline ] Pipeline main started
[2018-05-09T10:54:36,851][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-05-09T10:54:36,891][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}


(Magnus Bäck) #2

Now I want to switch the logs to debug. I am new to Linux, and 'initctl start logstash --log.level error' won't work...

You can set the log level in logstash.yml.
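
For reference, a minimal sketch of what that looks like (paths assume the RPM/DEB package layout used on AWS Linux; adjust if your install differs):

```
# /etc/logstash/logstash.yml
log.level: debug        # accepted values: fatal, error, warn, info, debug, trace
path.logs: /var/log/logstash
```

Then restart the service so the setting takes effect, e.g. with Upstart: sudo initctl restart logstash. Debug output is very verbose, so remember to set it back to info once you have what you need.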


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.