Logstash is stuck?

I'd like to load a CSV into Elasticsearch using Logstash. This is the output of my run:

C:\Users...\Documents\logstash-6.3.1\logstash-6.3.1\bin>logstash -f logstash_rplim3.config
Sending Logstash's logs to C:/Users/.../Documents/logstash-6.3.1/logstash-6.3.1/logs which is now configured via log4j2.properties
[2018-07-25T08:28:02,832][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-07-25T08:28:03,382][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.1"}
[2018-07-25T08:28:07,693][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pip
[2018-07-25T08:28:08,135][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}
[2018-07-25T08:28:08,146][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>ht
[2018-07-25T08:28:08,383][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-07-25T08:28:08,457][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-07-25T08:28:08,462][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the doc
[2018-07-25T08:28:08,483][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost
[2018-07-25T08:28:08,498][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-07-25T08:28:08,520][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>600
5s"}, "mappings"=>{"default"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text
", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}],
}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>
[2018-07-25T08:28:09,228][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x652ab12d run>"}
[2018-07-25T08:28:09,288][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-07-25T08:28:09,605][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

These are the top 3 lines of my CSV:

/ "entity_name","timestamp_utc","headline","source_name","provider_id"
"JPMorgan Chase & Co.","2018-07-12 10:03:56","Analyst at Peel Hunt Maintains Premier Oil PLC (LON:PMO)Stock Rating as a 'Buy'","Bibey Post","MRVR"
"Fifth Third Bancorp","2018-07-12 10:03:56","Analyst at Peel Hunt Maintains Premier Oil PLC (LON:PMO)Stock Rating as a 'Buy'","Bibey Post","MRVR" /

This is my config:

input {
  file {
    path => "C:\Users...\Documents\rplim.csv"
    start_position => "beginning"
  }
}
filter {
  csv {
    separator => ","
    #quote_char => "\""
    columns => ["entity_name", "timestamp_utc", "headline", "source_name", "provider_id"]
  }
  date {
    match => [ "timestamp_utc", "YYYY-MM-dd HH:mm:ss" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "localhost:9200"
    index => "rplim3"
  }
}
What am I doing wrong? It looks like Logstash is stuck, or waiting for something.

A file input is like "tail -f". It waits forever for new lines to get appended to the file. This looks normal to me.
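If the goal is a one-off CSV import rather than tailing a growing file, one common workaround is to stop the file input from persisting its read position, so the file is re-read from the beginning on every run. A minimal sketch of the input block, assuming Windows (the path is illustrative; on UNIX you would use "/dev/null" instead of "NUL"):

```
input {
  file {
    # Illustrative path; substitute your own. Forward slashes also work on Windows.
    path => "C:/Users/you/Documents/rplim.csv"
    start_position => "beginning"
    # "NUL" is the Windows null device; with no sincedb file, Logstash does not
    # remember what it has already read and processes the whole file each run.
    sincedb_path => "NUL"
  }
}
```

Note that start_position => "beginning" only applies to files Logstash has not seen before; once a sincedb entry exists for the file, it takes precedence.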

Thank you, Badger. I was about to ask: do I need to use this flag then?

C:\Users...\Documents\logstash-6.3.1\logstash-6.3.1\bin>logstash -f logstash_rplim3.config --config.reload.automatic

I don't understand "...NOTE: Use SIGHUP to manually reload the config. The default is false." said about the auto reload flag. Please explain how to manually reload the config?

SIGHUP is UNIX-specific; there is no Windows equivalent that I know of. On Windows, either use -r to automatically reload when the config file changes, or restart Logstash every time you change the configuration (which is expensive).
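For example, -r is the short form of the --config.reload.automatic flag you already tried, so the command would be:

```
C:\Users...\Documents\logstash-6.3.1\logstash-6.3.1\bin>logstash -f logstash_rplim3.config -r
```

By default Logstash checks the config file for changes every 3 seconds; that interval is tunable with --config.reload.interval.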

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.