Logstash silently fails when trying to import a CSV

Hello,

I'm trying to import a CSV file into ELK, and it fails silently even with the output set to stdout/debug.

    $ cat /vagrant/sinkhole.csv | /opt/logstash/bin/logstash -f /vagrant/logstash-sinkhole.conf  | tee log
    Jun 14, 2015 8:01:51 PM org.elasticsearch.node.internal.InternalNode <init>
    INFO: [logstash-elk-vagrant-32003-7948] version[1.5.1], pid[32003], build[5e38401/2015-04-09T13:41:35Z]
    Jun 14, 2015 8:01:51 PM org.elasticsearch.node.internal.InternalNode <init>
    INFO: [logstash-elk-vagrant-32003-7948] initializing ...
    Jun 14, 2015 8:01:51 PM org.elasticsearch.plugins.PluginsService <init>
    INFO: [logstash-elk-vagrant-32003-7948] loaded [], sites []
    Jun 14, 2015 8:01:54 PM org.elasticsearch.node.internal.InternalNode <init>
    INFO: [logstash-elk-vagrant-32003-7948] initialized
    Jun 14, 2015 8:01:54 PM org.elasticsearch.node.internal.InternalNode start
    INFO: [logstash-elk-vagrant-32003-7948] starting ...
    Jun 14, 2015 8:01:54 PM org.elasticsearch.transport.TransportService doStart
    INFO: [logstash-elk-vagrant-32003-7948] bound_address {inet[/0:0:0:0:0:0:0:0:9302]}, publish_address {inet[/10.0.2.15:9302]}
    Jun 14, 2015 8:01:54 PM org.elasticsearch.discovery.DiscoveryService doStart
    INFO: [logstash-elk-vagrant-32003-7948] elasticsearch/XZ4SjlIgQZWnvmrjEJ6Iag
    Jun 14, 2015 8:01:57 PM org.elasticsearch.cluster.service.InternalClusterService$UpdateTask run
    INFO: [logstash-elk-vagrant-32003-7948] detected_master [Grizzly][UMAeGzFgRb2TrVyY-0CjJw][elk-vagrant][inet[/127.0.0.1:9300]]{max_local_storage_nodes=1}, added {[Grizzly][UMAeGzFgRb2TrVyY-0CjJw][elk-vagrant][inet[/127.0.0.1:9300]]{max_local_storage_nodes=1},[logstash-elk-vagrant-31278-9802][pHxHzJoGSyiP03Fz_O6QFw][elk-vagrant][inet[/10.0.2.15:9301]]{client=true, data=false},}, reason: zen-disco-receive(from master [[Grizzly][UMAeGzFgRb2TrVyY-0CjJw][elk-vagrant][inet[/127.0.0.1:9300]]{max_local_storage_nodes=1}])
    Jun 14, 2015 8:01:57 PM org.elasticsearch.node.internal.InternalNode start
    INFO: [logstash-elk-vagrant-32003-7948] started
    $

I'm using a Vagrant-based ELK setup with the sinkhole config, as found here: https://github.com/juju4/ELK
Re-reading of the file is forced with sincedb_path and start_position.
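
For reference, the input section is roughly as follows (a sketch based on the repo; the exact paths and options may differ):

    input {
      file {
        path => "/home/vagrant/logs/sinkhole-*.csv"
        start_position => "beginning"
        # pointing sincedb at /dev/null discards the stored read offset,
        # so the file is re-read on every run
        sincedb_path => "/dev/null"
      }
    }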

Any pointers on why nothing is parsed and why there are no warnings/errors?

Thanks

I've had this happen when the file doesn't end with a CR/LF. I'd try opening it in vim or similar, adding one, and seeing if that works.
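
A quick way to check from the shell (a sketch; adjust the path to your file):

    # show the last byte of the file; the file input only emits
    # complete lines, so the last line needs a trailing newline
    tail -c 1 /vagrant/sinkhole.csv | od -c
    # if the output is not \n, append one:
    printf '\n' >> /vagrant/sinkhole.csv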

Thanks Mark. Already tried that. Checked again to be sure, but it still fails.

What does your config look like?
Have you tried changing the output to just stdout to make sure data is making it through?

You can see the config in the GitHub link, and yes, I tried with and without "stdout { codec => rubydebug }".

In the example output you are sending data into Logstash through stdin, although in the configuration file you have not defined stdin as an input, only a file path that is different from the one you are testing with: /home/vagrant/logs/sinkhole-*.csv. Does this location contain any files to be processed?
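
To rule the input out, you could test with a minimal configuration along these lines (a sketch; the column names are placeholders, not taken from your repo):

    input {
      stdin { }
    }
    filter {
      csv {
        # hypothetical column names, for illustration only
        columns => ["timestamp", "domain", "ip"]
        separator => ","
      }
    }
    output {
      stdout { codec => rubydebug }
    }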

I tried both the file input (yes, the log directory has one file) and the stdin input, and in both cases Logstash just starts but displays nothing else.
Elasticsearch stays empty.

    $ /opt/logstash/bin/logstash -f /vagrant/logstash-sinkhole.conf
    $ cat file | /opt/logstash/bin/logstash -f /vagrant/logstash-sinkhole.conf
    [...]
    INFO: [logstash-elk-vagrant-20285-9784] started

and after that, nothing more

You won't get anything else as the output sends to ES, not stdout.

As with the input, I tried both output configurations (stdout and ES) and checked both, without anything.
I only have the index properties/fields made available, but no data.
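
For what it's worth, this is a quick way to confirm whether any documents made it into Elasticsearch (the index pattern is an assumption; adjust it to match the config):

    # list all indices with their document counts
    curl 'localhost:9200/_cat/indices?v'
    # count documents matching the default Logstash index pattern
    curl 'localhost:9200/logstash-*/_count?pretty'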