I have a CSV file that I need to index into my Elasticsearch instance, and I only need Logstash to read through to the end of the file and then stop the service. Is this possible in Elastic Stack 7?
My config file looks like this:
input {
        file {
                path => "C:/Users/.../empData1.csv"
                start_position => "beginning"
                sincedb_path => "NUL"
        }
}
filter {
        csv {
                separator => ","
                columns => ["X", "Y", "Z"]
        }
        mutate { convert => ["X", "integer"] }
        mutate { convert => ["Y", "integer"] }
}
output {
        elasticsearch{
                hosts => ["localhost:9200"]
                index => "name"
                document_type => "otherName"
        }
        stdout{}
}
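For what it's worth, the file input plugin bundled with Logstash 7 documents a `mode => "read"` setting (as opposed to the default tail mode) together with an `exit_after_read` option, which sounds like it targets exactly this case. A sketch of the input block, untested, with option names taken from the plugin docs (the `file_completed_log_path` value is a placeholder of my own; note that in read mode `file_completed_action` defaults to "delete", which would remove the CSV):

input {
        file {
                path => "C:/Users/.../empData1.csv"
                mode => "read"                  # read the file once instead of tailing it
                exit_after_read => true         # shut Logstash down once every file is read
                file_completed_action => "log"  # default is "delete"; log instead of removing the CSV
                file_completed_log_path => "C:/logstash/completed.log"
                sincedb_path => "NUL"
        }
}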
I turned on debug logging, and once the file had been read, these four lines kept repeating until I manually killed the process:
[2019-06-11T08:19:34,895][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2019-06-11T08:19:34,978][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-06-11T08:19:34,979][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-06-11T08:19:35,537][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
My best alternative so far is to redirect this debug output to a log file, look for the repeated occurrence of these lines, and then send a kill signal to the process. Is there a better way to stop Logstash after it has finished parsing my CSV?
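In case it helps illustrate the fallback: the idle-detection part could be sketched like this (the `is_idle` helper and its threshold are my own invention, not anything Logstash provides; a real watcher would tail the log and pass it a sliding window of recent lines):

```python
import re

# The idle pipeline emits a PeriodicFlush line roughly every 5 seconds,
# interleaved with periodicpoller (cgroup/JVM) noise.
FLUSH_PATTERN = re.compile(r"PeriodicFlush\] Pushing flush onto pipeline")

def is_idle(lines, threshold=3):
    """Return True once `threshold` consecutive flush lines are seen.

    `lines` is an iterable of recent log lines. Periodic poller chatter
    is ignored; any other line is treated as real pipeline activity and
    resets the count.
    """
    flushes = 0
    for line in lines:
        if FLUSH_PATTERN.search(line):
            flushes += 1
            if flushes >= threshold:
                return True
        elif "periodicpoller" not in line:
            flushes = 0  # real pipeline activity resets the count
    return False
```

Once `is_idle` fires, the watcher would send the kill signal (e.g. `Stop-Process` on Windows) to the Logstash process, which is the part I was hoping to avoid.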