Ending logstash after csv parsing

I have a csv file that I need to index into my ES instance, and I only need Logstash to read through to the end of the file and then stop the service. Is this possible in Elastic Stack 7?
My config file looks like this:

        input {
                file {
                        path => "C:/Users/.../empData1.csv"
                        start_position => "beginning"
                        sincedb_path => "NUL"
                }
        }
        filter {
                csv {
                        separator => ","
                        columns => ["X", "Y", "Z"]
                }
                mutate { convert => ["X", "integer"] }
                mutate { convert => ["Y", "integer"] }
        }

        output {
                elasticsearch {
                        hosts => ["localhost:9200"]
                        index => "name"
                        document_type => "otherName"
                }
        }

I turned on debug logging, and once the file had been read I kept getting these four lines in a repeating pattern until I manually killed the process:

[2019-06-11T08:19:34,895][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu

[2019-06-11T08:19:34,978][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}

[2019-06-11T08:19:34,979][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

[2019-06-11T08:19:35,537][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.

My best alternative solution is to send this debug output to a log file, watch for the repeated occurrence of these lines, and then send a kill signal to the process. Is there a better way to stop Logstash after it is done parsing my csv?

It can be done if you cat the file into a stdin input. It cannot be done using a file input.
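As a sketch of that approach (the pipeline filename and csv path here are illustrative, not from the original post), you would replace the file input with a bare stdin input:

        input {
                stdin { }
        }

and then pipe the file in from the shell, for example:

        cat empData1.csv | bin/logstash -f pipeline.conf

When stdin reaches end-of-file, the stdin input shuts the pipeline down and Logstash exits on its own.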

Let's say that I am reading the data from an AWS S3 bucket instead, and I change the input to this:

        input {
            s3 {
                bucket => "pdcs-dump-test"
                access_key_id => "xxx"
                secret_access_key => "yyy"
                region => "us-west-2"
            }
        }

would there be something different I could do?

No, I think stdin is the only input that will cause Logstash to exit when it is finished.

You can actually set watch_for_new_files to false in the s3 input plugin, and that will terminate the Logstash process once it has finished going through the data.
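To illustrate, the S3 input above would only need that one extra option (reusing the same placeholder credentials from the earlier snippet):

        input {
            s3 {
                bucket => "pdcs-dump-test"
                access_key_id => "xxx"
                secret_access_key => "yyy"
                region => "us-west-2"
                watch_for_new_files => false
            }
        }

With watch_for_new_files set to false, the plugin processes the objects currently in the bucket and then stops instead of polling for new ones.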

Oh, you are right. It overrides stop rather than calling do_stop directly as stdin does, but it does shut logstash down.