How to tune and check the status of a Logstash config execution

Hi Magnus,
I have just started exploring Logstash.
I am currently trying to load a log file into Elasticsearch, but it is taking a long time to index the data.
Please help me understand how to tune the Logstash configuration.

The log file contains lines like this:
INFO : LM_36435 [Thu Apr 21 23:27:16 2016] : (20731|-1449096896) Starting execution of workflow [wf_test] in folder [Folder1] last saved by user [admin].

Config file:

input {
  file {
    path => "/logstash-2.1.1/conf/Infa_log/wf_test.log"
    start_position => "beginning"
    type => "infa_logs"
  }
}

filter {
  grok {
    match => [
      "message",
      "%{WORD:Severity} : %{WORD:Message_code} \[%{DAY:Day} %{MONTH:Month} %{MONTHDAY:Day_of_Month} %{HOUR:Hour}:%{MINUTE:Min}:%{SECOND:Sec} %{YEAR:Year}\] : \(%{NOTSPACE:Num}\) %{GREEDYDATA:Message}"
    ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "infa_log"
  }
}

Regards,
Asrar

There's not much in your configuration, so I don't think there's much tuning to be done there. You should check the processing pipeline documentation and the configuration options described there. For example, if you haven't saturated your CPU(s) yet, you can try increasing the number of pipeline workers, although since you run Elasticsearch on the same machine, they're going to compete for resources.
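
As a rough sketch (assuming Logstash 2.1, where the worker count is set with the -w/--filterworkers startup flag; newer releases call it --pipeline.workers, and the config file path below is a placeholder for your own):

# Start Logstash with four filter workers, roughly one per CPU core.
bin/logstash -f /path/to/your-config.conf -w 4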

What kind of message throughput are you currently getting? What makes you think that Logstash is the bottleneck rather than Elasticsearch?
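
If you want a quick way to measure it, one option is the bundled metrics filter. This sketch prints a one-minute event rate to the console every few seconds (the field names are the ones that plugin emits):

filter {
  metrics {
    meter => "events"
    add_tag => "metric"
  }
}

output {
  # Only the synthetic metric events carry the "metric" tag.
  if "metric" in [tags] {
    stdout {
      codec => line { format => "1m event rate: %{[events][rate_1m]}" }
    }
  }
}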

It turned out there was no problem with the config file itself; the run was not completing because the user I was running Logstash as did not have read access to the file being read.
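
For anyone else hitting this, a quick sanity check from the shell (a sketch; replace "logstash" with whatever account actually runs Logstash on your machine):

# Show the file's owner and permission bits.
ls -l /logstash-2.1.1/conf/Infa_log/wf_test.log
# Try to read the file as the Logstash user; a permission error confirms the problem.
sudo -u logstash head -1 /logstash-2.1.1/conf/Infa_log/wf_test.log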

I discovered this by switching to the stdin input plugin: with that input the events were written to Elasticsearch, so the filter and grok pattern were correct, which pointed to a file read access issue.
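
In case it helps others, the test setup looked roughly like this (a sketch; I kept the elasticsearch output in my actual test, but stdout with the rubydebug codec makes the check even quicker):

input { stdin { } }

# Keep the same grok filter block as in the original config.

output {
  # Print each parsed event to the console.
  stdout { codec => rubydebug }
}

Paste a sample log line into the console; if it comes back with the fields extracted, the pattern is fine and the problem is on the input side.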

Thanks for the link, Magnus. I will go through it.