Hello,
Firstly, I am new to using Kibana and Logstash. I am running Logstash to parse one file and simultaneously also running Kibana. Is there any way to confirm, from the Kibana GUI, that the Logstash parsing has finished? What is the best option in the Kibana GUI to confirm this? Since I am new, I am seeing many options in Kibana and I don't want to experiment with each and every one of them.
KB and LS are not linked, so you cannot do this directly.
Though you should see the message count drop at a certain time, which would indicate it has finished.
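One quick way to check this outside Kibana (just a sketch, assuming Elasticsearch is reachable on localhost:9200) is to poll the document count with curl and watch for it to stop increasing:
# Total document count across all indices; run it again after a few seconds -
# once the "count" value stops growing, Logstash has finished indexing.
$ curl 'http://localhost:9200/_count?pretty'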
Hello warkolm,
Thanks for your reply. I am facing one issue while using Kibana. Following is my config file for Logstash:
input {
  file {
    path => "D:/Log//"
    start_position => "beginning"
    sincedb_write_interval => 0
  }
  file {
    path => "D:/Log/*/MIP/*"
    start_position => "beginning"
    sincedb_write_interval => 0
  }
  file {
    path => "D:/Log/*/NHGS/*"
    start_position => "beginning"
    sincedb_write_interval => 0
  }
}
filter {
  mutate {
    # strip the "D:/Log/" prefix from each event's path field
    gsub => ["path", "D:/Log/", ""]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["localhost:9200"]
    index => "stock"
    workers => 1
    codec => json
  }
}
Following are some excerpts from my log file:
MIP Started
MIP : 0
MIP : 1
MIP : 2
MIP : 3
MIP : 4
MIP : 5
MIP : 6
MIP : 7
MIP : 8
MIP : 9
..
..
Mip Finished .
But when I use this command to cross-check whether everything is fine, I get an error:
$ curl 'http://localhost:9200/stock'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   303  100   303    0     0  20200      0 --:--:-- --:--:-- --:--:-- 20200
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"stock","index":"stock"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"stock","index":"stock"},"status":404}
Can you please tell me where I am going wrong? I think the indexing is not happening properly. What should I do to correct this?
Try using the _cat APIs to confirm that you have the index in your cluster.
You could also install Marvel to help.
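For example (a minimal sketch, assuming Elasticsearch is on localhost:9200), the cat indices API lists every index in the cluster along with its document count, so you can see whether the "stock" index was ever created:
# Lists all indices with health, doc count and size; if "stock" is not listed,
# Logstash never created it.
$ curl 'http://localhost:9200/_cat/indices?v'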