logstash-* unable to fetch mapping in Kibana

I have Logstash, Kibana, and Elasticsearch (all version 5.6.2) configured and running on Windows Server 2012 R2, accessed through a remote desktop connection. I am working on my Logstash config file to read data from a .txt file and send it to Elasticsearch and then to Kibana. I can't figure out for the life of me why Kibana says there are no matching indices for "logstash-*".

I am very new at this, so bear with me please!

logstash.conf:
input {
  file {
    path => "//ntsvc/logs/*.txt"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601}%{SPACE}Local4.%{LOGLEVEL}%{SPACE}%{IP}%{SPACE}%{CISCOTIMESTAMP}%%{SYSLOGPROG} %{GREEDYDATA:message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

The only settings I changed in logstash.yml are:
config.test_and_exit: true

config.reload.automatic: true

Some sample data from the .txt file (edited for sensitive information):
2017-09-18 00:00:01 Local4.Debug IP Sep 18 2017 00:00:01: %ASA-0-0000: UDP request discarded from IP to COVERT:IP

I've tested the config file in PowerShell and it says the configuration is OK, and everything is running. I have a feeling something is not configured correctly.

Thank you in advance!

Logstash is tailing the input file and waiting for more data. Please read the documentation about sincedb in the file input documentation and, in particular, check out the sincedb_path and start_position options.

Since I wrote the original post I added

start_position => "beginning"
sincedb_path => "/dev/null"

to the input section of my config file, and the issue still persists.

Don't use /dev/null on Windows, use nul instead.
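On Windows the equivalent null device is NUL, so a file input for this case might look like the following (a sketch; the path is taken from the original post):

```conf
input {
  file {
    path => "//ntsvc/logs/*.txt"
    start_position => "beginning"  # read files from the top, not just new lines
    sincedb_path => "NUL"          # Windows equivalent of /dev/null; no read position is persisted
  }
}
```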

My entire conf is as follows:
input {
  file {
    path => "\\ntsvc\Logs\*.txt"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}

filter {
  grok {
    # grok filter generated and working through the pattern generator with my data
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

I still have no indices in Kibana matching logstash-*.

I also tried changing the file path to forward slashes, and the problem remains.

Are you running Logstash as a service? Can you try cranking up the loglevel to debug and see if you see anything interesting? There should be something containing "discover" that indicates what files are matched by the filename pattern.

I also strongly suggest that you use a simple stdout { codec => rubydebug } output for now.
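For reference, swapping in such a debug output would mean temporarily replacing the elasticsearch output with something like this (a minimal sketch):

```conf
output {
  # print each event to the console in a readable form,
  # so you can confirm events are actually being produced
  stdout { codec => rubydebug }
}
```

Once events show up on the console, the elasticsearch output can be restored.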

I'm not 100% sure I follow. I set log.level to debug and ran with the --config.debug flag, looked through the debug output, and didn't see anything containing "discover". I also changed the output to what you suggested. And yes, I am running Logstash as a service.

I'm not 100% sure I follow. I set log.level to debug and ran with the --config.debug flag, looked through the debug output, and didn't see anything containing "discover".

Okay. Can you post the logfile somewhere?

And yes I am running logstash as a service.

Last time I looked (years ago, but still) Windows services don't automatically have access to network paths. Can you try with a local file that you're sure Logstash has access to?
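For example, you could copy one of the log files to a local directory and point the input at that copy (the local path here is purely illustrative):

```conf
input {
  file {
    # hypothetical local copy of one of the .txt log files
    path => "C:/logstash-test/sample.txt"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
```

If events appear with the local file but not the UNC path, the problem is the service account's access to the network share.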

I will give the local file a try. But after debugging the first time, when I tried to rerun the debug command so I could copy the debug output, I got a fatal error saying Logstash could not be started because another instance is already using the configured data directory.

I don't know why, or how to fix that.

Perhaps a lockfile was left in the data directory from the previous Logstash run (or you didn't actually kill that process).

I'm not sure what fixed it, but I found that lock file, deleted it, and got output in PowerShell from my data, so I changed the output back to elasticsearch, et voilà!

I have a logstash Index!!!

Thank you so so much for your help and patience with me!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.