Logstash not creating new ES indices

I am a novice, if that, at ELK. A former co-worker set up our ELK environment to receive data from several McAfee firewalls, and it was working until the disk ran out of space. I have at least temporarily resolved the full-disk condition. We have a single node, so I have set ES not to create replicas. I can create a new index and delete that index in ES. I can see in the ES logs that the re-routing due to the high watermark has ceased, though where it was re-routing to, I couldn't tell you. I cannot find any Logstash logs anywhere. I can see via Wireshark that data is being streamed to my ELK server from the firewalls. Yet no new logstash-* indices are being created. This is a Windows ELK server. Services have been restarted many times, and full reboots have been performed several times. I could use some assistance getting Logstash to create new indices in ES.
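(For reference, disabling replicas on all existing indices can be done with a settings update along these lines; a sketch, assuming ES is reachable on the default port, so adjust the host as needed:

curl -XPUT http://localhost:9200/_settings -d '{ "index": { "number_of_replicas": 0 } }'
)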
Here are the contents of the logstash.conf file:
input {
  tcp {
    port => 5544
  }
  udp {
    port => 1514
    type => MFE_Logs
  }
}

filter {
  if [type] == "MFE_Logs" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{SYSLOGTIMESTAMP:syslogtime} %{WORD} %{WORD}: %{GREEDYDATA:dataraw}" }
    }
    kv {
      source => "dataraw"
      field_split => ","
      remove_field => [ "dataraw", "cache_hit", "logid" ]
    }
    mutate {
      convert => [ "[data][bytes_written_to_client]", "integer" ]
      convert => [ "[data][bytes_written_to_server]", "integer" ]
    }
    date {
      match => ["syslogtime", "MMM dd HH:mm:ss", "MMM d HH:mm:ss"]
      locale => "en_US"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

So Logstash is running on Windows? You're going to have to locate the logs. How are you starting Logstash? Is it run as a service? With what arguments?

Correct. Logstash (and ES and Kibana) are running on a single Windows 2012 server.
I've searched the logstash-2.0.0 folder (under which is a bin folder containing the .conf file copied above) for *.log and got nothing. Where else might it be? What is the default naming convention for the log file? I'll search all local drives for that file.
Logstash is running as a Windows service (as are the other two). The startup is automatic, with the elasticsearch service as a dependency. There are no "start parameters" listed. The path to the executable is "G:\ELK_Stack\logstash-2.0.0\bin\nssm.exe" with no switches listed.

What is the default naming convention for the log file

There is no default logfile path. It's supplied as an argument.
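With Logstash 2.x the log file location is supplied with the -l (or --log) flag at startup, e.g. something like this (a sketch; the exact path is just a suggestion):

  logstash.bat agent -f logstash.conf -l G:\ELK_Stack\logstash-2.0.0\logstash.log

Without that flag Logstash writes its own log messages to stdout, which under a service wrapper typically ends up nowhere visible.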

There are no "start parameters" listed.

That doesn't make sense. How do you point Logstash to the configuration file(s) to use?

LOL... This was built by a former coworker. He had been working on it for several months, and finally got it working about a week before he quit. So, I have no idea how it does what it does... but, with the help of the fantastic folks on this forum, I most assuredly will be finding out.
Am I correct in guessing, from your comments, that Logstash can't run without being pointed to a config file... that it won't fall back on a default config file location or something like that?
And if the only way to get Logstash to create a log file is to supply that info at startup... I'd guess that's why I don't have one.

Before I go trying random commands, I'll try to find the documentation for the nssm.exe command... do you have a good resource to share?

Am I correct in guessing, from your comments, that Logstash can't run without being pointed to a config file... that it won't fall back on a default config file location or something like that?

Correct.

Before I go trying random commands, I'll try to find the documentation for the nssm.exe command... do you have a good resource to share?

No, I haven't used that tool.

You'll have to locate how Logstash is invoked anyway, but Process Explorer is useful for inspecting the arguments of running processes.
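If the service was installed with a reasonably recent NSSM, you may also be able to query its settings from the command line (an assumption on my part, since this depends on the NSSM version):

  G:\ELK_Stack\logstash-2.0.0\bin\nssm.exe get logstash Application
  G:\ELK_Stack\logstash-2.0.0\bin\nssm.exe get logstash AppParameters

nssm edit logstash opens the same settings in a small GUI.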

Well, I finally found out how Logstash is invoked via nssm.exe. The event log for nssm shows the following:

Started G:\ELK_Stack\logstash-2.0.0\bin\run.bat for service logstash in G:\ELK_Stack\logstash-2.0.0\bin.

The run.bat file contains the following:

logstash.bat agent -f logstash.conf

So the logstash.conf I copied before is indeed the one being used.

I'm not sure whether the contents of logstash.bat relate to nssm or to Logstash; I can post it if it would help, but from a quick look, no special switches appear to be part of the startup.

So, relating to the logstash.conf file I posted a few days ago: using Wireshark, I can see data coming to the ELK server on UDP 1514 from our MFE devices, so it appears the sending side is working. Using netstat -an I can see a UDP listener on 1514 on the ELK server, but no established connections. Same story on the devices sending log info: no established connections, but a lot of UDP data on 1514 headed towards the ELK server.
Any thoughts on where to check next?

There are no "established connections" in UDP, so things sound normal. Make sure you capture Logstash's logs and inspect them for more clues. You may have to increase logging verbosity with --verbose or even --debug.
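For example, run.bat could be changed along these lines so that Logstash both writes a log file and logs verbosely (a sketch; the log file location is just a suggestion, any writable path will do):

  logstash.bat agent -f logstash.conf -l G:\ELK_Stack\logstash-2.0.0\logstash.log --verbose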

Thanks.
I figured out how to get Logstash to log something. I haven't enabled --verbose or --debug yet. Here is what I am getting in the log file...

{:timestamp=>"2016-01-25T15:12:12.476000-0600", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down!", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused: connect", :level=>:error}

I can tell you for certain that Elasticsearch is reachable... I can delete existing indices, or even create new ones, using the Sense plugin for Chrome or curl commands from a Linux box (as the ELK server is running on Windows). To enable reaching Elasticsearch from Chrome on my PC or from Linux, I modified elasticsearch.yml with the following line:
network.host: aa.bb.cc.dd
(where aa.bb.cc.dd is the local IP address of the ELK server)

By doing that, would it flat out stop listening on "localhost"?
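A quick way to test that theory (a sketch; run it on the ELK server itself) is to see whether anything still answers on the loopback address:

  curl http://localhost:9200/

If network.host is bound to a single non-loopback address, that request should be refused, which would match the "Connection refused" in the Logstash log above.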

OK, that appears to have been it. By adding "network.host: <IP address>" to the elasticsearch.yml file, I stopped Elasticsearch from listening on localhost, which is where Kibana and Logstash connect, so I effectively broke them both.

Previously, in a thread over on the Elasticsearch forum, I worked to get control of my Windows ELK server. I found that putting in

network.host: aa.bb.cc.dd
where aa.bb.cc.dd is the public IP address of the ELK server

allowed me to have control of my Elasticsearch installation. Without it, I can't do anything with it.

Today, I found a page that indicates I can add the following to my elasticsearch.yml file:

network.bind_host: ["yourhost", "localhost"]
network.publish_host: yourhost

I tried that... with and without the double quotes, and with 127.0.0.1 as well as localhost. While Logstash (and Kibana) started functioning, I was again unable to control Elasticsearch (via curl commands or commands routed from the Sense plugin).

So, any ideas on how to control my Windows-based Elasticsearch system without breaking Logstash?

If you want to listen on all interfaces, you can set network.bind_host to 0.0.0.0. I'm not sure whether you can listen on a list of arbitrary interfaces.
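For example, something like this in elasticsearch.yml (a sketch; replace aa.bb.cc.dd with the server's own routable address):

  network.bind_host: 0.0.0.0
  network.publish_host: aa.bb.cc.dd

That way Elasticsearch accepts connections on every interface, including localhost for Logstash and Kibana, while still advertising the external address to clients.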