Nope... there isn't. I stated in my post on Jan 19 that all the logstash indices were green and opening. I resolved the "unassigned" and "red" statuses a while back. The issue now is that no new logstash indices (folders) are being created.
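For anyone following along later, cluster and per-index status can be checked with the _cat APIs, e.g. (HOST being the server's address):

curl -XGET HOST:9200/_cat/health?v
curl -XGET HOST:9200/_cat/indices?v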
Oh, right.
Can you create one manually?
curl -XPUT HOST:9200/testindex
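If that returns {"acknowledged":true}, you can confirm it's visible with:

curl -XGET HOST:9200/testindex?pretty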
Yes, I can... I accidentally created one before and then deleted it... but yes, I can create a new index, and it creates a new folder.
So it's probably an LS problem; try restarting it.
I apologize, I am still too new to know all the lingo. LS? If that means "local service," I have restarted all of them: once immediately after I freed up disk space below the high watermark, and again the following day. No joy.
If LS doesn't mean local services, then you'll have to translate for me.
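(Side note for anyone hitting the same wall: the watermark thresholds in question can be inspected with

curl -XGET HOST:9200/_cluster/settings?pretty

and if that comes back empty, the defaults are in effect: cluster.routing.allocation.disk.watermark.low at 85% and .high at 90% disk usage.)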
Logstash. Make sure things are flowing through there.
I have restarted the logstash service several times over the past few days... sometimes by itself (with Kibana restarted at the same time, since it depends on LS), and sometimes as part of a restart of the elasticsearch service (and its dependents). It has never helped. I restarted it just now, and still no joy. I am also having a hard time finding any logs related to logstash (this is a Windows server, and I can't see any log files in the logstash-2.0.0 folder or its subfolders).
In addition, tonight is "patch night" and the server has been patched and rebooted, so all services have been restarted. Still no joy on new logstash indices or folders.
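(For completeness: on this box the restarts amount to stopping and starting the Windows service, e.g. from an elevated prompt, assuming it's registered under the name "logstash":

net stop logstash
net start logstash

)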
It's probably worth making a new thread in the LS area with configs so we can dig further.
Mark,
I tried getting help over in the Logstash forum, and made some progress. I decided to come back to this forum because it appears the problem might still be in my elasticsearch configuration.
Short (hopefully) recap:
At one point, logstash was doing its thing, and elasticsearch was doing its thing. I don't believe I had "control" of elasticsearch (via curl commands or the Sense plugin), as I had never tried that before.
The disks filled up.
I started looking into how to get things functioning again, and just "using" the environment.
At some point, I changed the elasticsearch.yml config file to include:
network.host: aa.bb.cc.dd
where aa.bb.cc.dd is the public IP of this Windows-based ELK server.
I was now able to control the elasticsearch environment, but both logstash and kibana stopped working.
It turns out that with that entry in place, elasticsearch listens only on that IP, and no longer on localhost (as far as I can determine).
I found the following page.
I was unable to get what was suggested on that page to work. In the end, I removed the network.host entry completely, so the system is now working again, but I am unable to control it (query, delete, etc.) via curl commands from a separate linux server or via the Sense plugin (and I have yet to figure out how to control things locally on the Windows ELK server).
Can you either suggest articles, etc., on how to control the elasticsearch environment from the Windows 2012 R2 ELK server (without loading Chrome and Sense on it), or suggest how to configure elasticsearch to listen on both localhost and the public IP?
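(One thing I've since spotted in the 2.x network settings docs, though I haven't verified it on this box: network.host apparently accepts a list, including the special _local_ value for loopback, so something like this in elasticsearch.yml might bind both at once:

network.host: ["_local_", "aa.bb.cc.dd"]

And for controlling things locally without Chrome and Sense, a native Windows curl build, or PowerShell's Invoke-RestMethod, should be able to issue the same commands from the server itself.)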
Perused, and digested. No help, darn it! By reading over that, plus the linked Network Changes and Network Settings pages, I did determine I had a typo in my settings: I had bind.host instead of bind_host. Regardless, after correcting that, and trying with and without quotes, and with and without a space between the two IP addresses, I still have no joy. Logstash is still functioning, but I am unable to connect to and control elasticsearch from another linux server, or from Chrome and the Sense plugin; both of those systems are off the localhost box.
Still stuck.
Here are my current settings:
network.bind_host : [10.1.2.3,127.0.0.1]
network.publish_host: 10.1.2.3
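(In case the exact formatting matters, a more conventional YAML form would have no space before the colon and quote the list entries, though I can't yet say whether that's the difference:

network.bind_host: ["10.1.2.3", "127.0.0.1"]
network.publish_host: 10.1.2.3

)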