Remove orphaned shards from Elasticsearch?

Hi!

I'm new to Elasticsearch, so apologies for my lack of technical explanation, but I'm hoping someone can help me out.

We've set up a single-node Logstash/Elasticsearch/Kibana implementation to aggregate our device logs at my org. Things were going relatively smoothly until last night, when the web interface became unresponsive. I tried restarting the services, but I still couldn't get it operational, so I bounced the server.

Upon boot I noticed it wasn't collecting logs. I checked the Head plugin and saw it was stuck in a "loop", trying to initialize the same 6 shards over and over. I tried a few things, but couldn't get these shards to assign correctly to the node. I decided to check the contents of the problem shards on disk for file corruption, file locks, permissions, whatever, and noticed they appeared to be empty of any actual log data. In each one, the \index folder had nothing but the segments_xx file, no actual content.
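If it helps to see what I'm describing without the Head plugin, the _cat/shards output shows the same picture. A quick sketch of how I'd pull it (assuming the node is listening on the default localhost:9200 - adjust if yours differs):

```python
import requests

# Assumption: single node on the default port
BASE = "http://localhost:9200"

# Per-shard view: the problem shards cycle between INITIALIZING and
# UNASSIGNED instead of ever reaching STARTED
print(requests.get(f"{BASE}/_cat/shards?v").text)
```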

So this is where I brought out the sledgehammer and decided the easiest resolution was to nuke the hollow shard directories and restart Elasticsearch. This kind of worked - logs are being collected and the remaining shards are assigned correctly... however, Elasticsearch still thinks it has these now-missing shards orphaned under the indices, which is keeping the cluster health from going back to yellow and my monitoring from shutting up!
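To be concrete about what I mean by "orphaned": the cluster health API still counts those deleted shards as unassigned, which is what's tripping the monitoring. Another small sketch under the same localhost:9200 assumption:

```python
import requests

BASE = "http://localhost:9200"  # assumption: default single-node setup

# Cluster-level view: "status" won't recover and "unassigned_shards"
# still includes the shards whose directories I deleted
health = requests.get(f"{BASE}/_cluster/health").json()
print(health["status"], health["unassigned_shards"])
```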

If anyone has a fix I would appreciate it - even more so if someone can suggest some things to check for the root cause. Thanks!