Shard relocation: shard not deleted from original node

I'm running into a problematic scenario where, after a shard relocation completes, the shard's data is never removed from the originating node.

Scenario Details:

  1. The disk space high watermark is exceeded on a node
  2. To relieve the pressure, the cluster relocates a shard to another node
  3. The relocation succeeds (the shard shows as active on the new node)
  4. The original copy is orphaned: its data still exists on disk on the originating node (see the sketch below for how I spot it)
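
For reference, this is a minimal sketch of how I spot the leftover shard data on disk on the original node. The path below assumes the default ES 1.x on-disk layout (<path.data>/<cluster.name>/nodes/<n>/indices/<index>/<shard>); the data root and cluster name are examples from my setup, not anything standard.

    #!/usr/bin/env python
    """Walk the index's data directory on the original node and report how much
    data each subdirectory still holds (subdirectories are shard ids plus the
    index _state directory)."""
    import os

    # Assumption: default data path, a cluster named "my_cluster", node ordinal 0.
    DATA_ROOT = "/var/lib/elasticsearch/my_cluster/nodes/0/indices"
    INDEX = "logstash-2016.08.23"

    index_dir = os.path.join(DATA_ROOT, INDEX)
    if os.path.isdir(index_dir):
        for entry in sorted(os.listdir(index_dir)):
            subdir = os.path.join(index_dir, entry)
            size = 0
            for dirpath, _, files in os.walk(subdir):
                size += sum(os.path.getsize(os.path.join(dirpath, f)) for f in files)
            print("{0} -> {1:.1f} MB on disk".format(subdir, size / 1024.0 / 1024.0))
    else:
        print("no data for {0} under {1}".format(INDEX, DATA_ROOT))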

Other details:

  1. ES 1.7.4

  2. Cluster is currently running with allocation set to primaries, i.e. cluster.routing.allocation.enable: primaries (this is necessary for the short term; see the settings check sketched after the log excerpt below)

  3. Index is defined with number of replicas = 1 (primary + 1 replica)

  4. Logging shows no errors and appears to indicate the shard was removed:

    2016-09-15T18:18:34.040+0000 DEBUG indices.cluster - [node_data] [logstash-2016.08.23][31] removing shard (not allocated)
    2016-09-15T18:18:34.040+0000 DEBUG index - [node_data] [logstash-2016.08.23] [31] closing... (reason: [removing shard (not allocated)])
    2016-09-15T18:18:34.041+0000 DEBUG index.engine - [node_data] [logstash-2016.08.23][31] close now acquiring writeLock
    2016-09-15T18:18:34.041+0000 DEBUG index.engine - [node_data] [logstash-2016.08.23][31] close acquired writeLock
    2016-09-15T18:18:34.046+0000 DEBUG index.engine - [node_data] [logstash-2016.08.23][31] engine closed [api]
    2016-09-15T18:18:34.046+0000 DEBUG index - [node_data] [logstash-2016.08.23] [31] closed (reason: [removing shard (not allocated)])
    2016-09-15T18:18:34.047+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing ... (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.047+0000 DEBUG indices.cluster - [node_data] [logstash-2016.08.23] cleaning index (no shards allocated)
    2016-09-15T18:18:34.049+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing index service (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.049+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing index cache (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.050+0000 DEBUG indices - [node_data] [logstash-2016.08.23] clearing index field data (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.050+0000 DEBUG index.cache.filter.weighted - [node_data] [logstash-2016.08.23] full cache clear, reason [close]
    2016-09-15T18:18:34.051+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing analysis service (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.051+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing index engine (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.051+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing mapper service (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.051+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing index gateway (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.052+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closed... (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.052+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing index service (reason [removing index (no shards allocated)])
    2016-09-15T18:18:34.052+0000 DEBUG indices - [node_data] [logstash-2016.08.23] closing index query parser service (reason [removing index (no shards allocated)])
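
In case it's relevant, this is roughly how I confirm the allocation setting mentioned in item 2 above. The host is an example; I just dump the whole settings response rather than assume where the key lives.

    #!/usr/bin/env python
    """Dump the cluster-wide settings so I can confirm that allocation is still
    restricted to primaries."""
    import json
    import requests  # assumption: the requests library is installed

    ES_URL = "http://localhost:9200"  # assumption: a locally reachable node

    resp = requests.get(ES_URL + "/_cluster/settings")
    resp.raise_for_status()
    # Look for cluster.routing.allocation.enable under "persistent" or "transient".
    print(json.dumps(resp.json(), indent=2, sort_keys=True))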

Does anyone have an idea how to address this? When removing a relocated shard's data from a node, does the cluster need to confirm that both shard copies are allocated and active before the deletion occurs?
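
In case it helps with diagnosing, this is a minimal sketch of how I check whether both copies of the affected shard are currently allocated and active. The host is an example, and shard 31 is the one from the log excerpt above.

    #!/usr/bin/env python
    """List every copy of shard 31 of the index via the _cat/shards API and
    print its state and the node it lives on."""
    import requests  # assumption: the requests library is installed

    ES_URL = "http://localhost:9200"     # assumption: a locally reachable node
    INDEX = "logstash-2016.08.23"        # index from the log excerpt
    SHARD = "31"                         # shard from the log excerpt

    resp = requests.get("{0}/_cat/shards/{1}".format(ES_URL, INDEX))
    resp.raise_for_status()

    # Default _cat/shards columns: index shard prirep state docs store ip node
    for line in resp.text.splitlines():
        cols = line.split()
        if len(cols) >= 4 and cols[1] == SHARD:
            print(line)   # e.g. "logstash-2016.08.23 31 p STARTED ..."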

Thank you!