How much heap have you assigned ES?
8 gigs per /etc/default/elasticsearch:
ES_HEAP_SIZE=8g
I'd (temporarily) try increasing that and see if it helps.
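A sketch of what that temporary bump might look like on the Debian/Ubuntu packaging mentioned above (the `16g` value is purely illustrative, not a recommendation; the usual guidance is to stay at or below half of physical RAM and under ~30G):

```shell
# In /etc/default/elasticsearch, raise the heap, e.g.:
#   ES_HEAP_SIZE=16g
# then restart the service so the new setting takes effect:
sudo service elasticsearch restart
```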
It would explain why you're having problems. We recommend not having more than 300 shards per node. At the default 5+1, keeping 100 days, you have over 500 shards on your single node. If you were to add a second node, the problem would not go away, because the second node would cause a rebalance, and afterwards you'd wind up with 500 on each, since you'd be adding replicas (you could get around this by disabling replicas).
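The arithmetic behind that shard count, as a quick sketch (with the Logstash default of 5 primary shards per daily index; the replicas stay unassigned on a one-node cluster, so they don't count here):

```shell
# Shards resident on a single node after 100 days of daily indices
days=100
primaries_per_index=5
echo $(( days * primaries_per_index ))   # prints 500
```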
10% of the heap is set aside for the index cache and shard management overhead. Each active shard wants to get 250M of that 10%, and each inactive shard wants 4M. With 5 shards active (per day), that would be 1.25G of desired heap for active shards, and 1.98G for the inactive shards. When Elasticsearch cannot get the desired amount, it simply shortchanges each shard. I've seen these amounts compressed to 47M for actives and 700K for inactives. With only about 800M (10% of your 8G) available, you can see where you have some memory pressure for shard management.
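Running the numbers above for this cluster (495 inactive shards is simply the 500 total minus the 5 active ones):

```shell
heap_mb=8192
reserve_mb=$(( heap_mb / 10 ))         # ~819 MB reserved for cache/shard overhead
want_active_mb=$(( 5 * 250 ))          # 1250 MB desired by the 5 active shards
want_inactive_mb=$(( 495 * 4 ))        # 1980 MB desired by the 495 inactive shards
echo "$reserve_mb $want_active_mb $want_inactive_mb"
```

Desired usage (over 3G combined) far exceeds the roughly 800M actually reserved, which is the memory pressure described above.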
I think this explains why you are having timeouts when you're trying to delete indices.
You can reclaim much of this by using the Close API (or Curator's close command) to close indices not actively being queried. The data and indices remain on disk, but are unusable until you re-open the closed indices. This may allow you to continue to use your 1-node cluster in the interim.
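A minimal sketch of both approaches, assuming Elasticsearch on localhost:9200 and daily `logstash-YYYY.MM.DD` indices (the index name and the 30-day cutoff are illustrative, and Curator's command-line syntax differs between versions, so check `curator --help` for yours):

```shell
# Close one index directly via the Close API:
curl -XPOST 'http://localhost:9200/logstash-2015.01.01/_close'

# Or let Curator close everything older than 30 days:
curator --host localhost close indices --older-than 30 \
    --time-unit days --timestring '%Y.%m.%d'
```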
Thanks, gents. A question: is there a "care and feeding of your Elasticsearch instance" guide somewhere? Because honestly, I have no idea about any of this... what's best practice or normal, or how to change it. To be honest, I installed both Logstash and Elasticsearch via PPA, and manually downloaded Kibana. Beyond writing my logstash.conf, increasing the heap size, and disabling discovery in elasticsearch.yml, that is literally all I've done... practically nothing with the Elasticsearch back end. Thanks again... I have much to learn.
The answer to that question is—as is so often the case in life—"It depends."
The possible answers to this, and other related dilemmas, are quite varied and broad, depending on what you're trying to achieve. As a result, there is no single document that will answer the call for a "care and feeding of your Elasticsearch instance." In fact, we've recently added a full-day Core Elasticsearch: Operations training to help address this need. Yes, the topic is that broad, because there are so many possible use cases for Elasticsearch that one size cannot fit all.
If you cannot attend one of these trainings, asking questions in this community forum is a great way to learn and get answers. It will just come slower, and without all of the theoretical background and practical application instruction.
Thank you Aaron..that helps.