Performance with a large number of indices

I am trying to run an Elasticsearch cluster where each of our customers (~5000, and that number is not likely to change much) has their own index. Unfortunately, update times are very slow.

Creating or deleting an index takes ~5s, and data is only uploaded at about 100 docs/s (using the bulk API). Each index is fairly small: ~10K docs of ~1KB each, so roughly 10MB in total. Because the data is so small, on each data change we create a new index with the new data, repoint a stable alias to it, and then delete the old index.
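
For reference, each update cycle looks roughly like this (a simplified sketch using the Python elasticsearch client; the index/alias names, settings and helper are placeholders, not our exact code):

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["localhost:9200"])

def rebuild_customer_index(customer_id, docs, old_index, new_index):
    # 1. Create the replacement index (placeholder settings/mapping).
    es.indices.create(index=new_index, body={
        "settings": {"number_of_shards": 1, "number_of_replicas": 1}
    })

    # 2. Bulk-load the ~10K small documents.
    actions = ({"_index": new_index, "_type": "doc", "_id": d["id"], "_source": d}
               for d in docs)
    helpers.bulk(es, actions)

    # 3. Atomically repoint the stable alias, then drop the old index.
    alias = "customer_%s" % customer_id
    es.indices.update_aliases(body={"actions": [
        {"remove": {"index": old_index, "alias": alias}},
        {"add": {"index": new_index, "alias": alias}},
    ]})
    es.indices.delete(index=old_index)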

These times are not an issue in themselves - we do not need new data to be immediately available - but it appears that indices cannot be created/deleted in parallel, so we are limited to about 6 update jobs per minute. Are there any config changes we could make to increase this? I have tried reducing the write consistency to one and the refresh interval to 2 minutes (a sketch of those settings is after the cluster stats below). Here are some stats from our cluster:

"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 4968,
"active_shards" : 9936,

Thanks for any suggestions, and let me know if there is any other info I can provide.

Which version of Elasticsearch are you on?

Thanks -

{
  "name" : "Mary Walker",
  "cluster_name" : "some_name",
  "version" : {
    "number" : "2.3.3",
    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
    "build_timestamp" : "2016-05-17T15:40:04Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

Every time you add or delete an index you are modifying the cluster state, which with that number of indices and mappings can be quite large. Updating data in the existing indices would reduce the number of cluster state updates, although it would naturally not allow you to change mappings. Close to 10,000 shards on just 3 nodes also sounds like a lot, as having large numbers of very small indices and shards is inefficient and wastes resources.
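
If the mappings for a customer rarely change, something along these lines would let you push new data without touching the cluster state at all (a rough sketch with made-up names, using the Python client):

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["localhost:9200"])

def update_customer_docs(customer_id, docs):
    index = "customer_%s" % customer_id  # reuse the existing index
    # Index every document under a stable _id so the new version simply
    # overwrites the old one - no index creation, alias swap or deletion.
    actions = ({"_index": index, "_type": "doc", "_id": d["id"], "_source": d}
               for d in docs)
    helpers.bulk(es, actions)
    # Documents that no longer exist in the source data would still need to
    # be removed, e.g. by diffing the new IDs against a scan of the index.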