Elasticsearch duplicate reindex tasks

By default, `_reindex` pulls documents in scroll batches of 1000. You can change the batch size with the `size` field inside the `source` element of the request body.
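A minimal sketch of such a request body, using the index names from this thread (the batch size of 5000 is just an illustrative value, not a recommendation):

```python
import json

# Body you would POST to /_reindex (e.g. via curl or Kibana Dev Tools).
# "size" inside "source" is the scroll batch size (default 1000).
reindex_body = {
    "source": {
        "index": "logstash-2017.06.29",
        "size": 5000,  # larger scroll batches; illustrative value
    },
    "dest": {
        "index": "log-2017.06.29",
    },
}

print(json.dumps(reindex_body, indent=2))
```

Note that larger batches put more memory pressure on both source and destination, so this is a knob to tune, not simply to maximize.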

I notice the `bulk` figure under the `retries` tree in the task status. That is the number of retries attempted by the reindex: `bulk` counts the bulk actions that were retried (with `search` counting retried search/scroll requests).
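To make that concrete, here is a trimmed, illustrative task-status response (the numbers are made up, and a real `_tasks` response carries more fields) showing where `retries.bulk` sits:

```python
import json

# Trimmed, made-up example of a reindex task's status as returned by
# the task management API; only the fields discussed here are shown.
task_response = json.loads("""
{
  "completed": false,
  "task": {
    "status": {
      "total": 1000000,
      "created": 250000,
      "batches": 250,
      "retries": {"bulk": 3, "search": 0}
    }
  }
}
""")

retries = task_response["task"]["status"]["retries"]
print("bulk retries:", retries["bulk"])      # bulk actions retried
print("search retries:", retries["search"])  # search/scroll requests retried
```

A non-zero `bulk` count typically points at the destination rejecting bulk requests under load, which fits the resource questions below.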

From logstash-2017.06.29 to log-2017.06.29: what do the shard counts look like? And what resources does this single-node cluster have, given that it is reporting 100% CPU usage? This looks like push-back from a resource perspective.