BulkRequest causes hotspot

I use BulkRequest to index documents, but a hotspot appears: only one node has a high load average and busy I/O, while the other nodes are nearly idle. Do all the index requests in one BulkRequest end up in the same shard on a single node?

Maybe some details about your request will help. As far as I know, ES should route the documents in your request to different nodes via its routing method, so there should not be a hotspot. Also, are you trying to create a lot of indexes? Why? A lot of indexes means a lot of shards; is that really necessary?

I want to store logs in ES, about 100 thousand per second. I index them like below, using auto-generated IDs.
bulkRequestBuilder.add(client.prepareIndex(index_name, index_type).setSource(xbuilder));
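
For context, here is a minimal sketch of that bulk loop against the 2.x TransportClient API. The host name, index name, type name, and field names are placeholders I made up, not details from the original post:

import java.net.InetAddress;

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class BulkLogIndexer {
    public static void main(String[] args) throws Exception {
        // Connect to one node; the client forwards each bulk item to the node
        // holding the target shard. "es-host" and port 9300 are placeholders.
        Client client = TransportClient.builder().build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("es-host"), 9300));

        BulkRequestBuilder bulkRequestBuilder = client.prepareBulk();
        for (int i = 0; i < 1000; i++) {
            // Build one log document; the field names are illustrative only.
            XContentBuilder xbuilder = XContentFactory.jsonBuilder()
                    .startObject()
                    .field("@timestamp", System.currentTimeMillis())
                    .field("message", "log line " + i)
                    .endObject();
            // No explicit id and no custom routing, so ES derives the target
            // shard from the auto-generated id, which should spread documents
            // across all shards of the index.
            bulkRequestBuilder.add(
                    client.prepareIndex("logs-2016.01.01", "log").setSource(xbuilder));
        }

        BulkResponse response = bulkRequestBuilder.get();
        if (response.hasFailures()) {
            System.err.println(response.buildFailureMessage());
        }
        client.close();
    }
}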

How many nodes do you have in the cluster? Do you distribute bulk requests evenly across the cluster? What type of hardware is your cluster deployed on? Have you done any tuning? How many shards/replicas are you indexing into?

5 nodes; bulk requests are distributed evenly across the nodes. Hardware is 16 cores, 11 hard disks, and 50 GB of physical memory, of which 30 GB is for ES. 50 shards, 0 replicas, and refresh_interval is 10s.
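
For reference, those settings would normally be applied when the index is created, roughly like the sketch below. It reuses the client from the earlier sketch and a placeholder index name, and assumes the 2.x Java API:

import org.elasticsearch.common.settings.Settings;

// Create the index with the reported settings:
// 50 primary shards, no replicas, 10s refresh interval.
client.admin().indices().prepareCreate("logs-2016.01.01")
        .setSettings(Settings.settingsBuilder()
                .put("index.number_of_shards", 50)
                .put("index.number_of_replicas", 0)
                .put("index.refresh_interval", "10s"))
        .get();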

Are you using parent-child, nested documents or perhaps custom routing? Are the shards you are indexing into (if not all) evenly distributed across the nodes? Which version of Elasticsearch are you using?

The index doesn't use parent-child or custom routing. The shards are distributed evenly across the nodes. The Elasticsearch version is 2.1.1.

Using iostat to analyse the hotspot node's load: w_await, svctm, and %util are all much higher than on the other nodes, while wrqm/s, wkB/s, and avgrq-sz are lower. vmstat shows that bo on the hotspot node is lower than on the other nodes and buff is much higher.

I don't really know what to say other than "from here it looks like you are
doing it right" and "that isn't normal. load should be spread out".

Sometimes you can get hot spots when Elasticsearch decides to allocate lots
of the new indexes to one node. If that is your problem you can reach for
the total_shards_per_node setting. That will force Elasticsearch to allocate
evenly. If you set that setting too low, Elasticsearch will prefer to remain
yellow rather than allocate the shards, so be careful with it! Remove it from
indexes that you aren't currently writing to, etc. You can tell if this is
your problem by looking at the _cat/shards API and seeing if all the indexes
you are writing to are on the one node.
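
If _cat/shards does show the hot indexes piled onto one node, the per-index limit can be applied along these lines. This is a sketch against the 2.x Java API with a placeholder index name; the same setting can also be sent to the index settings endpoint over HTTP:

import org.elasticsearch.common.settings.Settings;

// Cap how many shards of this index may live on a single node so new shards
// cannot all be allocated to the same machine. With 50 shards on 5 nodes,
// a limit of 10 means exactly 10 shards per node.
client.admin().indices().prepareUpdateSettings("logs-2016.01.01")
        .setSettings(Settings.settingsBuilder()
                .put("index.routing.allocation.total_shards_per_node", 10))
        .get();

Keep the warning above in mind: if the limit is lower than the index actually needs, some shards will stay unassigned and the index will remain yellow.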

This is not an ES issue. Do the nodes run on different hardware, one node per server? Most likely one of your servers has a hardware defect, or a RAID array is rebuilding, or something similar.