Performance problems in Elasticsearch 2.3.2

I have migrated from ES 2.2.1 to 2.3.2 and I'm having some problems with performance.
I have one master node and five data nodes. Before, I indexed 25K-30K docs/sec; now I index about 5K.
I use Spark with es-hadoop to index the documents. I have tried with the 2.2.1 and 2.3.0 es-hadoop versions with the same performance. It's the same code, same Spark version, same machines.
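
For context, the job is the standard es-hadoop Spark writer, roughly like the sketch below; the hosts, input path, index name and batch setting are placeholders, not my real config:

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

# es-hadoop must be on the classpath, e.g.
#   spark-submit --jars elasticsearch-hadoop-2.3.0.jar index_job.py
sc = SparkContext(appName="es-indexing")
sqlContext = SQLContext(sc)

# Placeholder input path; in reality this is whatever the job reads.
df = sqlContext.read.json("hdfs:///data/events/")

# Write through the es-hadoop Spark SQL data source.
# es.batch.size.entries caps how many docs go into each bulk request.
df.write \
    .format("org.elasticsearch.spark.sql") \
    .option("es.nodes", "es-data-1") \
    .option("es.port", "9200") \
    .option("es.batch.size.entries", "1000") \
    .mode("append") \
    .save("events-2016.05/event")  # "index/type", ES 2.x style
```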

I also checked all the logs on the ES nodes and found no errors.

Are you monitoring everything with Marvel to see what the cluster is doing?

Yes, I have Marvel and Shield, and I updated those plugins as well.
The cluster is green, and CPU, memory and IO are all low, around 10-20%.
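
Since the metrics look idle while throughput is down, a hot-threads dump captured during an indexing run might show where the time is actually going. A minimal sketch (node address and credentials are placeholders):

```python
import requests

# Capture what the busiest threads are doing on each node while an
# indexing job is running; useful when CPU, memory and IO all look idle
# but throughput is down. With Shield enabled, add auth=("user", "pass").
resp = requests.get(
    "http://es-master-1:9200/_nodes/hot_threads",
    params={"threads": 5, "interval": "1s"},
)
print(resp.text)
```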

Since I couldn't figure out why the cluster wasn't working well, I removed all indices and restarted the cluster, but it fixed nothing.

I checked all the nodes with nmon, top and so on, and everything looks fine, the same as when I had ES 2.2.x.

We are experiencing the same, starting when we moved to 2.3.1.

Our production cluster has 1,254,024,220 docs spread over 17 indexes with 600+ shards (2 replicas); we index nearly 30 docs per second. The indexes are monthly, Logstash-derived, and hold a total of 2.45TB of data. We have 3 dedicated masters, 6 data nodes (16GB each) and 2 client nodes.

Performance of our own application is still OK, but Kibana has a hard time. When the current index is 'cold', querying the cluster with Kibana (with its default window of the latest 15 minutes) effectively breaks the cluster: it becomes unresponsive and indexing sometimes stops completely. After retrying for a while, Kibana starts working again, and then things are relatively OK (but only as long as we keep using it).
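
To look at the cold-index effect outside Kibana, we can reproduce roughly the query it issues for its default view, a search filtered to the last 15 minutes, and time it directly. A minimal sketch; host and index name are placeholders:

```python
import json
import requests

# Roughly the query Kibana's default view issues: a search over the
# time field, filtered to the last 15 minutes (ES 2.x bool/filter syntax).
query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": {
                "range": {"@timestamp": {"gte": "now-15m"}}
            }
        }
    },
}

resp = requests.post(
    "http://es-client-1:9200/logstash-2016.05/_search",
    data=json.dumps(query),
)
print("took %d ms" % resp.json()["took"])
```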

CPU doesn't appear to be the bottleneck; the biggest problem is the load. We see high IO wait, higher than before. We run our data instances on RAID 0 arrays of 16 disks of 160GB each; before moving to 2.3 we ran with 8 disks of 320GB.

We are in the process of reindexing older indexes to make use of doc_values; I don't know if that is a good idea considering the load we see. We are also in the process of upgrading our staging cluster (similar setup, much less data) to 2.3.2, with Marvel.
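
For the doc_values part, the reindex targets just need the flag set on the not_analyzed fields (doc_values is the default for such fields on indexes created in 2.x, but our older indexes predate that). A minimal sketch of creating a target index this way; the index, type and field names are made up:

```python
import json
import requests

# Create the reindex target with doc_values explicitly enabled on a
# not_analyzed string field (ES 2.x mapping syntax). Documents are then
# copied over from the old index.
mapping = {
    "mappings": {
        "event": {
            "properties": {
                "host": {
                    "type": "string",
                    "index": "not_analyzed",
                    "doc_values": True,
                }
            }
        }
    }
}

resp = requests.put(
    "http://es-client-1:9200/logstash-2016.01-v2",
    data=json.dumps(mapping),
)
print(resp.json())
```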

Regards,
jurg.

Looks like upgrading to Kibana 4.5.1 did the trick: the cluster stays healthy and Kibana is consistently performant.

Regards,
jurg.