Added Nodes - Indexing Rate Tanked

I added two data nodes to a cluster that already had three data nodes, three master nodes, and two client nodes, and at the same time upgraded the cluster from 5.2.2 to 5.3.2. Before the change, the indexing rate hovered around 5k events/sec with input from a single Logstash box. After the upgrade and additions it is down to around 1k events/sec.

I also noticed that, immediately after adding the two new nodes, shards rebalanced across the data nodes but no new indexing took place. That seems wrong, as if I missed some step needed to let the system ingest new data again.

What commands can I use to see what's going on and what's causing this performance hit?
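Not part of the original post, but here is a minimal sketch of the diagnostics I would start with, assuming a coordinating node reachable at localhost:9200, no auth, and the third-party `requests` package installed; adjust host and security settings for your cluster.

```python
# Hedged diagnostics sketch -- host and port are assumptions, not from the post.
import requests

ES = "http://localhost:9200"

def cat(path):
    """Fetch a _cat API as plain text and print it."""
    r = requests.get(f"{ES}{path}")
    r.raise_for_status()
    print(f"--- {path} ---\n{r.text}")

# Overall cluster state: status, relocating/initializing shards, pending tasks.
print(requests.get(f"{ES}/_cluster/health?pretty").text)

# Per-node thread pools: a growing "rejected" count on the bulk/index pools
# usually explains a drop in ingest rate.
cat("/_cat/thread_pool?v&h=node_name,name,active,queue,rejected")

# Where shards actually landed, and whether any are still relocating.
cat("/_cat/shards?v")

# Per-node pressure: heap, CPU, disk, and indexing/merge stats.
print(requests.get(f"{ES}/_nodes/stats/indices,thread_pool,os,fs?pretty").text[:2000])
```

If the two new nodes show high queue/rejected counts or much slower merge/indexing times than the original three, that points at the hardware/SAN difference rather than the 5.3.2 upgrade.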

I removed the two new nodes by excluding them from cluster routing allocation. The three original nodes are back up to their previous 4k-5k events/sec and things look normal again. No clue why yet. The two new nodes use different hardware and a different SAN than the original three, and that's about my only guess so far.
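For reference, the exclusion was done with the cluster settings API along these lines; the node names ("data-4", "data-5") and the host are placeholders, not the real values from this cluster.

```python
# Hypothetical sketch of excluding two nodes from shard allocation.
import requests

ES = "http://localhost:9200"

# Shards are moved off any node whose name matches this list.
body = {
    "transient": {
        "cluster.routing.allocation.exclude._name": "data-4,data-5"
    }
}
print(requests.put(f"{ES}/_cluster/settings", json=body).json())

# To bring the nodes back later, clear the setting by sending null.
requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"cluster.routing.allocation.exclude._name": None}},
)
```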

I can try adding one of them back in and see what happens, though I suspect the rate will tank again. It's interesting that Logstash backs off on processing events when the Elasticsearch cluster is struggling. That makes me think there has to be some kind of log message indicating what's going on, or maybe something on the coordinating client nodes?
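One hedged guess worth checking when a node is added back: Logstash typically backs off in response to bulk rejections (HTTP 429) from Elasticsearch, so the trail to follow is the bulk thread pool and hot threads on the suspect data nodes. The sketch below assumes a 5.x cluster (where the pool is named "bulk"; it is "write" in 6.x+) and a node at localhost:9200.

```python
# Compare bulk rejections across nodes and see what the busy threads are doing.
import requests

ES = "http://localhost:9200"

# Cumulative bulk rejections per node since startup; compare the two new
# nodes against the original three.
stats = requests.get(f"{ES}/_nodes/stats/thread_pool").json()
for node_id, node in stats["nodes"].items():
    bulk = node["thread_pool"].get("bulk", {})
    print(node["name"], "bulk rejected:", bulk.get("rejected"))

# What the busiest threads on each node are doing (merges, refresh, GC, ...).
print(requests.get(f"{ES}/_nodes/hot_threads").text[:3000])
```

Rejections that climb only on the new nodes, or hot threads stuck in merge/flush activity there, would line up with the slower SAN being the bottleneck.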
