How to decide on optimal settings for workers?

For the Logstash Elasticsearch output plugin, how do I determine the proper number of workers to use? If I play around with the settings, is there a way to know whether the performance is getting better (or worse)?

For the Logstash Elasticsearch output plugin, how do I determine the proper number of workers to use?

Measure the throughput for a typical workload. Marvel or another cluster monitoring tool should be helpful here.
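To make "measure the throughput" concrete: Elasticsearch exposes a cumulative `index_total` counter via its stats API (`GET /<index>/_stats/indexing`), so sampling it twice and dividing by the elapsed time gives an average indexing rate. A minimal sketch, with made-up sample numbers:

```python
# Sketch: compute indexing throughput from two samples of the
# "index_total" counter returned by Elasticsearch's stats API
# (GET /<index>/_stats/indexing). The counter values below are
# invented for illustration.

def docs_per_second(count_start, count_end, seconds_elapsed):
    """Average indexing rate between two counter samples."""
    if seconds_elapsed <= 0:
        raise ValueError("seconds_elapsed must be positive")
    return (count_end - count_start) / seconds_elapsed

# Example: 150,000 documents indexed over a 60-second window.
rate = docs_per_second(1_000_000, 1_150_000, 60)
print(rate)  # 2500.0 docs/sec
```

Run this before and after changing the worker count (with the same input workload) and compare the rates.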

If I play around the settings, is there a way to know whether the performance is getting better (or worse?)

If there was no way of knowing whether the performance was better or worse, would it even matter?

"Measure the throughput for a typical workload. Marvel or another cluster monitoring tool should be helpful here."
Can you elaborate a bit more on how to do that?

I'm not sure what to elaborate on; this should be pretty straightforward. Use e.g. Marvel to inspect the indexing throughput while you feed Logstash lots of data, and experiment with different configuration options.
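For reference, a sketch of what such an experiment might look like in the output section of a pipeline config. This assumes an older (1.x/2.x era) elasticsearch output plugin where `workers` and `flush_size` were per-output options; the host and values are placeholders to tune, not recommendations:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder; point at your cluster
    workers => 2                  # vary this while measuring throughput
    flush_size => 500             # bulk batch size; worth tuning alongside workers
  }
}
```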

Keep in mind that ES itself might be the bottleneck, so it's entirely possible that you'll get the same throughput regardless of the Logstash settings.

I see, thank you.
The reason I'm asking is that I just upgraded my ELK cluster to the latest version, and I noticed a few things:
1. There is some slowdown in indexing, i.e. data is queuing up in Redis, and I was wondering whether increasing workers can improve performance.
2. There are warnings "retrying failed action with response code: 429" on and off at the indexers, which I didn't observe previously.
Is this expected with the newer version, or must it be caused by other factors?
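As background on the 429s: that status code usually means Elasticsearch's bulk thread pool queue is full and it is rejecting requests, which you can confirm with `GET _cat/thread_pool/bulk?v&h=name,active,queue,rejected` (the pool is named `write` in newer versions). A small sketch that parses that `_cat` output to spot rejections; the sample text is invented for illustration:

```python
# Sketch: parse the tabular output of
#   GET _cat/thread_pool/bulk?v&h=name,active,queue,rejected
# to check for bulk rejections, the usual cause of 429 responses.
# The sample below is made up for illustration.

def parse_thread_pool(cat_output):
    """Return one dict per data row of whitespace-separated _cat output."""
    lines = cat_output.strip().splitlines()
    headers = lines[0].split()
    return [dict(zip(headers, row.split())) for row in lines[1:]]

sample = """\
name active queue rejected
bulk 4      12    37
"""

rows = parse_thread_pool(sample)
print(rows[0]["rejected"])  # "37" -- a growing rejected count means ES is pushing back
```

If `rejected` keeps climbing, the cluster itself is the bottleneck, and adding more Logstash workers will likely just produce more 429 retries rather than more throughput.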