Increased write rejection count since Elasticsearch 8 upgrade

Hello!
We recently upgraded one of our Elasticsearch 7 environments to Elasticsearch 8.9.1 and are now seeing many write rejections. We do not see any increase in CPU or memory usage; if anything, both have decreased, and the ingest rate has even gone up. So across the board the metrics have improved.

However, we now see the following exception more often than usual.

EsRejectedExecutionException: rejected execution of TimedRunnable{original=org.elasticsearch.action.bulk.TransportBulkAction$1/org.elasticsearch.action.ActionListenerImplementations$RunBeforeActionListener/ChannelActionListener{TaskTransportChannel{task=1706566404}{TcpTransportChannel{req=8299798}{indices:data/write/bulk}{Netty4TcpChannel{localAddress=/....90:9300, remoteAddress=/...193:50286, profile=default}}}}/org.elasticsearch.action.bulk.TransportBulkAction$$Lambda$6716/0x0000000801103888@6a4f70b8, creationTimeNanos=19530225712825220, startTimeNanos=0, finishTimeNanos=-1, failedOrRejected=false} on TaskExecutionTimeTrackingEsThreadPoolExecutor[name = i-0c020a2ce841ccb57/write, queue capacity = 500, task execution EWMA = 154.2ms, total task execution time = 25.2d, org.elasticsearch.common.util.concurrent.TaskExecutionTimeTrackingEsThreadPoolExecutor@3500e9e7[Running, pool size = 8, active threads = 8, queued tasks = 518, completed tasks = 492410005]]
	at org.elasticsearch.common.util.concurrent.EsRejectedExecutionHandler.newRejectedException(EsRejectedExecutionHandler.java:40)
	at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:34)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365)
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:72)
	at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:214)
	at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:91)
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:86)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
	at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:71)
	at org.elasticsearch.action.support.HandledTransportAction$TransportHandler.messageReceived(HandledTransportAction.java:67)
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:74)
	at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:302)
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:116)
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:95)
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:821)
	at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:150)
	at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:121)
	at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:86)
	at org.elasticsearch.transport.netty4.Netty4MessageInboundHandler.channelRead(Netty4MessageInboundHandler.java:63)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(Thread.java:840)
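For context, the executor state in the exception shows the write pool on that node fully saturated at the moment of the rejection: 8 active threads on a pool of size 8, with 518 queued tasks against a queue capacity of 500. To compare the configured thread count and queue limit with the live queue depth and rejection counts across nodes, a cat thread pool request along these lines should work (just a sketch with extra columns, not output from our cluster):

GET /_cat/thread_pool/write?v=true&h=node_name,name,size,queue_size,active,queue,rejected,completed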

The total task execution time has been increasing day by day since the upgrade; in the exception above it is already at 25.2d.
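If we read the executor dump correctly, total task execution time is cumulative over the lifetime of the node process, so we assume it mainly has to be put in relation to node uptime rather than read as a symptom on its own; something like this could be used for that comparison (request sketch):

GET /_cat/nodes?v=true&h=name,uptime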

When calling
GET /_cat/thread_pool?v=true&h=id,name,active,rejected,completed
we get this output (example from one node):

id name active rejected completed
KNTP1f_IQFCL5vtUAWvcXQ write 1 2914 736753922
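The rejected value here is a counter that accumulates since the node was started. To watch how it develops over time, for example from a small monitoring script, the nodes stats API could be polled periodically (request sketch):

GET /_nodes/stats/thread_pool?filter_path=nodes.*.name,nodes.*.thread_pool.write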

We deployed Elasticsearch 8 on the 24th of April. Our monitoring graphs show the decrease in CPU and indexing time alongside the newly observed write rejections.

How can we resolve or prevent these rejections?
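For reference, the only directly related setting we are aware of is the static per-node write queue size in elasticsearch.yml; a sketch of what raising it would look like (the value is an arbitrary example, not something we have applied or are recommending):

thread_pool.write.queue_size: 1000   # example value only, not applied in our cluster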

We can provide you with additional logs and a support diagnostics archive if needed.

Best
Jürgen