Typically, queues like these are sized dynamically from the CPU core count, so yeah, adding cores helps across the board.
There's a limit on them to stop a node becoming overwhelmed by one specific request type (index, search, ingest, etc.) at the expense of everything else. The limit is partly about memory and partly about CPU: if the bulk queue were unbounded, for example, a single huge burst of requests could cause an OOM. These pools are built in, so you don't need to create them yourself.
That said, your ingest process (i.e. your code) should expect these rejection responses and retry when needed. Returning a 429 when a queue fills up is standard behaviour from Elasticsearch when it hits these limits.
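As a rough sketch of what "retry when needed" can look like, here's a minimal retry-with-backoff loop. The exception class and the `fake_bulk` client below are stand-ins for illustration, not the real elasticsearch-py API (the official Python client's bulk helpers also offer their own retry options):

```python
import time

class RejectedExecutionError(Exception):
    """Stand-in for a 429 rejection when a thread-pool queue is full."""

def send_with_retry(send, max_retries=5, base_delay=0.1):
    """Call send(), retrying with exponential backoff whenever the
    cluster rejects the request because its queue is full."""
    for attempt in range(max_retries + 1):
        try:
            return send()
        except RejectedExecutionError:
            if attempt == max_retries:
                raise  # give up after the final retry
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical client: rejects the first two calls, then succeeds.
calls = {"n": 0}
def fake_bulk():
    calls["n"] += 1
    if calls["n"] <= 2:
        raise RejectedExecutionError("bulk queue full")
    return {"errors": False}

result = send_with_retry(fake_bulk, base_delay=0.001)
print(result)  # → {'errors': False}, after two retries
```

The key point is that a rejection is a normal backpressure signal, not a fatal error, so backing off and resending is usually the right response.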