Both these plugins require that all related events pass through a single thread so that they are processed in order (i.e. `pipeline.workers` set to 1), and as a side effect they tend to scale and perform poorly.
As far as I can tell, you need to process all data in a single thread only up until the point where you have extracted the identifier these filters key on into a separate field. From that point on you just need to make sure that all events with the same identifier get processed by the same thread.
At that point you could calculate a murmur hash of the identifier (the fingerprint filter supports a MURMUR3 method) and route each event to one of a fixed number of downstream pipelines (hash % number of pipelines) using the new pipeline-to-pipeline communication. Depending on where the bulk of your processing takes place this may or may not make a big difference.
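Roughly like this — a minimal sketch, assuming four worker pipelines, a correlation field called `transaction_id`, and pipeline addresses `agg0`..`agg3` (the field name, partition count, and addresses are all made up for illustration):

```
# --- distributor pipeline: hash the identifier and fan out ---
filter {
  # assumes the correlation identifier has already been parsed into [transaction_id]
  fingerprint {
    source => "transaction_id"
    method => "MURMUR3"              # produces an integer hash
    target => "[@metadata][hash]"
  }
  ruby {
    # pick one of 4 partitions; keep this in sync with the number of worker pipelines
    code => "event.set('[@metadata][partition]', event.get('[@metadata][hash]') % 4)"
  }
}
output {
  if [@metadata][partition] == 0 {
    pipeline { send_to => ["agg0"] }
  } else if [@metadata][partition] == 1 {
    pipeline { send_to => ["agg1"] }
  } else if [@metadata][partition] == 2 {
    pipeline { send_to => ["agg2"] }
  } else {
    pipeline { send_to => ["agg3"] }
  }
}

# --- one of the worker pipelines (run with pipeline.workers: 1 in pipelines.yml) ---
input {
  pipeline { address => "agg0" }
}
filter {
  aggregate {
    task_id => "%{transaction_id}"
    # ... your existing aggregate/elapsed logic ...
  }
}
```

The point of the modulo is that all events with the same identifier always land in the same single-worker pipeline, so the filter still sees them in order, while events with different identifiers can be processed in parallel across the worker pipelines.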
The other option is to implement this matching as a batch job that runs periodically against the raw data that has been inserted into Elasticsearch. This requires more work and is not real time, but it should scale better since it does not throttle the flow of data through Logstash.