In tail mode I can see why you would want to tail a large number of files, but in read mode Logstash is going to open a file, read the whole file, and run the whole file through the pipeline. That will basically keep one CPU busy, so I would not set the open-file limit much above the number of CPUs you have. If you increase it beyond that, if anything I would expect throughput to get worse.
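As a sketch of what I mean (the paths here are placeholders, and the exact numbers depend on your machine), a read-mode file input with a modest `max_open_files` might look like this:

```
input {
  file {
    path => "/var/log/batch/*.log"        # hypothetical path
    mode => "read"
    max_open_files => 8                   # keep this near your CPU count
    file_completed_action => "log"        # record finished files instead of deleting them
    file_completed_log_path => "/var/log/batch-completed.log"
  }
}
```

`max_open_files` caps how many files the input holds open at once; in read mode there is little point making it larger than the number of files the pipeline can actually chew through in parallel.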
Are you deleting files and then adding more files? That can result in inode reuse, and since the sincedb tracks files by inode, a new file that lands on a reused inode can be skipped as if it had already been read.
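If that is what is happening, one thing to try (again a sketch, with a placeholder path) is to give the input an explicit sincedb and let stale entries expire faster with `sincedb_clean_after`, so old inode records for deleted files age out before the inode comes back:

```
input {
  file {
    path => "/data/incoming/*.csv"                  # hypothetical path
    mode => "read"
    sincedb_path => "/var/lib/logstash/sincedb-read"
    sincedb_clean_after => 2                        # expire sincedb entries after 2 days (default is 2 weeks)
  }
}
```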