If you do not use bulk inserts and require each indexed document to be searchable immediately, indexing becomes very inefficient. This goes against most of the recommendations in this guide around optimizing indexing speed: it results in a lot of small segments being generated, which then need to be merged, putting significant load on the cluster and causing a lot of disk I/O. Indexing 9k documents of 1kB each is achievable in itself, but it may require more cluster resources and very fast storage.
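To illustrate the difference, here is a minimal sketch of the bulk-oriented approach: batching documents into a single `_bulk` request body and relaxing `index.refresh_interval` instead of refreshing after every document. The index name `events`, the sample documents, and the 30s interval are just placeholder examples, not values from your setup.

```python
import json

def build_bulk_payload(index, docs):
    """Build an NDJSON body for the Elasticsearch _bulk API:
    one action line followed by one source line per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # a _bulk body must end with a newline

# Relaxing the refresh interval trades immediate searchability for
# fewer, larger segments (the default is 1s; "-1" disables refresh
# entirely until you trigger one explicitly).
relaxed_settings = {"index": {"refresh_interval": "30s"}}

docs = [{"id": i, "msg": "event"} for i in range(3)]
payload = build_bulk_payload("events", docs)
```

Sending one request of a few thousand documents this way generates far fewer segments than a few thousand single-document index requests, each followed by a refresh.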
You also mention quite a high search rate. To maximize the search rate the cluster can support, you ideally want immutable data that is fully held in the operating system's file cache. You state that you are likely to have a very large amount of data, and if it does not fit in the cache, queries will generate a lot of disk I/O even with no indexing going on, leading to longer latencies and reduced query throughput.
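A quick back-of-the-envelope calculation shows how fast that data outgrows the page cache. This sketch assumes the 9k documents of 1kB each represent a sustained per-second rate, which is my reading of the numbers above; adjust it to your actual rate and node RAM.

```python
# Assumed sustained ingest rate (9k docs/s of 1 kB each -- an
# assumption based on the figures quoted above).
docs_per_second = 9_000
doc_size_bytes = 1_000

# Raw data volume per day, before replication and index overhead.
bytes_per_day = docs_per_second * doc_size_bytes * 86_400
gb_per_day = bytes_per_day / 1e9  # roughly 777.6 GB per day

print(f"~{gb_per_day:.1f} GB/day of raw data")
```

Even a single day of ingest at that rate dwarfs the RAM of a typical node, so after a short retention period most queries would be hitting disk rather than the cache.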
If you put these two together, you see that indexing continuously adds new data, which churns the page cache and makes it less effective for search. I therefore do not think Elasticsearch is suitable for this use case (unless you relax the requirements), and if you were to try to make it work, you would need a lot of hardware.