Need help setting up a production environment

I am new to Elasticsearch. I am doing a POC on setting up a production environment and need help with it.

  1. What production parameters do we need to consider when setting up the environment?

  2. What watermarks need to be set for a production-ready environment?

There are two kinds of processes: a live server, which must be optimized for latency (e.g. responses in 20 to 40 milliseconds), and a batch process server, which must be optimized for throughput (e.g. one server serving 200 transactions per hour).

The live server will have 8 dedicated nodes; the batch process will have 12 servers.

How do we distribute requests between the live servers and the batch nodes so that live performance is not compromised while a batch run is in progress? And how do we scale the application up without compromising performance? (One cluster-side approach is sketched at the end of this post.)

Live server: 250K transactions/hour on a single server (we have 8 online servers)
Batch process: 1M transactions/hour on a single server (we have 8 batch servers)

What requirements are needed for the above scenario when setting up the production environment?
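For illustration, one common cluster-side pattern for this kind of split (a sketch only, not something prescribed in this thread): run a single cluster, tag the two groups of nodes with a custom attribute, and pin the bulk-loaded indices to the batch nodes with allocation filtering, so heavy indexing never lands on the live-search nodes. This assumes the batch writes go to their own indices; the Python elasticsearch client, the 5.x+ `node.attr.*` syntax, and the index and host names below are all assumptions for the sketch:

```python
# In elasticsearch.yml (attribute syntax varies by Elasticsearch version):
#   node.attr.workload: live    # on the 8 online nodes
#   node.attr.workload: batch   # on the 12 batch nodes
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # hypothetical host

# Pin a bulk-loaded index to the batch-tagged nodes only.
es.indices.put_settings(
    index="bulk_load_index",  # hypothetical index name
    body={"index.routing.allocation.require.workload": "batch"},
)
```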

What production parameters do we need to consider when setting up the environment?

Have you consulted the available documentation?

What watermarks need to be set for a production-ready environment?

Are you talking about the disk watermark settings?

What requirements are needed for the above scenario when setting up the production environment?

It's not that simple. What might require tuning depends on hardware configuration, the kind of documents, the type of queries, ...

What watermarks need to be set for a production-ready environment?
Are you talking about the disk watermark settings? Yes.

We have 300 million records in total. Can you give an approximate cluster and node setup, i.e. how many nodes are required?

Elasticsearch comes with very good default values, so stick with these until you have gained more familiarity with Elasticsearch. As Magnus said, the required size and configuration of a cluster depend on the hardware it is deployed on, the type of data, and the indexing and query patterns. This talk about Quantitative Cluster Sizing will give you an idea of how to determine this through benchmarks.
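To make "determine this through benchmarks" concrete, here is a minimal measurement sketch, assuming the Python elasticsearch client (7.x-style API); the index name and document shape are hypothetical, so substitute your real documents and real queries:

```python
import time
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])  # hypothetical host
INDEX = "sizing_test"  # hypothetical index name

# Bulk-index a representative sample of documents.
actions = (
    {"_index": INDEX, "_source": {"user": f"u{i}", "amount": i % 100}}
    for i in range(100_000)
)
helpers.bulk(es, actions)
es.indices.refresh(index=INDEX)

# Time a representative query many times and look at the percentiles.
latencies = []
for _ in range(200):
    start = time.perf_counter()
    es.search(index=INDEX, body={"query": {"term": {"user": "u42"}}})
    latencies.append((time.perf_counter() - start) * 1000)

latencies.sort()
print(f"p50={latencies[99]:.1f} ms  p99={latencies[197]:.1f} ms")
```

Repeat with larger samples and more nodes until the latency and throughput targets (20-40 ms searches, the stated hourly volumes) hold.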

The live server here handles online application search requests, and the batch process handles bulk requests; the requests arrive in file format. Below is the sample setup we have for testing, for which we need a suggestion for production:

File System
  Store Size: 85.0 GB / 85.4 GB
  Documents: 336,954,885 / 336,954,885
Index Activity
  Search - Query: 14.83 ms / 14 ms
  Search - Fetch: 1.48 ms / 1.51 ms
  Refresh: 2.14 ms / 41.04 ms
Memory
  Heap Size: 2.6 GB / 2.8 GB
  % Heap Used: 44.2% / 61.2%

We are using a Linux machine setup.

Why do you think you need to modify the disk watermark settings?
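For context: the disk watermarks only control when Elasticsearch stops allocating new shards to a node that is filling up (the low watermark, default 85% disk used) and when it starts relocating shards away (the high watermark, default 90%), so they rarely need changing. If they did, here is a sketch of overriding them via the cluster settings API, assuming the Python elasticsearch client and a hypothetical host; the values shown are just the documented defaults:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # hypothetical host

# Values shown are the documented defaults, for illustration only.
es.cluster.put_settings(
    body={
        "transient": {
            "cluster.routing.allocation.disk.watermark.low": "85%",
            "cluster.routing.allocation.disk.watermark.high": "90%",
        }
    }
)
```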

We have 300 million records in total. Can you give an approximate cluster and node setup, i.e. how many nodes are required?

I'd expect a single machine to be able to cope with 300M documents, but as I said, it depends on the hardware configuration, the kind of documents, the type of queries, ...

How do we handle the data traffic and evenly distribute the requests without a performance impact?
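One client-side approach that complements the allocation filtering sketched earlier (again a sketch only, with hypothetical host names): give the live application and the batch loader separate connection pools over the same cluster, so bulk payloads are only ever sent to the batch nodes and search requests only to the online nodes. Note that searches still fan out to whichever data nodes hold the shards; the pools only control which nodes receive and coordinate each request:

```python
from elasticsearch import Elasticsearch, helpers

# Hypothetical host names; the client round-robins across each pool.
live_es = Elasticsearch([f"http://live-node-{i}:9200" for i in range(1, 9)])
batch_es = Elasticsearch([f"http://batch-node-{i}:9200" for i in range(1, 13)])

# Live path: low-latency searches go only to the online nodes.
live_es.search(index="app", body={"query": {"match_all": {}}}, size=10)

# Batch path: bulk indexing goes only to the batch nodes.
docs = ({"_index": "app", "_source": {"n": i}} for i in range(1000))
helpers.bulk(batch_es, docs)
```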