Elasticsearch cluster best practices - memory utilization

Hi Team,
I set up an ELK environment, and next week we are moving it to production. However, I am worried about memory utilization on the Elasticsearch nodes. Please suggest how I can improve ES performance and memory usage, and what best practices I should follow before going to production. I have read the online documentation, but I am looking for more specific guidance.

I tried tuning fielddata, but when I reduce it, the dashboard no longer shows data for all instances. I want to keep at least 3 weeks of data.

indices.fielddata.cache.size: 40%
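For context, this is roughly where that line sits in my elasticsearch.yml. The circuit-breaker line below is a setting I am only considering, not something I have applied yet:

```yaml
# Currently applied: cap the fielddata cache at 40% of heap,
# evicting older entries once the limit is reached.
indices.fielddata.cache.size: 40%

# Under consideration (not applied): the fielddata circuit breaker,
# which rejects requests that would push fielddata past this share
# of the heap instead of evicting.
# indices.breaker.fielddata.limit: 60%
```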

Should I use 4 high-memory instances or 8 low-memory instances? Which will give better ES performance?

Please find below the details of my setup:

  1. 6 Elasticsearch nodes: 1 master, 1 client, and 4 data nodes
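In case it helps, this is roughly how the node roles are split in each node's elasticsearch.yml (the exact values per node are from memory, so treat this as a sketch):

```yaml
# Master node: coordinates the cluster, holds no data.
node.master: true
node.data: false

# Client node: handles search/aggregation requests from Kibana,
# neither master-eligible nor a data holder.
# node.master: false
# node.data: false

# Data nodes (x4): hold the index shards.
# node.master: false
# node.data: true
```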

  2. I have currently set up Beats on about 15 instances; they are sending data to ES, along with some log files.

  3. An ES memory utilization graph and a Marvel snapshot are attached.

Please let me know if you need more details; most properties are at their defaults.

Thanks & Regards