Multiple issues with ELK

Hi Everyone-

OK, I will try to be as clear as possible with this.

I have 3 VMs with CentOS 7, set up like this (one Elasticsearch cluster of 3 nodes: 2 master-eligible/data nodes and 1 client node):

  1. Elasticsearch (12 GB of RAM, 3 CPUs - 2 cores per socket -, 500 GB HDD)
  2. Elasticsearch (12 GB of RAM, 3 CPUs - 2 cores per socket -, 500 GB HDD)
  3. Kibana - nginx reverse proxy -, Logstash, Elasticsearch client node (16 GB of RAM, 2 CPUs - 1 core per socket -, 500 GB HDD, 154 GB HDD)

The network adapters are 1 Gb, full duplex, autonegotiation.

All the Elasticsearch cluster machines are configured with the following (see the sketch after this list for where each setting lives):

MAX_LOCKED_MEMORY=unlimited
LimitMEMLOCK=infinity
bootstrap.memory_lock: true
ES_HEAP_SIZE=8g (6g on the client node)
network.host: [eno16777984, local]
discovery.zen.ping.unicast.hosts: ["node 1 address", "node 2 address", "client node address"]
Shards 5*2
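
For clarity, here is roughly where each of those settings lives on the machines. The file paths assume the CentOS 7 RPM install, and the addresses are placeholders rather than my real values:

# /etc/elasticsearch/elasticsearch.yml (data nodes)
bootstrap.memory_lock: true
network.host: [eno16777984, local]
discovery.zen.ping.unicast.hosts: ["node1.example", "node2.example", "client.example"]

# /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=8g              # 6g on the client node
MAX_LOCKED_MEMORY=unlimited

# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity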

I have Kopf installed to check the cluster health, so when I send queries (1 day of log data) this is what I can observe:

Heap: 60%
CPU: 90% (red state)
Disk: 75% free space
Load: red

I tried running Elasticsearch with 3 CPUs (3 cores per socket) and got the same result.

In Kibana I always get this:

Error: Bad Gateway
at respond (http:///bundles/kibana.bundle.js?v=10146:78604:16)
at checkRespForFailure (http:///bundles/kibana.bundle.js?v=10146:78567:8)
at http:///bundles/kibana.bundle.js?v=10146:77185:8
at processQueue (http:///bundles/commons.bundle.js?v=10146:42452:29)
at http:///bundles/commons.bundle.js?v=10146:42468:28
at Scope.$eval (http:///bundles/commons.bundle.js?v=10146:43696:29)
at Scope.$digest (http:///bundles/commons.bundle.js?v=10146:43507:32)
at Scope.$apply (http:///bundles/commons.bundle.js?v=10146:43804:25)
at done (http:///bundles/commons.bundle.js?v=10146:38253:48)
at completeRequest (http:///bundles/commons.bundle.js?v=10146:38451:8)

I checked the logs and they just showed an issue with .babelcache.json, which I fixed with chmod o+w /opt/kibana/optimize/.babelcache.json.

The logs also show: Request Timeout after 30000ms

These are the settings I modified (trying different values) after I saw a couple of people with the same issue who got it fixed; a sketch of where they live follows the list:

threadpool.search.queue_size:
elasticsearch.requestTimeout:
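
To be explicit about where I changed them: the first is an Elasticsearch setting and the second a Kibana setting. The values below are only examples, not necessarily what I ended up with, and the kibana.yml path assumes the /opt/kibana install:

# /etc/elasticsearch/elasticsearch.yml
threadpool.search.queue_size: 2000        # example value; default is 1000

# /opt/kibana/config/kibana.yml
elasticsearch.requestTimeout: 120000      # example value in ms; default is 30000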

Sometimes rebooting the entire cluster makes everything better and smoother, but only for a couple of minutes.

I am a complete noob in this universe of Elastic products, so please bear with me!

Thank you for your patience and time.

Is the problem you're trying to solve the Error: Bad Gateway?

Is your nginx proxy between Kibana and Elasticsearch, or in front of Kibana? What is its purpose in your case?
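
(For reference, "in front of Kibana" usually means something like this minimal nginx sketch, where the server name is a placeholder and Kibana listens on its default port 5601:)

server {
    listen 80;
    server_name kibana.example.com;          # placeholder

    location / {
        proxy_pass http://127.0.0.1:5601;    # Kibana's default port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}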

Regards,
Lee