My setup is two Kibana 4 nodes on separate machines sharing one A record. The idea is that this allows for some redundancy if one node goes down or I have to perform updates.
The Kibana config's elasticsearch_url: points to a domain (again, another A record with two IP addresses associated with it). Those machines run Elasticsearch but are configured as client nodes with data: false and master: false. Behind them I have 10 ES nodes ingesting about 350 GB of data per day from a distributed sensor network.
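For reference, the relevant pieces of my config look roughly like this (the hostname is a placeholder, not my real one):

```yaml
# kibana.yml on each Kibana 4 node
elasticsearch_url: "http://es-client.example.com:9200"
```

```yaml
# elasticsearch.yml on each client node
# holds no data, never becomes master -- just routes searches
node.data: false
node.master: false
```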
My issue is that I am getting a lot of timeout errors from Kibana 4. The error on the status page says plugin:elasticsearch times out. My thinking was that with the Kibana nodes pointing at the client nodes, the client nodes would act as load balancers and would also cache searches for other users.
Is this not the optimal way to do things? I do not want to run Elasticsearch on my Kibana nodes because I want my Elasticsearch cluster to be completely separate from the analysts. Not to mention they are in physically separate data centers.
Is there a better way to set up Kibana to be more resilient and not time out? I know the docs say to run Elasticsearch on your Kibana nodes, but as stated, I don't want to do that. I'd rather have separate load-balancing client nodes that run the searches against the cluster and return the results to Kibana.
I can spin up more load balancers, but I'm not sure that is where the issue is.
As it stands, my client nodes have 32 GB of RAM with 24 GB set aside for Elasticsearch. My Kibana nodes also have 32 GB of RAM each, but the Kibana status page shows only a 1.5 GB heap. Is there a way to give Kibana more heap, and would that speed things up?
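For completeness, this is how the heap is set on the client nodes today, plus the only Kibana-side knob I've been able to find. I'm not sure Kibana 4's startup script exposes it, so treat that part as a guess on my end:

```shell
# Client nodes: Elasticsearch 1.x heap via the standard environment variable
export ES_HEAP_SIZE=24g

# Kibana 4 runs on Node.js, and V8's default old-space limit on 64-bit
# is roughly 1.5 GB -- which matches what the status page reports.
# Raising it would presumably mean passing a V8 flag to the bundled
# node binary, something like:
#   node --max-old-space-size=4096 src/cli
```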