Kibana search multiple elasticsearch instances

I currently have the following environment setup with ELK.

2 elasticsearch servers
2 logstash servers
2 kibana servers

Logstash 1 sends to Elasticsearch 1, and Kibana 1 points at the Elasticsearch 1 server.
I have the same setup on my second set of ELK servers.

How do I get them all clustered and talking to each other, so that both Kibana 1 and Kibana 2 can search the whole Elasticsearch cluster?

You could use a tribe node to search both clusters.
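For reference, a tribe node is configured in its own elasticsearch.yml with one entry per cluster it should join. A minimal sketch, assuming two clusters named `elk1` and `elk2` and hostnames that are placeholders, not taken from this thread:

```yaml
# elasticsearch.yml on the tribe node (pre-5.x Elasticsearch).
# Cluster names and hostnames below are examples -- substitute your own.
tribe:
  t1:
    cluster.name: elk1
    discovery.zen.ping.unicast.hosts: ["es1.internal"]
  t2:
    cluster.name: elk2
    discovery.zen.ping.unicast.hosts: ["es2.internal"]
```

The tribe node then exposes a merged read-only view of both clusters, and a Kibana instance can point at it.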

I think I understand... my wording was poor (obviously), but I want Elasticsearch 1 and 2 clustered together, and the same for the Logstash servers.

Do I need redis in front of my logstash servers?

I mainly don't know what the next step is to get them all working together...

Are the ES servers in the same place?

Same network, each on its own server.

Virtual Ubuntu 14.04 servers.

Then you can cluster those together; take a look at https://www.elastic.co/guide/en/elasticsearch/guide/current/_add_failover.html
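Clustering two nodes mostly comes down to giving them the same cluster name and telling each node how to find the other. A minimal sketch, assuming a 1.x/2.x-era Elasticsearch (to match Ubuntu 14.04) and example hostnames:

```yaml
# elasticsearch.yml -- identical on both nodes except node.name.
# cluster.name must match on every node that should join the cluster;
# hostnames are placeholders.
cluster.name: my-elk-cluster
node.name: es-node-1            # use es-node-2 on the other server
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["es1.internal", "es2.internal"]
```

After restarting both nodes, `curl http://es1.internal:9200/_cluster/health?pretty` should report `"number_of_nodes" : 2`.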

What about Kibana? I have two Kibana servers and want to use both of them to search the Elasticsearch cluster.

Why do you need both if you have a single cluster?

Just to confirm, you have one Elasticsearch cluster consisting of two nodes and you want to set up two Kibana instances to talk to them. In this case, you don't need a tribe node -- it's only useful to search across two separate clusters.

I can imagine two scenarios for why you might want to do this: (1) high availability; (2) separate Kibana instances for different departments.

In either case, you'll want to set up an HA proxy or a load balancer that spreads requests across both Elasticsearch nodes, and configure both Kibana instances to talk to that proxy or LB. In scenario (1), ensure that both Kibana instances are configured to use the same internal .kibana index; in scenario (2), ensure they have different internal indexes, e.g. .kibana-team1 and .kibana-team2.
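The two settings involved live in kibana.yml. A sketch, assuming a Kibana 4-era config (where the keys are `elasticsearch_url` and `kibana_index`; newer versions spell them `elasticsearch.url` and `kibana.index`) and a placeholder hostname for the load balancer:

```yaml
# kibana.yml on each Kibana server -- hostnames are examples.
# Point Kibana at the HA proxy / LB, not at a single ES node:
elasticsearch_url: "http://es-lb.internal:9200"

# Scenario (1), high availability: same index on both instances.
kibana_index: ".kibana"

# Scenario (2), per-team instances: give each its own index instead, e.g.
# kibana_index: ".kibana-team1"   # on the first Kibana server
# kibana_index: ".kibana-team2"   # on the second Kibana server
```

With a shared index, saved searches and dashboards created on one Kibana instance show up on the other; with separate indexes, each team keeps its own.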

I will eventually have more but just getting ready to expand. But correct, I don't need it now.