Hi,
I have two Elasticsearch clusters, each in a different datacenter, and we maintain them in Kubernetes pods. I am trying to implement cross-cluster search between the two clusters. When I provide the node information of one DC to the other DC, it fails while updating the seeds, and the connection between the two clusters is never established. Our cloud environment gives each pod two IPs: a pod IP and an external IP. Do I need to publish a WAN address from Elasticsearch? If so, what would the Elasticsearch configuration for that look like? Please give me some suggestions on this. My elasticsearch.yml configuration is below, followed by sketches of what I was thinking of adding. Thanks.
```yaml
network.host: 0.0.0.0
cluster.name: graylog
transport.tcp.port: 9300
http.port: 9200
#discovery.zen.ping.multicast.enabled: false
# [loopback, service_name]
discovery.zen.ping.unicast.hosts: ["127.0.0.1" {{ range service "service.name" }}, "{{ .Address }}"{{ end }}]
#discovery.zen.ping.multicast.enabled: true
#The mlockall property in ES allows the ES node not to swap its memory. mlockall is set to false by default, meaning that the ES node will allow swapping.
#bootstrap.mlockall: true
bootstrap.memory_lock: false
# When running on fast I/O like SSDs or a SAN, it is recommended to increase indices.store.throttle.max_bytes_per_sec to 150mb.
indices.store.throttle.max_bytes_per_sec: 150mb
# Set to at least NODES/2+1 on clusters with NODES > 2, where NODES is the number of master-eligible nodes in the cluster.
# This prevents a split-brain scenario.
discovery.zen.minimum_master_nodes: 2
node.data: true
node.master: true
## Adding a remote cluster
search.remote.graylog.seeds: ["nodename1:9300","nodename2:9300","nodename3:9300"]
search.remote.connections_per_cluster: 10
search.remote.connect: true
search.remote.initial_connect_timeout: "30s"
# Adding a snapshot repository
path.repo: ["/backup"]
```
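This is a rough sketch of what I was thinking of adding so that each node advertises its externally reachable address on the transport layer. I am not sure it is correct, and `<EXTERNAL_IP>` is just a placeholder for whatever external IP our cloud environment assigns to the pod:

```yaml
# Sketch only: keep binding to all interfaces inside the pod, but publish
# the external address so nodes in the other datacenter can reach this node.
# <EXTERNAL_IP> is a placeholder, and I'm assuming port 9300 is open across the WAN.
network.bind_host: 0.0.0.0
network.publish_host: <EXTERNAL_IP>
transport.publish_port: 9300
```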
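And in the other datacenter I was planning to point the remote-cluster seeds at those published external addresses instead of the pod-internal node names. Again the addresses below are placeholders, assuming 9300 is reachable between the datacenters:

```yaml
# Sketch only: seed the remote "graylog" cluster with the addresses published above.
search.remote.graylog.seeds: ["<EXTERNAL_IP_NODE1>:9300", "<EXTERNAL_IP_NODE2>:9300", "<EXTERNAL_IP_NODE3>:9300"]
search.remote.connect: true
```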