I have a GKE cluster created with Terraform. I then use YAML manifests to install Elasticsearch (8.6.0, via the ECK operator) and Kibana. As part of my Terraform IaC I reserve two static IPs and create two `google_dns_record_set` resources (es_dns and kb_dns), which sit on our company's shared VPC subnet. My thought is to have the DNS records and static IPs in place, then stand up two internal load balancers (one for Kibana and one for ES) with my static IPs set as `loadBalancerIP`. In my mind I should then be able to go to my DNS address in a browser.

I'm not sure my logic is sound, and I'd love input on where the missing piece of the puzzle is.
- Deployed the K8s cluster with Terraform
- Used Terraform to reserve two static internal IPs (one for Kibana and one for ES); see the sketch after this list
- Used Terraform to create a `google_dns_record_set` for each IP in our company's shared VPC (also in the sketch below)
- Used Terraform to create a GCP service account that can access the K8s cluster (second sketch below)
- Generated a credentials file
- Deployed ES with the configuration below
- Deployed a load balancer for ES with the config below
- But nothing shows up when I go to `https://<dns address>` or to the IP itself; what am I missing?
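For context, here is a simplified sketch of the IP and DNS pieces. The resource names and all the `var.*` values are placeholders rather than my real config, and the Kibana resources are analogous:

```hcl
# Reserve a static internal IP on the shared VPC subnet.
resource "google_compute_address" "es_ip" {
  name         = "es-internal-ip"                 # placeholder name
  address_type = "INTERNAL"
  subnetwork   = var.shared_vpc_subnet_self_link  # shared VPC subnet (placeholder)
  region       = var.region
}

# Point an A record in our private zone at the reserved IP.
# Note: the record name must be fully qualified and end with a dot.
resource "google_dns_record_set" "es_dns" {
  name         = "es.${var.private_zone_domain}"  # e.g. "es.corp.example."
  managed_zone = var.private_zone_name
  type         = "A"
  ttl          = 300
  rrdatas      = [google_compute_address.es_ip.address]
}
```

The same reserved address is what I pass to the Service's `loadBalancerIP` below.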
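The service account is roughly this (again, names and the role binding are simplified placeholders, not my exact setup):

```hcl
# Service account used to talk to the GKE cluster.
resource "google_service_account" "gke_access" {
  account_id   = "gke-access"  # placeholder
  display_name = "GKE cluster access"
}

# Grant it enough access to deploy workloads to the cluster.
resource "google_project_iam_member" "gke_access_developer" {
  project = var.project_id
  role    = "roles/container.developer"
  member  = "serviceAccount:${google_service_account.gke_access.email}"
}
```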
ES config YAML:
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: main
spec:
  version: 8.6.0
  volumeClaimDeletePolicy: DeleteOnScaledownAndClusterDeletion
  secureSettings:
    - secretName: gcs-credentials
  nodeSets:
    - name: ndpool
      count: 3
      config:
        node.store.allow_mmap: false
        xpack.ml.enabled: true
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 2Gi
```
ES load balancer YAML (the Kibana one is similar):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: eslb-https
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    elasticsearch.k8s.elastic.co/name: main
  ports:
    - protocol: TCP
      port: 443
      targetPort: 9200
  loadBalancerIP: 10.x.x.x
```