I am deploying ECK on GKE in a private Kubernetes cluster. Only one service in that cluster will talk to Elasticsearch, so I don't need HTTPS or user:password authentication. All I want is a simple ClusterIP service that can be accessed directly by that service from within the Kubernetes cluster.
Please let me know how to do that.
TLS can be disabled as explained here: https://www.elastic.co/guide/en/cloud-on-k8s/0.9/k8s-accessing-elastic-services.html#k8s-disable-tls.
```yaml
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: true
```
Basic authentication can be bypassed by enabling anonymous access: https://www.elastic.co/guide/en/elastic-stack-overview/7.4/anonymous-access.html.
```yaml
spec:
  nodes:
  - nodeCount: 1
    config:
      xpack.security.authc:
        anonymous:
          username: anonymous
          roles: superuser
          authz_exception: false
```
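With both of those settings applied, access boils down to pointing your client at the ClusterIP service ECK creates for the cluster. A minimal sketch, assuming your Elasticsearch resource is named `quickstart` in the `default` namespace (substitute your own names):

```shell
# ECK exposes Elasticsearch through a ClusterIP service named <cluster-name>-es-http.
# With TLS and authentication disabled, it is reachable over plain HTTP on port 9200.
ES_URL="http://quickstart-es-http.default.svc:9200"
echo "$ES_URL"

# From any pod in the cluster, a health check would then look like:
#   curl "$ES_URL/_cluster/health"
```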
Even in a private cluster, we don't recommend turning off these security layers.
Obviously, it is up to you to make the decision.
How does this apply to Kibana? I've tried applying the settings above to my elasticsearch deployment, as well as the following on my Kibana deployment:
```yaml
config:
  xpack.security.enabled: false
```
This causes the Kibana pods to get stuck on startup at "Optimizing and caching bundles". Re-enabling X-Pack security lets them start up again.
I also tried disabling TLS. It fails with:

```
Error from server (TLS cannot be disabled for Elasticsearch currently): error when creating "elasticsearch.yaml": admission webhook "validation.elasticsearch.elastic.co" denied the request: TLS cannot be disabled for Elasticsearch currently
```
Disabling X-Pack on Kibana forces Kibana to replay the optimization process that generates the JS bundles for all of the installed plugins.
This optimization process is very CPU/memory intensive and can take up to several minutes to complete depending on the underlying hardware.
I tested on my side and with the default Kibana resources, the Kibana pod is OOMKilled.
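If you want to check whether you are hitting the same OOMKill, you can inspect the last termination state of the Kibana container. A hedged sketch that only builds the `kubectl` command, assuming a placeholder pod name of `quickstart-kb-abc123` (yours will differ):

```shell
# "quickstart-kb-abc123" is a hypothetical pod name; substitute your own.
# An OOMKilled container reports "OOMKilled" in lastState.terminated.reason.
POD="quickstart-kb-abc123"
CMD="kubectl get pod $POD -o jsonpath='{.status.containerStatuses[?(@.name==\"kibana\")].lastState.terminated.reason}'"
echo "$CMD"
```

Running the printed command against your cluster prints `OOMKilled` when the container was killed for exceeding its memory limit.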
You can give more memory to your Kibana instance(s) to speed up this process. This is documented here: https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-managing-compute-resources.html#k8s-compute-resources-kibana-and-apm.
After increasing the memory limit to 4Gi, the optimization took 83s for me.
```yaml
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.4.0
  count: 1
  elasticsearchRef:
    name: quickstart
  config:
    xpack.security.enabled: false
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 4Gi
```
The provided solution does not work.
The Kibana pod does not get past its readiness probe:
```
Events:
  Type     Reason     Age                   From                                    Message
  ----     ------     ----                  ----                                    -------
  Normal   Scheduled  4m37s                 default-scheduler                       Successfully assigned monitoring/quickstart-kb-56b8d49d6f-76jjf to ip-10-51-189-46.ec2.internal
  Normal   Pulled     4m36s                 kubelet, ip-10-51-189-46.ec2.internal   Container image "docker.elastic.co/kibana/kibana:7.5.1" already present on machine
  Normal   Created    4m36s                 kubelet, ip-10-51-189-46.ec2.internal   Created container kibana
  Normal   Started    4m36s                 kubelet, ip-10-51-189-46.ec2.internal   Started container kibana
  Warning  Unhealthy  62s (x4 over 92s)     kubelet, ip-10-51-189-46.ec2.internal   Readiness probe failed: Get https://10.51.176.85:5601/login: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  55s (x17 over 4m14s)  kubelet, ip-10-51-189-46.ec2.internal   Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy  47s                   kubelet, ip-10-51-189-46.ec2.internal   Readiness probe failed: HTTP probe failed with statuscode: 404
```
```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: cluster
spec:
  version: 7.6.0
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  config:
    xpack.security.enabled: false
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 4Gi
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 5601
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
  count: 1
  elasticsearchRef:
    name: cluster
```