Hi Team,
I am using Elastic Cloud on Kubernetes (ECK). I want to use my own certificates for Elasticsearch and Kibana, but when I provided the certificates, Kibana was unable to connect to Elasticsearch.
tls:
  certificate:
    secretName: <my-secret-name>
This is the configuration I applied in both the Kibana and Elasticsearch manifests.
Issue:
Unable to retrieve version information from Elasticsearch: unable to verify the first certificate
Can anyone please help me with this?
Any guidance would be appreciated; this is impacting our delivery and our decision on whether to go with an Elastic license.
Could you check that the Elasticsearch certificate contains either the full certification chain, or that Kibana is set up to trust your CA?
Yes, the Elasticsearch secret contains all three files: ca.crt, tls.crt, and tls.key.
Can you explain how to set up Kibana to trust the Elasticsearch CA, so that I can verify whether I have done it correctly?
My first comment shows how I configured it.
Thanks @michael.morello
Assuming the two following points:
- the secret my-secret-name has been correctly created using the following command:
  kubectl create secret generic my-secret-name --from-file=ca.crt=ca.crt --from-file=tls.crt=es-cert.crt --from-file=tls.key=es-cert-key.pem
- ca.crt can be used to validate es-cert.crt
Then the operator should automatically make ca.crt available in the Kibana Pod, in the /usr/share/kibana/config/elasticsearch-certs/ca.crt file, which is used by Kibana to trust Elasticsearch.
- Could you check that there are no errors in the operator logs?
- Could you enter the Kibana Pod and run the following curl command?
  curl --cacert /usr/share/kibana/config/elasticsearch-certs/ca.crt -v https://clustername-es-http.namespace.svc:9200
  If you get an error about curl not being able to verify the legitimacy of the server, it means that the CA certificate is either not the right one or not updated by the operator.
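The second assumption can also be sanity-checked locally with openssl. The sketch below uses a throwaway CA so it is reproducible end to end; the file names mirror the secret keys above, and with your real PEM files only the final verify line is needed:

```shell
# Create a throwaway CA and a server certificate signed by it
# (hypothetical names; substitute the real files from your secret).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=demo-ca"
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr \
  -subj "/CN=demo-es"
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out tls.crt -days 1

# Prints "tls.crt: OK" when ca.crt really signs tls.crt; with an
# unrelated CA it fails much like the curl check does.
openssl verify -CAfile ca.crt tls.crt
```

If this check fails on the real files, the curl check inside the Pod will fail for the same reason, regardless of what the operator does.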
I have created the secret for Elasticsearch just as you described, but I still get the same issue:
["error","savedobjects-service"],"pid":1215,"message":"Unable to retrieve version information from Elasticsearch nodes. unable to verify the first certificate"}
One point I didn't understand: you said the operator automatically makes ca.crt available. Does that mean we don't have to configure anything in Kibana? I was passing the ca.crt of Elasticsearch to Kibana as follows:
elasticsearch.ssl.certificateAuthorities: [ "/usr/share/kibana/config/elasticsearch-certs/ca.crt" ]
The operator log message is:
"No internal CA certificate Secret found, creating a new one","service.version":"1.8.0+4f367c38","service.type":"eck","ecs.version":"1.4.0","owner_namespace":"vineeth","owner_name":"kibana","ca_type":"http"}
Thanks @michael.morello
Yes, as long as the root CA cert (ca.crt) is available in the secret then the operator should propagate and configure Kibana automatically.
This message is logged at the info level and might be expected when Kibana is created. I don't think it is relevant here.
I have followed exactly what you said, but I am still getting the same issue:
["error","savedobjects-service"],"pid":1215,"message":"Unable to retrieve version information from Elasticsearch nodes. unable to verify the first certificate"}
Could you provide the manifests for both Elasticsearch and Kibana, and also the output of the curl command in the Kibana Pod, as mentioned in my previous message?
Thanks
Okay @michael.morello
Elasticsearch.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic
  namespace: elastic
spec:
  auth: {}
  http:
    service:
      metadata:
        annotations: {}
      spec:
        ports:
        - name: https
          nodePort: 30761
          port: 30040
          protocol: TCP
          targetPort: 9200
        type: LoadBalancer
    tls:
      certificate:
        secretName: myown-signed-certs
  image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
  monitoring:
    logs: {}
    metrics: {}
  nodeSets:
  - config:
      node.data: false
      node.ingest: false
      node.master: true
      node.remote_cluster_client: false
      path.repo:
      - /usr/share/elasticsearch/logs
    count: 1
    name: master
    podTemplate:
      metadata: {}
      spec:
        containers:
        - env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          name: elasticsearch
          resources:
            limits:
              cpu: 200m
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 2Gi
          securityContext:
            runAsGroup: 1000
            runAsUser: 1000
          volumeMounts:
          - mountPath: /usr/share/elasticsearch/config/log4j2.properties
            name: log4j
            subPath: log4j2.properties
          - mountPath: /usr/share/elasticsearch/logs
            name: logs-backup
        initContainers:
        - name: elastic-internal-init-filesystem
          resources: {}
          securityContext:
            privileged: true
            runAsUser: 1000
        - command:
          - sh
          - -c
          - chown -R 1000:1000 /usr/share/elasticsearch/data && chown -R 1000:1000
            /usr/share/elasticsearch/logs && mkdir -p /usr/share/elasticsearch/logs/hourly
            && mkdir -p /usr/share/elasticsearch/logs/daily && mkdir -p /usr/share/elasticsearch/logs/weekly
            && mkdir -p /usr/share/elasticsearch/logs/monthly && sysctl -w vm.max_map_count=262144
          name: sysctl
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          securityContext:
            privileged: true
        volumes:
        - configMap:
            name: log4j
          name: log4j
        - name: logs-backup
          persistentVolumeClaim:
            claimName: elastic-master-backup
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: ebs-sc
      status: {}
  - config:
      node.data: true
      node.ingest: true
      node.master: false
      path.repo:
      - /usr/share/elasticsearch/logs
    count: 1
    name: ingest-data
    podTemplate:
      metadata: {}
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: elastic
                topologyKey: kubernetes.io/hostname
              weight: 100
        containers:
        - env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          name: elasticsearch
          resources:
            limits:
              cpu: "1"
              memory: 2Gi
            requests:
              cpu: "1"
              memory: 2Gi
          securityContext:
            runAsGroup: 1000
            runAsUser: 1000
          volumeMounts:
          - mountPath: /usr/share/elasticsearch/config/log4j2.properties
            name: log4j
            subPath: log4j2.properties
          - mountPath: /usr/share/elasticsearch/logs
            name: logs-backup
        initContainers:
        - name: elastic-internal-init-filesystem
          resources: {}
          securityContext:
            privileged: true
            runAsUser: 1000
        - command:
          - sh
          - -c
          - chown -R 1000:1000 /usr/share/elasticsearch/data && chown -R 1000:1000
            /usr/share/elasticsearch/logs && mkdir -p /usr/share/elasticsearch/logs/hourly
            && mkdir -p /usr/share/elasticsearch/logs/daily && mkdir -p /usr/share/elasticsearch/logs/weekly
            && mkdir -p /usr/share/elasticsearch/logs/monthly && sysctl -w vm.max_map_count=262144
          name: sysctl
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          securityContext:
            privileged: true
        volumes:
        - configMap:
            name: log4j
          name: log4j
        - name: logs-backup
          persistentVolumeClaim:
            claimName: elastic-data-backup
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: ebs-sc
      status: {}
  transport:
    service:
      metadata: {}
      spec: {}
    tls:
      certificate: {}
  updateStrategy:
    changeBudget: {}
  version: 7.15.0
Kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: kibana
  namespace: elastic
spec:
  config:
    elasticsearch.hosts:
    - https://<host_Ip>:30040
  count: 1
  elasticsearchRef:
    name: elastic
    namespace: elastic
  enterpriseSearchRef:
    name: ""
  http:
    service:
      metadata:
        annotations: {}
      spec:
        ports:
        - name: https
          port: 30041
          protocol: TCP
          targetPort: 5601
        type: LoadBalancer
    tls:
      certificate: {}
  image: docker.elastic.co/kibana/kibana:7.15.0
  monitoring:
    logs: {}
    metrics: {}
  podTemplate:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 2Gi
        securityContext:
          runAsGroup: 7282723
          runAsUser: 64000
  version: 7.15.0
These are my configuration files. I am deploying the operator directly as described in the ECK documentation, with this command:
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
If you need any further info, I will provide that as well.
Thanks @michael.morello.
Curl command inside the Kibana Pod:
curl --cacert /usr/share/kibana/config/elasticsearch-certs/ca.crt -v https://clustername-es-http.namespace.svc:30040
* Rebuilt URL to: https://clustername-es-http.namespace.svc:30040/
* Trying 18.116.243.43...
* TCP_NODELAY set
* Connected to clustername-es-http.namespace.svc (18.116.243.43) port 30040 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /usr/share/kibana/config/elasticsearch-certs/ca.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I think the issue is clear: it is because of ca.crt. But I have provided the exact ca.crt that I am using in Elasticsearch.
Could you please help me debug and resolve the issue?
Thanks @michael.morello
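For reference, "unable to get local issuer certificate" usually means the server presents a certificate whose issuer is not covered by the supplied CA file, a common case being a tls.crt signed by an intermediate CA while ca.crt only contains the root. A throwaway three-level chain reproduces the failure (all names hypothetical; this is a sketch, not the thread's actual certificates):

```shell
# Build root -> intermediate -> server with throwaway names.
printf "basicConstraints=CA:TRUE\n" > int.ext
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -days 1 -subj "/CN=demo-root"
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=demo-intermediate"
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -out int.crt -days 1 -extfile int.ext
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=demo-server"
openssl x509 -req -in srv.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -out srv.crt -days 1

# The root alone cannot verify the server cert: this fails with the same
# "unable to get local issuer certificate" error curl showed.
openssl verify -CAfile root.crt srv.crt || true

# Supplying the intermediate completes the chain and verification passes.
openssl verify -CAfile root.crt -untrusted int.crt srv.crt
```

If this matches your setup, the usual fix is to put the full chain (server certificate followed by the intermediates) into the tls.crt key of the secret, or to include the intermediate alongside the root in ca.crt.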
Hi, from the above output can you tell me what I have to do to resolve the issue, since I am using the correct ca.crt?
Thanks @michael.morello
One question: does the DNS name of the service, like elastic-es-http.namespace.svc, have to match the CN mentioned in the certificate? @michael.morello @elasticfran
Thanks
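For what it's worth: modern TLS clients (curl, and the Node.js stack Kibana runs on) match the hostname against the certificate's Subject Alternative Name entries rather than the CN, so the service DNS name does need to appear in the SAN list. The curl error above is a chain-trust failure rather than a hostname mismatch, though; curl reports those differently. A quick way to see which names a certificate covers, sketched with throwaway names:

```shell
# Issue a throwaway cert whose SAN lists in-cluster service names
# (hypothetical values; substitute your own), then print the SAN entries.
printf "subjectAltName=DNS:elastic-es-http.elastic.svc,DNS:elastic-es-http.elastic.svc.cluster.local\n" > san.ext
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=demo-ca"
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr \
  -subj "/CN=elastic-es-http"
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out tls.crt -days 1 -extfile san.ext

# List the hostnames the certificate is valid for.
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Running the same last command against your real tls.crt shows whether the service name you connect with is actually covered.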