... but I still cannot connect to the cluster with the TransportClient (the famous NoNodeAvailableException), whereas I have no problem connecting with the Java high-level REST client.
Can you give more details about your transport client setup? Which URL does it use to connect to the nodes, which certificates, etc.
The service looks correct to me, but we would need to spend some time trying to reproduce with the transport client, which is deprecated.
I am trying to connect an open-source service called ARLAS-server, which instantiates a TransportClient here and depends on Elasticsearch 7.3.2.
I think it should work properly without enabling SSL (i.e. no XPackTransportClient) and without sniffing... but it cannot connect. To fix this, I tried the different client setups available in ARLAS-server, but none of them works.
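For context, a minimal TransportClient setup without SSL and without sniffing looks roughly like the sketch below. The cluster name `quickstart` and the service hostname `quickstart-es-http` are assumptions taken from the examples in this thread; a `cluster.name` that does not match the Elasticsearch cluster is itself a classic cause of NoNodeAvailableException.

```java
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class TransportClientSketch {
    public static void main(String[] args) throws Exception {
        // "quickstart" is an assumption: it must match the name of the
        // Elasticsearch resource deployed by ECK.
        Settings settings = Settings.builder()
                .put("cluster.name", "quickstart")
                .put("client.transport.sniff", false) // sniffing off, as described above
                .build();

        try (TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new TransportAddress(
                        InetAddress.getByName("quickstart-es-http"), 9300))) {
            // An empty list here means no node was reachable on 9300;
            // any actual request would then throw NoNodeAvailableException.
            System.out.println(client.connectedNodes());
        }
    }
}
```

If the ECK cluster has transport TLS enabled, this plain (non-SSL) client will fail the handshake on 9300 even when the port itself is reachable.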
I deployed the ARLAS-server container in the same GKE cluster and used the following config:
I'm not familiar at all with ARLAS-server. A few things that may help you:
Are you able to reach the Elasticsearch port from your ARLAS-server container? You could try with something like `nc -vz quickstart-es-http 9300`.
According to that piece of code, it looks like you can use SSL, which is enabled by default. To verify the Elasticsearch server certificate, you may need to provide the `ca.crt` stored in the `<cluster-name>-es-transport-certs-public` Kubernetes secret.
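As a sketch of how that CA could be made available to the client, the secret can be mounted into the ARLAS-server pod. The cluster name `quickstart`, the container name, and the mount path below are all assumptions for illustration:

```yaml
# Sketch: mount the ECK transport CA into the ARLAS-server pod
# (assumes the Elasticsearch resource is named "quickstart").
spec:
  containers:
    - name: arlas-server
      volumeMounts:
        - name: es-transport-ca
          mountPath: /usr/share/arlas/certs   # hypothetical path
          readOnly: true
  volumes:
    - name: es-transport-ca
      secret:
        secretName: quickstart-es-transport-certs-public
```

The client would then point its truststore configuration at the mounted `ca.crt`.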
The ARLAS-server container is able to reach the Elasticsearch cluster on the transport port (9300); that was one of the first things I checked when facing the NoNodeAvailableException.
We have switched off TLS on the cluster to avoid additional connection problems, but maybe it's only switched off for HTTP (9200) and not for transport (9300). From reading this, it looks like the transport layer is not configurable with ECK, right?
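One way to check whether the transport port still speaks TLS is to probe it with `openssl s_client` from inside the cluster. The pod placeholder and the service name `quickstart-es-http` (reused from the `nc` example above) are assumptions, and this only works if `openssl` is available in the image:

```shell
# If transport TLS is on, openssl prints a certificate chain;
# if TLS is off, the handshake fails immediately.
kubectl exec -it <arlas-server-pod> -- \
  openssl s_client -connect quickstart-es-http:9300 </dev/null
```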
By the way, it looks like the cluster does not store the certificate you mentioned:
```shell
% kubectl get secret | grep es-transport
%
% kubectl get secret | grep es-http
%
```
Maybe it's the root cause of my issue, but I cannot find documentation to fix it.
If the transport layer is not configurable with ECK, does that mean that remote clusters cannot be used with ECK?
As a reminder: to connect to a remote cluster, one needs to use the transport port (9300), not the HTTP port (9200).
Any workarounds?
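For reference, this is why the question matters: remote cluster seeds are declared against the transport port. A sketch of the relevant cluster setting, with a purely illustrative alias and hostname:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.remote.my_remote.seeds": [ "remote-es-transport-host:9300" ]
  }
}
```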