Enterprise Search running in a Docker container can't connect to Elasticsearch running as a service on Windows

Hi, I'm trying to connect Enterprise Search, which I run as a Docker container, to Elasticsearch and Kibana, which are running as Windows services. However, every time I run Enterprise Search it exits with:

[pre-flight] Error: /usr/share/enterprise-search/lib/war/shared_togo/lib/shared_togo/elasticsearch_checks.class:142: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused (Connection refused) (Faraday::ConnectionFailed)

enterprise-search.yml

# Encryption keys to protect application secrets.
secret_management.encryption_keys: ["680f94e568c90364bedf927b2f0f49609702d3eab9098688585a375b14274546"]
  # example:
  #- 680f94e568c90364bedf927b2f0f49609702d3eab9098688585a375b14274546

## ----------------------------------------------------

# IP address Enterprise Search listens on
ent_search.listen_host: 0.0.0.0

# URL at which users reach Enterprise Search / Kibana
ent_search.external_url: http://host.docker.internal:3002
kibana.host: http://localhost:5601

# Elasticsearch URL and credentials
elasticsearch.host: http://localhost:9200
elasticsearch.username: elastic
elasticsearch.password: 'changeme'

# Allow Enterprise Search to modify Elasticsearch settings. Used to enable auto-creation of Elasticsearch indexes.
allow_es_settings_modification: true

elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: es_cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
discovery.type: single-node
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 30-04-2023 17:47:23
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.license.self_generated.type: basic

xpack.security.authc.api_key.enabled: true
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["elm"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: host.docker.internal

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

Hi @Mtuni_Globbal, welcome to the community.

This is a Docker networking issue: inside the container, localhost resolves to the container itself, not to the Windows host where Elasticsearch is listening.

This is most likely the same issue as a previous thread.

Try using:

elasticsearch.host: http://host.docker.internal:9200
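For context, host.docker.internal is provided automatically by Docker Desktop on Windows and macOS, but not by plain Docker on Linux, where it has to be mapped explicitly. A minimal sketch of starting the container with that mapping — the image tag and port are assumptions, adjust to your setup:

```shell
# On Docker Desktop (Windows/macOS) host.docker.internal resolves out of the
# box; on Linux, map it to the host gateway explicitly with --add-host.
# Enterprise Search settings can also be passed as environment variables.
docker run \
  --add-host=host.docker.internal:host-gateway \
  -p 3002:3002 \
  -e "elasticsearch.host=http://host.docker.internal:9200" \
  docker.elastic.co/enterprise-search/enterprise-search:8.7.0
```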

Thanks for the reply. When using elasticsearch.host: http://host.docker.internal:9200, I now get this error:

[pre-flight] Error: /usr/share/enterprise-search/lib/war/shared_togo/lib/shared_togo/elasticsearch_checks.class:179: host.docker.internal:9200 failed to respond (Faraday::ClientError)

Try shelling into the Enterprise Search Docker container and using curl to connect to Elasticsearch.

Is Elasticsearch running on HTTPS? Yes it is, so you will need to use SSL:

elasticsearch.host: https://host.docker.internal:9200

You will also need to provide the CA certificate or disable SSL verification.
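For reference, both options go in the Enterprise Search configuration file, not in elasticsearch.yml. A minimal sketch — the CA path is an assumption; point it at wherever you have mounted the certificate Elasticsearch generated:

```yaml
# enterprise-search.yml — either trust the Elasticsearch CA explicitly:
elasticsearch.host: https://host.docker.internal:9200
elasticsearch.ssl.enabled: true
elasticsearch.ssl.certificate_authority: /usr/share/enterprise-search/es-ca.crt  # assumed mount path

# ...or, for testing only, skip certificate verification:
elasticsearch.ssl.verify: false
```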

Have you validated that you can connect to Elasticsearch both outside and inside the Docker container?

Outside, I can connect to Elasticsearch using https://localhost:9200 (since I am running it as a Windows service). It is when I try connecting from inside the Enterprise Search Docker container that I get issues. I also tried elasticsearch.host: https://host.docker.internal:9200 and disabled SSL verification in the elasticsearch.yml file, but I am still getting a verification issue.

You need to run the curl command and actually show the results. We cannot help without the error logs or the output of the command.

From inside the Enterprise Search Docker container, run

curl -k -v -u elastic https://host.docker.internal:9200

and show the results.

Also, you would not disable verification in elasticsearch.yml; that setting belongs in enterprise-search.yml:

elasticsearch.ssl.verify: false
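To confirm the certificate chain itself rather than skipping verification with -k, curl can be pointed at the same CA from inside the container. A sketch under the assumption that the CA file is available at the path shown:

```shell
# Verify against the Elasticsearch CA instead of disabling verification;
# a successful JSON response confirms the certificate chain is trusted.
curl --cacert /usr/share/enterprise-search/es-ca.crt \
     -u elastic https://host.docker.internal:9200
```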

Uses proxy env variable https_proxy == 'https://proxy.apm.co.za:8080'

  • Trying 192.8.10.1:8080...
  • Connected to proxy.apm.co.za (192.8.10.1) port 8080 (#0)
  • schannel: disabled automatic use of client certificate
  • ALPN: offers http/1.1
  • schannel: failed to receive handshake, SSL/TLS connection failed
  • Closing connection 0
  • schannel: shutting down SSL/TLS connection with proxy.apm.co.za port 8080
    curl: (35) schannel: failed to receive handshake, SSL/TLS connection failed

This is the output I get. Docker seems to be bypassing the elasticsearch.ssl.verify: false line, since it is still failing to connect due to SSL verification.
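One detail worth noting in the trace above: curl reports "Uses proxy env variable https_proxy", so the failing handshake is with the corporate proxy, not with Elasticsearch (and curl -k already skips certificate verification). A sketch of ruling the proxy out for this one request, assuming the variable is set inside the container:

```shell
# Check whether proxy variables are set in the container environment:
env | grep -i _proxy

# Bypass the proxy for the Docker host on a single request; --noproxy tells
# curl to connect directly to the named host even if https_proxy is set.
curl -k -v -u elastic --noproxy host.docker.internal \
  https://host.docker.internal:9200
```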

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.