PostgreSQL connector unable to connect to Elasticsearch server

I am currently trying to get my PostgreSQL data into Elasticsearch to make use of its search capabilities. I've tried it using the PostgreSQL connector, without luck. For running Elasticsearch, Kibana, and the connector, I've used the following Docker Compose file and config.yml.

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120


  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - SERVER_PUBLICBASEURL=http://localhost:5601
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  postgres-connector:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/enterprise-search/elastic-connectors:8.15.0.0
    command: /app/bin/elastic-ingest -c /config/config.yml
    volumes:
      - "${APPDATA}/Docker/komott/elastic:/config"
    tty: true


volumes:
  certs:
    driver: local
  esdata:
    driver: local
  kibanadata:
    driver: local


## ================= Elastic Connectors Configuration ==================
#
## NOTE: Elastic Connectors comes with reasonable defaults.
##       Before adjusting the configuration, make sure you understand what you
##       are trying to accomplish and the consequences.
#
#
## ------------------------------- Connectors -------------------------------
#
connectors:
  - connector_id: "37Z1W5EBoL9rlbaWvgaQ"
    service_type: "postgresql"
    api_key: "NExaMVc1RUJvTDlybGJhV19nYTA6Ykg1VTBwVVBUNjZiUXBIdDVjcEdXUQ=="

elasticsearch:
  host: "http://172.18.0.3:9200"
  api_key: "NExaMVc1RUJvTDlybGJhV19nYTA6Ykg1VTBwVVBUNjZiUXBIdDVjcEdXUQ=="

##  The list of connector clients/customized connectors configurations.
##    Each object in the list requires `connector_id` and `service_type`.
##    An example is:
##    connectors:
##      - connector_id: changeme # the ID of the connector.
##        service_type: changeme # The service type of the connector.
##        api_key: changeme # The Elasticsearch API key used to write data into the content index.
#connectors: []
#
#
##  The ID of the connector.
##    (Deprecated. Configure the connector client in an object in the `connectors` list)
#connector_id: null
#
#
##  The service type of the connector.
##    (Deprecated. Configure the connector client in an object in the `connectors` list)
#service_type: null
#
#
## ------------------------------- Elasticsearch -------------------------------
#
## The host of the Elasticsearch deployment.
#elasticsearch.host: http://localhost:9200
#
#
## The API key for Elasticsearch connection.
##    Using `api_key` is recommended instead of `username`/`password`.
#elasticsearch.api_key: null
#
#
##  The username for the Elasticsearch connection.
##    Using `username` requires `password` to also be configured.
##    However, `elasticsearch.api_key` is the recommended configuration choice.
#elasticsearch.username: "elastic"
#
#
##  The password for the Elasticsearch connection.
##    Using `password` requires `username` to also be configured.
##    However, `elasticsearch.api_key` is the recommended configuration choice.
#elasticsearch.password: "test1234"
#
#
##  Whether SSL is used for the Elasticsearch connection.
#elasticsearch.ssl: false
#
#
##  Path to a CA bundle, e.g. /path/to/ca.crt
#elasticsearch.ca_certs: null
#
#
##  Whether to retry on request timeout.
#elasticsearch.retry_on_timeout: true
#
#
##  The request timeout to be passed to transport in options.
#elasticsearch.request_timeout: 120
#
#
##  The maximum wait duration (in seconds) for the Elasticsearch connection.
#elasticsearch.max_wait_duration: 60
#
#
##  The initial backoff duration (in seconds).
#elasticsearch.initial_backoff_duration: 1
#
#
##  The backoff multiplier.
#elasticsearch.backoff_multiplier: 2
#
#
##  Elasticsearch log level
#elasticsearch.log_level: INFO
#
#
##  Maximum number of times failed Elasticsearch requests are retried, except bulk requests
#elasticsearch.max_retries: 5
#
#
##  Retry interval between failed Elasticsearch requests, except bulk requests
#elasticsearch.retry_interval: 10
#
#
## ------------------------------- Elasticsearch: Bulk ------------------------
#
##  Options for the Bulk API calls behavior - all options can be
##    overridden by each source class
#
#
##  The number of docs between each counters display.
#elasticsearch.bulk.display_every: 100
#
#
##  The max size of the bulk queue
#elasticsearch.bulk.queue_max_size: 1024
#
#
##  The max size in MB of the bulk queue.
##    When it's reached, the next put operation waits for the queue size to
##    get under that limit.
#elasticsearch.bulk.queue_max_mem_size: 25
#
#
##  Minimal interval of time between MemQueue checks for being full
#elasticsearch.bulk.queue_refresh_interval: 1
#
#
##  Maximal interval of time during which MemQueue does not dequeue a single document
##  For example, if no documents were sent to Elasticsearch within 60 seconds because of
##  Elasticsearch being overloaded, then an error will be raised.
##  This mechanism exists to be a circuit-breaker for stuck jobs and stuck Elasticsearch.
#elasticsearch.bulk.queue_refresh_timeout: 60
#
#
##  The max size in MB of a bulk request.
##    When the next request being prepared reaches that size, the query is
##    emitted even if `chunk_size` is not yet reached.
#elasticsearch.bulk.chunk_max_mem_size: 5
#
#
##  The max size of the bulk operation to Elasticsearch.
#elasticsearch.bulk.chunk_size: 500
#
#
##  Maximum number of concurrent bulk requests.
#elasticsearch.bulk.max_concurrency: 5
#
#
##  Maximum number of concurrent downloads in the backend.
#elasticsearch.bulk.concurrent_downloads: 10
#
#
##  Maximum number of times failed bulk requests are retried
#elasticsearch.bulk.max_retries: 5
#
#
##  Retry interval between failed bulk attempts
#elasticsearch.bulk.retry_interval: 10
#
#
##  Enable to log ids of created/indexed/deleted/updated documents during a sync.
##    This will be logged on 'DEBUG' log level. Note: this depends on the service.log_level, not elasticsearch.log_level
#elasticsearch.bulk.enable_operations_logging: false
#
## ------------------------------- Elasticsearch: Experimental ------------------------
#
##  Experimental configuration options for Elasticsearch interactions.
#
#
##  Enable usage of Connectors API instead of calling connectors indices directly
#elasticsearch.feature_use_connectors_api: false
## ------------------------------- Service ----------------------------------
#
##  Connector service/framework related configurations
#
#
##  The interval (in seconds) to poll connectors from Elasticsearch.
#service.idling: 30
#
#
##  The interval (in seconds) to send a new heartbeat for a connector.
#service.heartbeat: 300
#
#
##  The maximum number of retries for pre-flight check.
#service.preflight_max_attempts: 10
#
#
##  The number of seconds to wait between each pre-flight check.
#service.preflight_idle: 30
#
#
##  The maximum number of errors allowed in one event loop.
#service.max_errors: 20
#
#
##  The number of seconds to reset `max_errors` count.
#service.max_errors_span: 600
#
#
##  The maximum number of concurrent content syncs.
#service.max_concurrent_content_syncs: 1
#
#
##  The maximum number of concurrent access control syncs.
#service.max_concurrent_access_control_syncs: 1
#
#
##  The maximum size (in bytes) of files that the framework should be willing
##    to download and/or process.
#service.max_file_download_size: 10485760
#
##  The interval (in seconds) to run job cleanup task.
#service.job_cleanup_interval: 300
#
#
##  Connector service log level.
#service.log_level: INFO
#
#
## ------------------------------- Extraction Service ----------------------------------
#
##  Local extraction service-related configurations.
##    These configurations are optional and are not included by default.
##    The presence of these configurations enables local content extraction.
##    By default, this whole object is `null`.
##    See: https://www.elastic.co/guide/en/enterprise-search/current/connectors-content-extraction.html#connectors-content-extraction-local
#
#
##  The host of the local extraction service.
#extraction_service.host: null
#
#
##  Request timeout for local extraction service requests, in seconds.
#extraction_service.timeout: 30
#
#
##  Whether or not to use file pointers for local extraction.
##    This can have very positive impacts on performance -
##    both speed and memory consumption.
##    However, it also requires that the Connectors deployment and the
##    local extraction service deployment must share a filesystem.
#extraction_service.use_file_pointers: False
#
#
##  The size (in bytes) that files are chunked to for streaming when sending
##    a file to the local extraction service.
##    Only applicable if `extraction_service.use_file_pointers` is `false`.
#extraction_service.stream_chunk_size: 65536
#
#
##  The location for files to be extracted from.
##    Only applicable if `extraction_service.use_file_pointers` is `true`.
#extraction_service.shared_volume_dir: /app/files
#
#
## ------------------------------- Sources ----------------------------------
#
##  An object mapping service type names to class Fully Qualified Names
##    E.g. `connectors.sources.mongo:MongoDataSource`.
##    If adding a net-new connector, it must be added here for the framework to detect it.
##    Default includes all tech preview, beta, and GA connectors in this repository.
##    An example is:
##    sources:
##      mongodb: connectors.sources.mongo:MongoDataSource

The Elasticsearch server and Kibana are working without any problems, and I've already logged to Elasticsearch from an ASP.NET Core 8 API. But when running the Postgres connector, it always logs the following error:


2024-08-16 16:01:41 [FMWK][14:01:41][INFO] Waiting for Elasticsearch at http://172.18.0.3:9200 (so far: 63 secs)
2024-08-16 16:01:41 [FMWK][14:01:41][ERROR] Could not connect to the Elasticsearch server
2024-08-16 16:01:41 [FMWK][14:01:41][ERROR] Server disconnected

Every time the connector tries to connect, the following log appears in the Elasticsearch server container:


2024-08-16 16:00:38 {"@timestamp":"2024-08-16T14:00:38.608Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.18.0.3:9200, remoteAddress=/172.18.0.2:51720}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#12]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"t89LDE8BTUeRaYZOq7lcfg","elasticsearch.node.id":"x5o2soqQSfOGgNz6n-6qtw","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"Komott_Api_Beta"}

Additionally, I changed the host address in the config.yml from localhost to the IP address of the Docker container, because when I tried it with localhost, the following error was logged in the connector and no request was logged on the Elasticsearch server:


2024-08-16 16:23:54 [FMWK][14:23:54][INFO] Waiting for Elasticsearch at localhost (so far: 7 secs)
2024-08-16 16:23:54 [FMWK][14:23:54][ERROR] Could not connect to the Elasticsearch server
2024-08-16 16:23:54 [FMWK][14:23:54][ERROR] Cannot connect to host localhost:9200 ssl:default [Connect call failed ('127.0.0.1', 920

This surprised me a little, because my ASP.NET Core logger logs to localhost and works without any problems.

Hey @juki1245, sorry for the late reply. I think the error logged by ES is the root cause of your problems:

received plaintext http traffic on an https channel

In your connector config you have

elasticsearch:
  host: "http://172.18.0.3:9200"

But you are running your ES cluster with security and SSL enabled, so it's available at https://172.18.0.3:9200 (note: https). I think changing that should fix the issue, given that you are able to connect to your cluster with your ASP.NET Core 8 API.
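
For reference, a minimal sketch of what the corrected section could look like. This is an assumption based on the compose file above: es01 is the compose service name and one of the DNS names the setup service bakes into the certificate, so it passes hostname verification, whereas the raw container IP 172.18.0.3 is not in the certificate's SANs.

elasticsearch:
  host: "https://es01:9200"           # https, since xpack.security.http.ssl.enabled=true
  api_key: "<same API key as before>"
  ca_certs: /config/certs/ca/ca.crt   # CA created by the setup service; assumed mount path

Note that once you switch to https, the connector verifies the server certificate, so ca_certs has to point at the CA inside the connector container (the next reply shows the error you get when it can't).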

Hi,

In a Docker Compose stack setup with the connectors image docker.elastic.co/enterprise-search/elastic-connectors:${CONNECTORS_VERSION}, you may see the following error in
> docker-compose logs connectors

Cannot connect to host es01:9200 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')]

Simply find and set the value for elasticsearch.ca_certs in the config.yml of your connectors.

Example:

elasticsearch.ca_certs: /the_path_in_the_container/ca/ca.crt
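
If your connectors container is part of the same compose stack as Elasticsearch (as in the file above), one way to make the CA reachable is to mount the shared certs volume into the container and point elasticsearch.ca_certs at it. A minimal sketch, assuming /config/certs as the mount point (any path works as long as config.yml matches):

  postgres-connector:
    image: docker.elastic.co/enterprise-search/elastic-connectors:8.15.0.0
    command: /app/bin/elastic-ingest -c /config/config.yml
    volumes:
      - "${APPDATA}/Docker/komott/elastic:/config"
      - certs:/config/certs   # the named volume the setup service writes ca/ca.crt into

and then in config.yml:

elasticsearch.ca_certs: /config/certs/ca/ca.crt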