Http client did not trust this server's certificate, closing connection

I am trying to sync MySQL with Elasticsearch (version 8.13.2).

I followed the docs for setup, but I am getting the below error in my Elasticsearch logs:

{"@timestamp":"2024-04-11T07:52:04.039Z", "log.level": "WARN", "message":"http client did not trust this server's certificate, closing connection Netty4HttpChannel{localAddress=/192.168.112.4:9200, remoteAddress=/192.168.112.2:34886}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#2]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"Qt28vyF0TcK1cw0viUEa3A","elasticsearch.node.id":"Am1-U0puSaiXFDpmNOt-Mw","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}

Below is my docker-compose.yml:

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: kibana\n"\
          "    dns:\n"\
          "      - kibana\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: logstash01\n"\
          "    dns:\n"\
          "      - logstash01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  logstash01:
    build:
      context: .
      dockerfile: Dockerfile-logstash
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
      laravel_db:
        condition: service_healthy
    volumes:
      - certs:/usr/share/logstash/certs
      - logstashdata01:/usr/share/logstash/data
      - ./volumes/logstash/pipeline/:/usr/share/logstash/pipeline/
      - ./volumes/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./volumes/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ./volumes/logstash/config/queries/:/usr/share/logstash/config/queries/
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTICSEARCH_HOSTS=https://es01:9200

networks:
  default:
    name: elastic

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local
  logstashdata01:
    driver: local

My Logstash conf file:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-8.0.22.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://mysql_8:3306/pcs_accounts_db"
    jdbc_user => "pcs_db_user"
    jdbc_password => "laravel_db"
    sql_log_level => "debug"  
    clean_run => true 
    record_last_run => false
    type => "txn"
    statement => "SELECT * FROM ac_transaction_dump"
  }

  jdbc {
    jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-8.0.22.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://mysql_8:3306/pcs_accounts_db"
    jdbc_user => "pcs_db_user"
    jdbc_password => "laravel_db"
    sql_log_level => "debug"  
    clean_run => true 
    record_last_run => false
    type => "trial"
    statement => "SELECT * FROM ac_daily_trial_balance"
  }
}

filter {  
  mutate {
    remove_field => ["@version", "@timestamp"]
  }
}

output {

  stdout { codec => rubydebug { metadata => true } }

  if [type] == "txn" {
    elasticsearch {
      hosts => ["https://es01:9200"]
      data_stream => "false"
      index => "ac_transaction_dump"
      document_id => "%{transaction_dump_id}"
    }
  }

  if [type] == "trial" {
    elasticsearch {
      hosts => ["https://es01:9200"]
      data_stream => "false"
      index => "ac_daily_trial_balance"
      document_id => "%{daily_trial_balance_id}"
    }
  }
}

Please reply to this thread if any more info is needed.

You need to configure `ssl_certificate_authorities` on your Logstash elasticsearch output so that it trusts the CA of your ES node.
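For example, assuming the paths from the compose file above (where the `certs` volume is mounted at `/usr/share/logstash/certs`), each `elasticsearch` output block would look something like this. Since security is enabled on es01, the output also needs credentials; the `ELASTIC_USER`/`ELASTIC_PASSWORD` variables are the ones already passed into the logstash01 container:

```
  if [type] == "txn" {
    elasticsearch {
      hosts => ["https://es01:9200"]
      # Trust the CA that signed es01's certificate
      # (path comes from the certs volume mount in docker-compose.yml):
      ssl_enabled => true
      ssl_certificate_authorities => ["/usr/share/logstash/certs/ca/ca.crt"]
      # Security is enabled, so authenticate as well:
      user => "${ELASTIC_USER}"
      password => "${ELASTIC_PASSWORD}"
      data_stream => "false"
      index => "ac_transaction_dump"
      document_id => "%{transaction_dump_id}"
    }
  }
```

On older plugin versions the setting is called `cacert => "/usr/share/logstash/certs/ca/ca.crt"` instead; `ssl_enabled`/`ssl_certificate_authorities` are the current names in 8.x.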

Thanks for the reply.

But in the docs:
`ssl.certificate_authorities: (Static) List of paths to PEM encoded certificate files that should be trusted. You cannot use this setting and ssl.truststore.path at the same time.`

I am a little confused about this "PEM encoded" thing.

My certificates are stored as shown in the image below. The path is "/usr/share/elasticsearch/config/certs", and the zip file contains two files: one with the extension ".crt" and another with ".key".

Please explain what exactly I need to do.

ca.crt is your PEM encoded certificate authority.
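"PEM encoded" just means the certificate is stored as base64 text between `-----BEGIN/END-----` markers, which is exactly what `elasticsearch-certutil`'s `--pem` flag produces. You can verify this yourself; the sketch below uses a throwaway self-signed certificate as a stand-in for your `ca.crt`:

```shell
# Create a throwaway self-signed cert as a stand-in for ca.crt:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout /tmp/demo.key -out /tmp/demo.crt

# A PEM file is plain text; the first line is the BEGIN marker:
head -n 1 /tmp/demo.crt
# -----BEGIN CERTIFICATE-----

# Print the subject to confirm openssl parses it as PEM:
openssl x509 -in /tmp/demo.crt -noout -subject
```

Run the same `openssl x509` command against your real `ca.crt` path; if it prints the subject without errors, the file is PEM encoded and usable for `ssl_certificate_authorities`.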

Thanks for the clarification.

But now there is one more thing: in the image I've shared, the "ca" folder and the zip files all contain a "ca.crt" and a "ca.key". What is the difference between these files?

Also, which path should I choose?

Another question: in many places (Stack Overflow, Medium, etc.) I noticed the path being used is "/etc/elasticsearch/...", but in my file structure there is no "elasticsearch" directory inside "/etc"; it is stored in "/usr/share/elasticsearch".

What is the difference between these two paths?

Hello. As far as I know, the CA is the certificate authority; it is what makes the Elasticsearch certificate trusted by a client machine over HTTPS. The zip file contains the CA's ca.crt and ca.key.

As for the paths: /etc/elasticsearch/ holds the configuration files, while /usr/share/elasticsearch holds the binaries (the executable files). That layout applies when you install Elasticsearch as a service (e.g. from a package); in the Docker image everything lives under /usr/share/elasticsearch.