Elasticsearch with Logstash SSL communication error

Hi!
(version 8.11.1)

I keep hitting a problem with my ELK stack configuration and I need some help with it.
I needed to enable the Kibana login page, and I found out that I need to turn SSL on in the Elasticsearch configuration.
My first question would be whether it's possible to enable the login page on Kibana in another way.

I followed the tutorial listed here: Getting started with the Elastic Stack and Docker-Compose | Elastic Blog and I managed to enable the login page on Kibana; both the Elasticsearch and Kibana images are running without problems.
The issue appears when I add Logstash to the stack. This is the current state of the configuration that I have, starting with the setup service:

setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes: 
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD env variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD env variabler in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          bin/elasticsearch-certutil cert --name elasticsearch --silent --pem -out config/certs/certs.zip --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://elasticsearch:9200 -u elastic:${ELASTIC_PASSWORD} | grep -q "${CLUSTER_NAME}"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "Done!";
        '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/elasticsearch/elasticsearch.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
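
For reference, once the setup container has run, the certs volume should end up laid out roughly like this; I'm inferring the layout from the certutil --name flag and the healthcheck path:

certs/
├── ca/
│   ├── ca.crt
│   └── ca.key
└── elasticsearch/
    ├── elasticsearch.crt
    └── elasticsearch.key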

This is for elasticsearch:

elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    restart: unless-stopped
    container_name: elasticsearch
    depends_on:
      setup:
        condition: service_healthy
    ports:
      - 9200:9200
    environment:
      - discovery.type=single-node
      - node.name=elasticsearch
      - cluster.name=${CLUSTER_NAME}
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/elasticsearch/elasticsearch.key
      - xpack.security.http.ssl.certificate=certs/elasticsearch/elasticsearch.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/elasticsearch/elasticsearch.key
      - xpack.security.transport.ssl.certificate=certs/elasticsearch/elasticsearch.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - elasticsearch-data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test: ["CMD-SHELL", "curl -s --cacert config/certs/ca/ca.crt https://elasticsearch:9200 -u elastic:${ELASTIC_PASSWORD} | grep -q \"${CLUSTER_NAME}\" || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 5
      start_period: 15s

and this is for logstash:

  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    restart: unless-stopped
    container_name: logstash
    depends_on:
      elasticsearch:
        condition: service_healthy
    user: "0"
    environment:
      - ELASTICSEARCH_HOST=https://elasticsearch:9200
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - certs:/etc/logstash/config/certs
      - ./logstash/pipeline:/usr/share/logstash/pipeline/
    healthcheck:
      test: "bin/logstash -t"
      interval: 30s
      timeout: 30s
      retries: 5
      start_period: 15s

and the logstash conf:

output {
	elasticsearch {
		hosts => ["https://elasticsearch:9200"]
		user => "elastic"
		password => "${ELASTIC_PASSWORD}"
		manage_template => false
		index => "test-%{+YYYY.MM.dd}"
		ssl_enabled => true
		ssl_certificate_authorities => ["/etc/logstash/config/certs/ca/ca.crt"]
		ssl_certificate => "/etc/logstash/config/certs/elasticsearch/elasticsearch.crt"
		ssl_key => "/etc/logstash/config/certs/elasticsearch/elasticsearch.key"
	}
}

With this configuration I get this error:

[ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<Java::JavaSecuritySpec::InvalidKeySpecException: java.security.InvalidKeyException: IOException : algid parse error, not a sequence>, :backtrace=>["sun.security.rsa.RSAKeyFactory.engineGeneratePrivate(sun/security/rsa/RSAKeyFactory.java:253)"

and from there Logstash shuts down.
If there is a need I can post the full error. What I got from this is that the key cannot be parsed (more on the key format below), and if I remove these two lines:

ssl_certificate => "/etc/logstash/config/certs/elasticsearch/elasticsearch.crt"
ssl_key => "/etc/logstash/config/certs/elasticsearch/elasticsearch.key"
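
From searching around, that "algid parse error, not a sequence" seems to mean the ssl_key is not in the PKCS#8 format that the Logstash elasticsearch output expects; the key generated by elasticsearch-certutil starts with -----BEGIN RSA PRIVATE KEY----- (PKCS#1). If the client cert options were actually needed, something like this should convert the key (untested sketch, run wherever the certs volume is accessible):

# Check the header: "BEGIN RSA PRIVATE KEY" means PKCS#1, which the Logstash output cannot parse
head -1 certs/elasticsearch/elasticsearch.key
# Convert to an unencrypted PKCS#8 key that the Logstash output can load
openssl pkcs8 -topk8 -nocrypt \
  -in certs/elasticsearch/elasticsearch.key \
  -out certs/elasticsearch/elasticsearch.pkcs8.key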

With those two lines removed, it seems at first that the connection is working:

logstash               | [2024-04-05T09:33:57,521][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://elasticsearch:9200"]}
logstash               | [2024-04-05T09:33:57,557][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@elasticsearch:9200/]}}
logstash               | [2024-04-05T09:33:57,749][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@elasticsearch:9200/"}
logstash               | [2024-04-05T09:33:57,750][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.11.1) {:es_version=>8}
logstash               | [2024-04-05T09:33:57,750][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
logstash               | [2024-04-05T09:33:57,772][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"log-%{+YYYY.MM.dd}"}

and then I get these errors:

elasticsearch          | {"@timestamp":"2024-04-05T09:35:55.999Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.28.0.3:9200, remoteAddress=/172.28.0.4:33244}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][transport_worker][T#2]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"eX4SRKGNRnGI7SzSB0_DpQ","elasticsearch.node.id":"dT8PCmiTTziACkRZCOTTFA","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"elk-cluster"}
logstash               | [2024-04-05T09:35:56,000][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: elasticsearch:9200 failed to respond>}
logstash               | [2024-04-05T09:35:56,001][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ClientProtocolException] elasticsearch:9200 failed to respond"}
logstash               | [2024-04-05T09:35:56,013][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
logstash               | [2024-04-05T09:35:56,013][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

If you have any suggestion I'm happy to hear them.

Hi @Blaj_Dragos

First, you shouldn't need those two lines... just the ssl_certificate_authorities line, as in the sketch below.
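
Something like this should be all the output needs (a sketch based on your own config; ssl_certificate and ssl_key are for client-certificate authentication, which your Elasticsearch isn't asking for):

output {
	elasticsearch {
		hosts => ["https://elasticsearch:9200"]
		user => "elastic"
		password => "${ELASTIC_PASSWORD}"
		manage_template => false
		index => "test-%{+YYYY.MM.dd}"
		ssl_enabled => true
		ssl_certificate_authorities => ["/etc/logstash/config/certs/ca/ca.crt"]
	}
}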

I think there's something else going on... I think you have more than one logstash config file or something...

Because, as you can see, something is trying to connect over HTTP, not HTTPS. Is there more than one conf file in your local ./logstash/pipeline directory? If so, they get concatenated together.

OK, so Elasticsearch complains that it received HTTP on an HTTPS channel. It cannot respond, so it drops the connection. Logstash then logs that it cannot connect to http://elasticsearch:9200, which is indeed HTTP.

If xpack.monitoring.elasticsearch.hosts is set in logstash.yml then I believe the licensechecker will connect to that. The issue is not with your elasticsearch output.
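
If the default logstash.yml in the image points the monitoring/license checks at plain http, that would explain those errors exactly. A sketch of what it would need to look like for your TLS setup (the CA path assumes your certs volume mount):

xpack.monitoring.elasticsearch.hosts: [ "https://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "${ELASTIC_PASSWORD}"
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/etc/logstash/config/certs/ca/ca.crt"

You can either edit logstash.yml directly or set the equivalent values from the compose file.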


Hi! I'm sorry for the late response!

I removed the two lines, and when I checked the /usr/share/logstash/pipeline directory on the running container I found only one file there, "logstash.conf", with the config I posted above.

Hi! I'm sorry for the late response!

I checked "/usr/share/logstash/config/logstash.yml" file and found this lines inside: "xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]".
Then I put these lines in docker compose for logstash:

XPACK_MONITORING_ELASTICSEARCH_USERNAME: username
XPACK_MONITORING_ELASTICSEARCH_PASSWORD: password
XPACK_MONITORING_ELASTICSEARCH_HOSTS: https://elasticsearch:9200
XPACK_MONITORING_ELASTICSEARCH_SSL_CERTIFICATEAUTHORITY: path/to/certificate/ca.crt

And it's working now. Thank you for the help!
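
For anyone who lands here later: the Logstash Docker image merges environment variables into logstash.yml at startup, so the variables above should end up as the equivalent settings and override the plain-http default. Roughly (placeholder values kept from the post above):

xpack.monitoring.elasticsearch.username: username
xpack.monitoring.elasticsearch.password: password
xpack.monitoring.elasticsearch.hosts: "https://elasticsearch:9200"
xpack.monitoring.elasticsearch.ssl.certificate_authority: path/to/certificate/ca.crt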