Logstash not connecting to Elasticsearch - using Docker Compose

Hello,

I'm trying to create indexes in Elasticsearch from a PostgreSQL database,
so I set up Docker Compose:

version: '3.8'

services:
  postgres:
    image: postgres:latest
    volumes:
      - C:\Users\theor\desktop\travail\noota\noot-search\initdb:/docker-entrypoint-initdb.d/
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    ports:
      - "5432:5432"
    networks:
      - elastic-infrastructure-network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.2
    environment:
      - discovery.type=single-node
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=tidoz!@#6BtAqY7sQSck
      - logger.level=ERROR
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic-infrastructure-network

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elastic-infrastructure-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.2
    environment:
      - LOG_LEVEL=error
    volumes:
      - ./logstash_configs/logstash-dossiers.conf:/usr/share/logstash/pipeline/logstash-dossiers.conf
      - ./logstash_configs/logstash-familytags.conf:/usr/share/logstash/pipeline/logstash-familytags.conf
      - C:\Users\theor\desktop\travail\noota\noot-search\jdbc_driver:/usr/share/logstash/jdbc_driver
    depends_on:
      - postgres
      - elasticsearch
    networks:
      - elastic-infrastructure-network

networks:
  elastic-infrastructure-network:


volumes:
  postgres_data:

I also installed the JDBC input plugin and created configuration files for my indexes:

logstash_configs/logstash-dossiers.conf:
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/jdbc_driver/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://postgres:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "mysecretpassword"
    schedule => "0 * * * *"
    statement => "SELECT * FROM dossiers"
  }
  tcp {
    port => 50000
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "elastic"
    password => "tidoz!@#6BtAqY7sQSck"
    index => "dossiers_index"
    document_type => "_doc"
  }
}

logstash_configs/logstash-familytags.conf:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/jdbc_driver/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://postgres:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "mysecretpassword"
    schedule => "0 * * * *"
    statement => "SELECT * FROM familytags"
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "elastic"
    password => "tidoz!@#6BtAqY7sQSck"
    index => "familytags"
    document_type => "_doc"
  }
}

But when I run docker compose, I get the following logs:

Elasticsearch:

2023-12-15 14:02:44 Dec 15, 2023 1:02:44 PM sun.util.locale.provider.LocaleProviderAdapter <clinit>
2023-12-15 14:02:44 WARNING: COMPAT locale provider will be removed in a future release
2023-12-15 14:02:27 Created elasticsearch keystore in /usr/share/elasticsearch/config/elasticsearch.keystore

Logstash:

2023-12-15 14:02:27 Using bundled JDK: /usr/share/logstash/jdk
2023-12-15 14:02:42 Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
2023-12-15 14:02:45 [2023-12-15T13:02:45,142][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-15 14:02:45 [2023-12-15T13:02:45,150][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-15 14:02:45 [2023-12-15T13:02:45,169][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
2023-12-15 14:03:15 [2023-12-15T13:03:15,163][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-15 14:03:15 [2023-12-15T13:03:15,164][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-15 14:03:45 [2023-12-15T13:03:45,162][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-15 14:03:45 [2023-12-15T13:03:45,163][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-15 14:02:27 2023/12/15 13:02:27 Setting 'log.level' from environment.

I've tried disabling SSL and re-enabling it, but nothing works.
I'm a little lost and don't really know how to solve this problem; if you have a solution, I'm interested.

Thank you in advance for your help.

Theo.

Version 8 uses https by default, so you should use https, not http.

Is your Elasticsearch container running without any issues? You would have many more logs from it if it were running correctly. Can you share the logs from your Elasticsearch container?

Also, what is the result of this command from your Docker host?

curl -k https://elasticsearch:9200 -u elastic

It will ask for your elastic password.

Hello,

This is the result of the command you sent.

sh-5.0$ curl -k https://elasticsearch:9200 -u elastic
Enter host password for user 'elastic':
{
  "name" : "f43bc57bd9e6",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "yk2Qf3LQQWqGmViNzDuRkg",
  "version" : {
    "number" : "8.11.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "76013fa76dcbf144c886990c6290715f5dc2ae20",
    "build_date" : "2023-12-05T10:03:47.729926671Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
sh-5.0$ 

These are the logs I have inside my Elasticsearch container:

2023-12-15 15:42:50 WARNING: COMPAT locale provider will be removed in a future release
2023-12-15 15:42:38 Created elasticsearch keystore in /usr/share/elasticsearch/config/elasticsearch.keystore

This is one of my .conf files used to create the index:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/jdbc_driver/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://postgres:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "mysecretpassword"
    schedule => "0 * * * *"
    statement => "SELECT * FROM dossiers"
  }
  tcp {
    port => 50000
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    user => "elastic"
    password => "tidoz!@#6BtAqY7sQSck"
    cacert => "/usr/share/logstash/http_ca.crt"
    index => "dossiers_index"
    document_type => "_doc"
    ssl => true
    ssl_certificate_verification => false  # Temporarily disabling SSL verification
  }
}

I see no issues; it should work.

In your first configuration you were using http in your Logstash output; it needs to be https. Can you change it back to https and check whether the error still persists?

Does it get logged constantly, or only while the containers are starting?

Also, you can remove the line document_type => "_doc"; this setting no longer exists.
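
To illustrate, a minimal sketch of the corrected output block (host, index, and credentials taken from your config; whether a CA path is also needed depends on how the cluster's certificate was issued):

```conf
output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]   # https, not http, on 8.x
    user => "elastic"
    password => "tidoz!@#6BtAqY7sQSck"
    index => "dossiers_index"
    # document_type removed: mapping types no longer exist in Elasticsearch 8
  }
}
```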

I made the changes:

This is my compose file:


services:
  postgres:
    image: postgres:latest
    volumes:
      - C:\Users\theor\desktop\travail\noota\noot-search\initdb:/docker-entrypoint-initdb.d/
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    ports:
      - "5432:5432"
    networks:
      - elastic-infrastructure-network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.2
    environment:
      - discovery.type=single-node
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=tidoz!@#6BtAqY7sQSck
      - logger.level=ERROR
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic-infrastructure-network

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elastic-infrastructure-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.2
    environment:
      - LOG_LEVEL=error
    volumes:
      - ./logstash_configs/config:/usr/share/logstash/pipeline
      - ./logstash_configs/http_ca.crt:/usr/share/logstash/http_ca.crt:ro
      - C:\Users\theor\desktop\travail\noota\noot-search\jdbc_driver:/usr/share/logstash/jdbc_driver

    depends_on:
      - postgres
      - elasticsearch
    networks:
      - elastic-infrastructure-network

networks:
  elastic-infrastructure-network:


volumes:
  postgres_data:

and one of my config files:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/jdbc_driver/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://postgres:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "mysecretpassword"
    schedule => "0 * * * *"
    statement => "SELECT * FROM dossiers"
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    user => "elastic"
    password => "tidoz!@#6BtAqY7sQSck"
    ssl_certificate_authorities => "/usr/share/logstash/http_ca.crt"
    ssl_verification_mode => "full"
    index => "dossiers_index"
    ssl => true
  }
}

I still have issues:

2023-12-15 17:00:24 Using bundled JDK: /usr/share/logstash/jdk
2023-12-15 17:00:38 Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
2023-12-15 17:00:41 [2023-12-15T16:00:41,029][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-15 17:00:24 2023/12/15 16:00:24 Setting 'log.level' from environment.
2023-12-15 17:00:41 [2023-12-15T16:00:41,040][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-15 17:00:41 [2023-12-15T16:00:41,067][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
2023-12-15 17:01:11 [2023-12-15T16:01:11,063][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-15 17:01:11 [2023-12-15T16:01:11,069][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-15 17:01:41 [2023-12-15T16:01:41,061][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-15 17:01:41 [2023-12-15T16:01:41,062][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-15 17:02:11 [2023-12-15T16:02:11,061][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-15 17:02:11 [2023-12-15T16:02:11,062][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

Can you please run your docker compose with the log level for Logstash set to INFO and share the logs?

I just updated the docker-compose:

version: '3.8'

services:
  postgres:
    image: postgres:latest
    volumes:
      - C:\Users\theor\desktop\travail\noota\noot-search\initdb:/docker-entrypoint-initdb.d/
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    ports:
      - "5432:5432"
    networks:
      - elastic-infrastructure-network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.2
    environment:
      - discovery.type=single-node
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=tidoz!@#6BtAqY7sQSck
      - logger.level=ERROR
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic-infrastructure-network

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elastic-infrastructure-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.2
    environment:
      - LOG_LEVEL=info
    volumes:
      - ./logstash_configs/config:/usr/share/logstash/pipeline
      - ./logstash_configs/http_ca.crt:/usr/share/logstash/http_ca.crt:ro
      - C:\Users\theor\desktop\travail\noota\noot-search\jdbc_driver:/usr/share/logstash/jdbc_driver

    depends_on:
      - postgres
      - elasticsearch
    networks:
      - elastic-infrastructure-network

networks:
  elastic-infrastructure-network:


volumes:
  postgres_data:


And got these logs:

2023-12-15 23:32:01 [2023-12-15T22:32:01,641][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://elasticsearch:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
2023-12-15 23:32:01 [2023-12-15T22:32:01,661][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://elasticsearch:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
2023-12-15 23:32:01 [2023-12-15T22:32:01,707][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors>}
2023-12-15 23:32:01 [2023-12-15T22:32:01,708][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://elasticsearch:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
2023-12-15 23:32:06 [2023-12-15T22:32:06,649][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors>}
2023-12-15 23:32:06 [2023-12-15T22:32:06,649][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://elasticsearch:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
2023-12-15 23:32:06 [2023-12-15T22:32:06,650][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors>}
2023-12-15 23:32:06 [2023-12-15T22:32:06,650][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://elasticsearch:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
2023-12-15 23:32:06 [2023-12-15T22:32:06,659][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors>}
2023-12-15 23:32:06 [2023-12-15T22:32:06,660][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://elasticsearch:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
2023-12-15 23:32:06 [2023-12-15T22:32:06,682][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors>}

So, you have a certificate error.

How did you create the certificate for your Elasticsearch?

The setting ssl_certificate_authorities needs to point to the same CA that was used to create your Elasticsearch certificate; from the logs you shared, something is not correct.

See if setting ssl_verification_mode to none works; this will confirm that you indeed have a certificate error.
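
As a sketch, the diagnostic change would look like this (temporary, for testing only, since it disables certificate validation entirely):

```conf
output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    user => "elastic"
    password => "tidoz!@#6BtAqY7sQSck"
    index => "dossiers_index"
    # Diagnostic only: accept any certificate to confirm the CA mismatch
    ssl_verification_mode => "none"
  }
}
```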

I got the 'http_ca.cert' from '/usr/share/elasticsearch/config/certificates/ca.crt' and docker cp'd it to my project; then I use it like this:

  - ./http_ca.crt:/usr/share/logstash/config/http_ca.crt:ro
2023-12-16 00:54:11 [2023-12-15T23:54:11,764][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-16 00:54:11 [2023-12-15T23:54:11,765][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-16 00:54:11 [2023-12-15T23:54:11,786][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
2023-12-16 00:54:11 [2023-12-15T23:54:11,897][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
2023-12-16 00:54:12 [2023-12-15T23:54:12,078][INFO ][org.reflections.Reflections] Reflections took 110 ms to scan 1 urls, producing 131 keys and 463 values
2023-12-16 00:54:12 [2023-12-15T23:54:12,492][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
2023-12-16 00:54:12 [2023-12-15T23:54:12,528][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x68a2566 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
2023-12-16 00:54:13 [2023-12-15T23:54:13,166][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.64}
2023-12-16 00:54:13 [2023-12-15T23:54:13,176][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
2023-12-16 00:54:13 [2023-12-15T23:54:13,189][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
2023-12-16 00:54:13 [2023-12-15T23:54:13,199][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
2023-12-16 00:54:13 [2023-12-15T23:54:13,290][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044
2023-12-16 00:54:41 [2023-12-15T23:54:41,785][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
2023-12-16 00:54:41 [2023-12-15T23:54:41,786][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
2023-12-16 00:54:41 [2023-12-15T23:54:41,796][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: elasticsearch:9200 failed to respond>}
2023-12-16 00:54:41 [2023-12-15T23:54:41,797][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ClientProtocolException] elasticsearch:9200 failed to respond"}

Is http_ca.cert the same file as ca.crt? You need to use ca.crt as the certificate authority in Logstash.

There is still something using http instead of https
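
Note that the licensechecker/licensereader lines come from Logstash's internal monitoring and license client, which reads the xpack.monitoring.* settings in logstash.yml rather than your pipeline .conf files, and the official Logstash Docker image ships a default logstash.yml pointing that client at http://elasticsearch:9200. A sketch of an override (the setting names are the standard monitoring options; the CA path is an assumption based on your mount):

```yaml
# logstash.yml
# Either silence the internal client entirely...
xpack.monitoring.enabled: false
# ...or keep monitoring and point it at https with the CA:
# xpack.monitoring.elasticsearch.hosts: ["https://elasticsearch:9200"]
# xpack.monitoring.elasticsearch.username: "elastic"
# xpack.monitoring.elasticsearch.password: "tidoz!@#6BtAqY7sQSck"
# xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/http_ca.crt"
```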

OK, I will try this, because I got the 'http_ca.cert' from '/usr/share/elasticsearch/config/certificates/ca.crt' manually after the container boot.

It's weird because I use https everywhere.

That said, it might be simpler to disable https for testing, right?

^^^ Hmmmm

I suspect you are not actually using the pipeline conf file you think you are... your earlier logs show "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], which is not one of the files you listed.

I'll try to clarify and re-explain in more detail what I'm trying to do:

I have a PostgreSQL database in a Docker container. I would like to clone it into Elasticsearch so that I can use Elasticsearch to search my data.

Basically, I have the following database in my postgres docker container:

postgres=# \dt
              List of relations
 Schema |      Name       | Type  |  Owner   
--------+-----------------+-------+----------
 public | alembic_version | table | postgres
 public | analysis        | table | postgres
 public | autofamilytags  | table | postgres
 public | business        | table | postgres
 public | dossiers        | table | postgres
 public | familytags      | table | postgres
 public | guideline       | table | postgres
 public | oauth           | table | postgres
 public | oauth2_client   | table | postgres
 public | oauth2_code     | table | postgres
 public | oauth2_token    | table | postgres
 public | parrainage      | table | postgres
 public | records         | table | postgres
 public | subscriptions   | table | postgres
 public | topictrackers   | table | postgres
 public | users           | table | postgres
(16 rows)

I would like to insert the contents of the 'dossiers' and 'records' tables into Elasticsearch, creating an index containing the data from each table (I will add the other tables later).

To do this, I created a configuration file for each index (PostgreSQL table) that I want to insert into Elasticsearch, in the logstash_configs/config folder:

    Directory: noot-search\logstash_configs\config

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       18/12/2023     16:56            485   logstash-dossiers.conf
-a----       18/12/2023     17:00            477   logstash-recorsds.conf

The files have the following content:
logstash-dossiers.conf:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/jdbc_driver/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://postgres:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "mysecretpassword"
    schedule => "0 * * * *"
    statement => "SELECT * FROM dossiers"
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    index => "dossiers_index"
  }
}

logstash-records.conf:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/jdbc_driver/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://postgres:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "mysecretpassword"
    schedule => "0 * * * *"
    statement => "SELECT * FROM records"
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    index => "records"
  }
}

These are the only .conf files I have in my project on my PC!
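
A side note that may matter here: by default Logstash concatenates every .conf file in /usr/share/logstash/pipeline into a single 'main' pipeline, so events from both jdbc inputs would flow through both elasticsearch outputs. If the two tables should stay separate, a pipelines.yml along these lines keeps them apart (a sketch; the pipeline ids are made up, and the file names assume the .conf files are mounted as named):

```yaml
# /usr/share/logstash/config/pipelines.yml
- pipeline.id: dossiers
  path.config: "/usr/share/logstash/pipeline/logstash-dossiers.conf"
- pipeline.id: records
  path.config: "/usr/share/logstash/pipeline/logstash-records.conf"
```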

The complete architecture is as follows:


Directory: noot-search\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----       14/12/2023     09:56                  app
d-----       14/12/2023     10:53                  initdb
d-----       14/12/2023     17:29                  jdbc_driver
d-----       18/12/2023     17:08                  logstash_configs
-a----       14/12/2023     09:56           1481   .gitignore
-a----       18/12/2023     19:18           1354   docker-compose.yml
-a----       18/12/2023     17:00             47   Dockerfile
-a----       13/12/2023     02:40           1132   requirements.txt

The latest version of my docker-compose.yml is the following:

version: '3.8'

services:
  postgres:
    image: postgres:latest
    volumes:
      - C:\Users\theor\desktop\travail\noota\noot-search\initdb:/docker-entrypoint-initdb.d/
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    ports:
      - "5432:5432"
    networks:
      - elastic-infrastructure-network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.2
    environment:
      - discovery.type=single-nodea
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=tidoz!@#6BtAqY7sQSck
      - logger.level=ERROR
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elastic-infrastructure-network

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elastic-infrastructure-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.2
    environment:
      - LOG_LEVEL=info
    volumes:
      - ./logstash_configs/config:/usr/share/logstash/pipeline
      - C:\Users\theor\desktop\travail\noota\noot-search\jdbc_driver:/usr/share/logstash/jdbc_driver
    depends_on:
      - postgres
      - elasticsearch
    networks:
      - elastic-infrastructure-network

networks:
  elastic-infrastructure-network:


volumes:
  postgres_data:

I feel like I'm missing something.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.