Hi,
I'm trying to configure TLS between Logstash and Elasticsearch.
I have configured TLS on my cluster (3 nodes), apparently correctly, and I can access it using Kibana.
In Logstash (installed locally on each server) I'm using the elasticsearch output plugin, but when I start it I only get this error:
[2018-03-02T18:59:24,125][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
[2018-03-02T18:59:25,400][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0xae84822@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246
while in the Elasticsearch logs I get:
[2018-03-02T18:59:24,124][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [elk1] http client did not trust this server's certificate, closing connection [id: 0x3612b68c, L:0.0.0.0/0.0.0.0:9200 ! R:/127.0.0.1:47433]
Can you help me?
I know that the ssl option should not be mandatory since I specify https in the hosts key, and I also know that ssl_certificate_verification should not be disabled.
On the server running Logstash, replace localhost:9200 with your_ip_server:9200 (or DNS_name:9200) if Elasticsearch is not running on the same server as your Logstash.
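The PKIX error in your logs means the JVM running Logstash does not trust the certificate Elasticsearch presents, so you also need to point the output at the CA that signed your node certificates. A minimal sketch of the output block (the CA path and credentials here are placeholders, not taken from your setup):

```
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "elastic"
    password => "changeme"
    # CA certificate that signed the Elasticsearch node certificates;
    # without it Logstash cannot build the PKIX trust path.
    cacert => "/etc/logstash/certs/ca.crt"
  }
}
```

With cacert set, you should not need to touch ssl_certificate_verification at all.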
If you register Elasticsearch's root certificate in the truststore used by Logstash's JVM, then you don't need to add a cacert in your Logstash conf file; it's used by default. You'll only need to register the username and password, as Hallaoui mentioned.
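A sketch of those two steps, assuming the CA file lives at /etc/logstash/certs/ca.crt and you run the commands from the Logstash install directory (adjust paths for your install; on Java 8 the truststore is under jre/lib/security/cacerts):

```
# Import the Elasticsearch CA into the JVM truststore used by Logstash
keytool -importcert -trustcacerts -alias elastic-ca \
  -file /etc/logstash/certs/ca.crt \
  -keystore "$JAVA_HOME/lib/security/cacerts"

# Store the credentials in the Logstash keystore instead of the conf file
bin/logstash-keystore add ES_USER
bin/logstash-keystore add ES_PWD
```

You can then reference the secrets in the pipeline as user => "${ES_USER}" and password => "${ES_PWD}", so no plain-text password sits in the conf file.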