Hi,
I have been using the ELK stack for the last 5 years. My codebase is mostly Logstash pipelines where the input is a JDBC connection (a database) and, after filtering, the output is an Elasticsearch cluster (in most cases).
So far I had been using version 6.8.x for both Elasticsearch and Logstash.
Recently I set up a new Elasticsearch cluster with version 8.11 (on my local machine), and I was able to get both ES and Kibana up and running with SSL enabled, i.e., https://localhost:9200 for Elasticsearch.
I have also created a plain and simple API key for the shell scripts that create/manage indices, templates, etc., and those are working fine.
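For reference, the working shell-script calls look roughly like this (a minimal sketch: the key value and CA path are placeholders, and `http_ca.crt` refers to the CA certificate that the 8.x security auto-configuration generates in the Elasticsearch config directory):

```shell
#!/bin/sh
# Placeholder: the real value is the "encoded" field returned by
# POST /_security/api_key when the key was created.
ES_API_KEY="UGxhY2Vob2xkZXJJZDpwbGFjZWhvbGRlcktleQ=="

# Create an index over HTTPS, trusting the cluster's own CA certificate.
curl --cacert /path/to/http_ca.crt \
  -H "Authorization: ApiKey ${ES_API_KEY}" \
  -X PUT "https://localhost:9200/test_new"
```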
The major problem I am facing now is with Logstash. The same script that had been working for the last 5 years is now failing with a connection error to Elasticsearch, caused by the new security setup.
Can you please help me with the required changes to the elasticsearch input, filter, and output plugins so that they can read from and write to an ES 8.x cluster?
Below is a small snippet of my Logstash script's output plugin. Please guide me on where I am getting it wrong. Also, please suggest the changes I would need if I were sending the data to a remote cluster rather than my own localhost.
Thank you in advance.
output {
  if "test_standalone" in [tags] {
    #stdout {
    #  codec => rubydebug {
    #    metadata => true
    #  }
    #}

    # Insert into the test index
    elasticsearch {
      hosts             => ["localhost:9200"]
      index             => "test_new"
      document_id       => "%{[@metadata][_id]}"
      api_key           => "<API_Key ID>:<API_Key_encoded_value>"
      manage_template   => false
      # action          => "index"
      ssl_enabled       => true
      retry_on_conflict => 5
      pipeline          => "my_ingest_pipeline"
    }
  }
}
Please note that I have used the api_key in <id_value>:<encoded_value> format here.
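In other words (placeholder values below; the real ones come from the create-API-key response, which contains `id`, `api_key`, and a ready-made base64 `encoded` field), the string I pass to the plugin is the plain colon-joined pair, not the base64 form that a raw HTTP client would put in an Authorization header:

```shell
#!/bin/sh
# Placeholder credentials from a hypothetical create-API-key response.
api_key_id="abc123"
api_key_value="xyz789"

# Form used in the logstash `api_key` option: plain "id:value".
echo "${api_key_id}:${api_key_value}"   # -> abc123:xyz789

# Form a raw HTTP client would send instead: base64 of the same pair.
printf '%s:%s' "$api_key_id" "$api_key_value" | base64   # -> YWJjMTIzOnh5ejc4OQ==
```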
The error is:
[2023-12-05T12:54:49,042][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2023-12-05T12:54:49,199][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://localhost:9200/]}}
[2023-12-05T12:54:49,397][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target>}
[2023-12-05T12:54:49,399][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://localhost:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
[2023-12-05T12:54:49,417][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"test_new", "retry_on_conflict"=>5}
[2023-12-05T12:54:49,417][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`