How do I connect to Elasticsearch `8.x` from Logstash using an `API Key`?

Hi,

I have been using ELK for the last 5 years. My codebase is mostly Logstash pipelines where the input is a JDBC connection (a database) and, after filtering, the output is an Elasticsearch cluster (in most cases).

So far I have been using version 6.8.x for both Elasticsearch and Logstash.

Recently I set up a new Elasticsearch cluster with version 8.11 (locally), and I was able to get both Elasticsearch and Kibana up and running with SSL enabled, i.e., https://localhost:9200 for Elasticsearch.

I have also created a plain and simple API Key for the shell scripts that create/manage indices, templates, etc., and those are working fine.

The major problem I am facing now is with Logstash. The same script that has worked for the last 5 years is now failing to connect to Elasticsearch because of the new security defaults.

Can you please help me with the changes required in the elasticsearch input, filter and output plugins to read from and write to an ES cluster running version 8.x?

Below is a small snippet of my Logstash script's output plugin. Please point out where I am getting it wrong.
Also, please suggest the changes I would need to make to send the data to a remote cluster rather than my own localhost.

Thank you in advance.

output {
    if "test_standalone" in [tags] {
        #stdout {
        #    codec => rubydebug {
        #        metadata => true
        #    }
        #}

        # Insert in test Index
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "test_new"
            document_id => "%{[@metadata][_id]}"
            api_key => "<API_Key ID>:<API_Key_encoded_value>"
            manage_template => false
            # action => "index"
            ssl_enabled => true
            retry_on_conflict => 5
            pipeline => "my_ingest_pipeline"
        }
    }
}

Please note that I have used the api_key in `<id_value>:<encoded_value>` format here.

The error is:

[2023-12-05T12:54:49,042][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2023-12-05T12:54:49,199][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://localhost:9200/]}}
[2023-12-05T12:54:49,397][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target>}
[2023-12-05T12:54:49,399][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://localhost:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
[2023-12-05T12:54:49,417][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"test_new", "retry_on_conflict"=>5}
[2023-12-05T12:54:49,417][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`

Update:

With the configuration below, I was able to successfully send data from Logstash to Elasticsearch (both on version 8.x) over SSL.

I just added ssl_verification_mode => "none" to the config and, instead of using the base64-encoded value for the API Key, I used the actual (non-encoded) key value that I had received via the API.

output {
    if "test_standalone" in [tags] {
        #stdout {
        #    codec => rubydebug {
        #        metadata => true
        #    }
        #}

        # Insert in test Index
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "test_new"
            document_id => "%{[@metadata][_id]}"
            api_key => "<API_Key ID>:<API_Key_value>"
            manage_template => false
            # action => "index"
            ssl_enabled => true
            ssl_verification_mode => "none"
            retry_on_conflict => 5
            pipeline => "my_ingest_pipeline"
        }
    }
}
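As an aside, the PKIX error above seems to mean that the Java client simply does not trust the self-signed certificate that Elasticsearch 8.x generates on first startup, so I believe a safer alternative to disabling verification entirely would be to keep full verification and point Logstash at that CA. A minimal sketch, assuming a default local install (the certificate path is specific to my setup, and very old plugin versions may call this option cacert instead):

        elasticsearch {
            hosts => ["https://localhost:9200"]
            index => "test_new"
            api_key => "<API_Key ID>:<API_Key_value>"
            ssl_enabled => true
            # Trust the CA that signed the cluster's HTTP certificate instead of
            # turning verification off; this path assumes a default 8.x install.
            ssl_certificate_authorities => ["/path/to/elasticsearch/config/certs/http_ca.crt"]
        }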

But I am still unclear whether the above settings are the correct ones for connecting to a remote Elasticsearch cluster, say, one running on Elastic Cloud (elastic.co).

Any help or insight will be appreciated. Thanks!

After pointing the same settings at the remote Elasticsearch cluster instead of localhost:9200, I am receiving the error below:

:exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable:

Can someone please suggest what changes I need to make in this config?

Note: I have tried both with ssl_verification_mode => "none" and without it, i.e., with the default ssl_verification_mode => "full".

        # Insert in test Index
        elasticsearch {
            hosts => ["<remote server address>"]
            index => "test_new"
            document_id => "%{[@metadata][_id]}"
            api_key => "<API_Key ID>:<API_Key_value>"
            manage_template => false
            # action => "index"
            ssl_enabled => true
            # ssl_verification_mode => "none"
            retry_on_conflict => 5
            pipeline => "my_ingest_pipeline"
        }

Hi @pushanbhattacharya, nice thread, thanks for posting your journey.

Are you connecting to Elastic Cloud?

If so, you have to put port :443 on the host URL; otherwise it will default to :9200, which does not work for Elastic Cloud.

Note: when you copy the cluster endpoint, you get
https://asldkfjhasdlfkjhasdflkasjdfh.es.us-west1.gcp.cloud.es.io

but you need to use
https://asldkfjhasdlfkjhasdflkasjdfh.es.us-west1.gcp.cloud.es.io:443

Also, you should be able to comment out the verification setting, because the Elastic Cloud certificate is trusted / publicly signed:

# ssl_verification_mode => "none"
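Also FYI: instead of the endpoint URL + port, the output plugin can take the Cloud ID shown on the deployment page. A rough sketch (the cloud_id value below is a placeholder, and I believe api_key works alongside it the same way):

        elasticsearch {
            # Cloud ID copied from the Elastic Cloud deployment page; replaces hosts + port
            cloud_id => "<deployment_name>:<base64_encoded_cloud_id>"
            api_key => "<API_Key ID>:<API_Key_value>"
            index => "test_new"
        }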


Hi @stephenb,

Yes, we have moved to Elastic Cloud (v8.x).

Thank you so much! I was really stuck there, and this simple change helped me fix the issue.

A suggestion from my side: it would be great if this information were called out in the documentation.

Now I shall try the same with the Elasticsearch input and filter plugins and share my findings; a first sketch for the input is below, after the final output config.

Below is the final version of the output config:

        # Insert in test Index
        elasticsearch {
            hosts => ["<remote server address>:443"]
            index => "test_new"
            document_id => "%{[@metadata][_id]}"
            api_key => "<API_Key ID>:<API_Key_value>"
            manage_template => false
            # action => "index"
            ssl_enabled => true
            retry_on_conflict => 5
            pipeline => "my_ingest_pipeline"
        }
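For the input side, I am planning to start from something roughly like this (untested; I am assuming the input plugin mirrors the output's api_key and standardized SSL options, and older plugin versions may use ssl instead of ssl_enabled):

        # Read the same index back via the elasticsearch input plugin
        input {
            elasticsearch {
                hosts => ["<remote server address>:443"]
                index => "test_new"
                query => '{ "query": { "match_all": {} } }'
                api_key => "<API_Key ID>:<API_Key_value>"
                ssl_enabled => true
            }
        }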

Thanks again!

