Connecting to an Elasticsearch node in Elastic Cloud through a Python prompt

Hi, in order to create an ingestion pipeline into an Elasticsearch node running in Elastic Cloud, I'm using the following code (API authentication):

import configparser

from elasticsearch import Elasticsearch

# API authentication: read the Cloud ID and the API key pair from an ini file
config = configparser.ConfigParser()
config.read('api_auth.ini')

es = Elasticsearch(
    cloud_id=config['DEFAULT']['cloud_id'],
    api_key=(config['DEFAULT']['apikey_id'], config['DEFAULT']['apikey_key']),
)

es.info()

where my api_auth.ini file looks like:

[DEFAULT]
cloud_id = YYYYYYYYYYYYYYYYYY
apikey_id = ZZZZZZZZZZZZZZZZZ
apikey_key = XXXXXXXXXXXXXXX

However, I'm not able to set up a connection and I'm getting the following connection error:

elasticsearch.exceptions.ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7ff370053ee0>: Failed to establish a new connection: [Errno 61] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7ff370053ee0>: Failed to establish a new connection: [Errno 61] Connection refused)

Can you kindly help fix this issue? Thanks.

Adding port=9200, scheme="https" to the Elasticsearch() call apparently fixed that error, but now I'm getting the following connection error:

elasticsearch.exceptions.ConnectionError: ConnectionError((<urllib3.connection.HTTPSConnection object at 0x7f7b98ecfc40>, 'Connection to 318ee65a637a4d979a1fb8f38045448c.us-central1.gcp.cloud.es.io timed out. (connect timeout=10)')) caused by: ConnectTimeoutError((<urllib3.connection.HTTPSConnection object at 0x7f7b98ecfc40>, 'Connection to 318ee65a637a4d979a1fb8f38045448c.us-central1.gcp.cloud.es.io timed out. (connect timeout=10)'))
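For reference, after that change the client is constructed roughly like this (a sketch reusing the same api_auth.ini values as above; the port and scheme kwargs are the only additions):

import configparser

from elasticsearch import Elasticsearch

config = configparser.ConfigParser()
config.read('api_auth.ini')

# Same client as before, now with an explicit port and scheme
es = Elasticsearch(
    cloud_id=config['DEFAULT']['cloud_id'],
    api_key=(config['DEFAULT']['apikey_id'], config['DEFAULT']['apikey_key']),
    port=9200,
    scheme="https",
)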

Can anyone help fix the connection timeout error? I've tried the following fix, but it didn't work:

es.cluster.health(wait_for_status='yellow', request_timeout=1)
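(For reference, a longer default timeout can also be set on the client itself rather than per request; a minimal sketch, assuming a 7.x elasticsearch-py client and the same api_auth.ini as above:)

import configparser

from elasticsearch import Elasticsearch

config = configparser.ConfigParser()
config.read('api_auth.ini')

# Sketch: client-level default timeout plus retries on timeout (7.x kwargs);
# request_timeout can still override it on individual calls.
es = Elasticsearch(
    cloud_id=config['DEFAULT']['cloud_id'],
    api_key=(config['DEFAULT']['apikey_id'], config['DEFAULT']['apikey_key']),
    timeout=30,
    max_retries=3,
    retry_on_timeout=True,
)

es.cluster.health(wait_for_status='yellow', request_timeout=30)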

Has anybody encountered a similar error before?

Hi @Sentel_AI Welcome to the community!

I'm not a Python guru, but Elastic Cloud does not run on port 9200; it runs on 443 and/or 9243.

But according to the docs you shouldn't need any of those. Are you sure you have the API key ID, the API key, and the Cloud ID correct?

I just did this

from elasticsearch import Elasticsearch

# you can use RFC-1738 to specify the url

# ... or specify common parameters as kwargs

es = Elasticsearch(
    cloud_id="mycluster:sdfgsdfgdsfgdsfglvdsfgsdfgdsfgmNiM2JkNzRjNDY3JGIxZTUyOWEwNTBkNjRkODZhMzIxZTBhMjU3YjRlODhh",
    http_auth=("elastic", "dsfgsdfgsdfgg"),
)

output = es.info()

print(output, end=" ")

and it worked fine...

This also worked fine...

es = Elasticsearch(
    cloud_id="mycluster:sdfgsdfgdsfgdsfgsdfgsdfgsdfgsdfgsdfgsdfgsdfgsdfgdsfgkODZhMzIxZTBhMjU3YjRlODhh",
    api_key=('cew-IX0BKn301Oe9gPyz', '02I9MJ4OR2GdxYqY3I3cKA'),
)

Remember, this is NOT an Elastic Cloud API key; it is an Elasticsearch API key, which can only be created via an API call, see here:

POST /_security/api_key
{
  "name": "my-api-key",
  "expiration": "1d",   
  "role_descriptors": { 
    "role-a": {
      "cluster": ["all"]
      
    }
  }
}

Results:

{
  "id" : "cew-IX0BKn301Oe9gPyz",
  "name" : "my-api-key",
  "expiration" : 1637026684066,
  "api_key" : "02I9MJ4OR2GdxYqY3I3cKA"
}

Please don't worry, those are invalid API keys.
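For reference, the same request can be issued from Python via the client's security API; a minimal sketch, assuming a 7.x elasticsearch-py client (the es from the snippet above, authenticated with http_auth):

# Sketch (7.x client assumed): create an Elasticsearch API key from Python
resp = es.security.create_api_key(
    body={
        "name": "my-api-key",
        "expiration": "1d",
        "role_descriptors": {
            "role-a": {
                "cluster": ["all"]
            }
        }
    }
)

# resp contains "id" and "api_key", the pair to pass as api_key=(id, api_key)
print(resp["id"], resp["api_key"])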

Thanks for the reply. I did indeed create the authentication keys using the Elastic Cloud documentation. Nonetheless, I just discovered that directly passing the cloud_id and http_auth into the script at least fixes the connection issue. However, I'm still getting a ConnectionError when I try to ingest some data.

elasticsearch.exceptions.ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7f98913b6c10>: Failed to establish a new connection: [Errno 61] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f98913b6c10>: Failed to establish a new connection: [Errno 61] Connection refused)

Let me get back to you after a bit more debugging. Maybe the way I'm trying to write the data into the Elasticsearch node is problematic. Much thanks.

Update: the problem was somehow in Espandas, the step where I was trying to ingest a pandas DataFrame into Elasticsearch. For now, I'm using an explicit mapping (i.e. explicitly defining the mapping of the document) and only calling functions from elasticsearch-py to create and populate the indices. It works.
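For reference, a minimal sketch of that approach; the index name, fields, DataFrame, and credentials below are made up for illustration (7.x client assumed):

import pandas as pd

from elasticsearch import Elasticsearch, helpers

# Hypothetical stand-in for the real DataFrame
df = pd.DataFrame({"title": ["doc one", "doc two"], "value": [1.0, 2.5]})

# Explicit mapping for the document fields
mapping = {
    "mappings": {
        "properties": {
            "title": {"type": "text"},
            "value": {"type": "float"},
        }
    }
}

es = Elasticsearch(cloud_id="...", http_auth=("elastic", "..."))

# Create the index with the explicit mapping (ignore 400 if it already exists)
es.indices.create(index="my-index", body=mapping, ignore=400)

# Bulk-ingest the DataFrame rows as documents
actions = (
    {"_index": "my-index", "_source": row.to_dict()}
    for _, row in df.iterrows()
)
helpers.bulk(es, actions)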

However, while using API authentication I did encounter the following error:

elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'security_exception', 'action [indices:admin/create] is unauthorized for API key id [XXXXXXXXYYYYYZZZZZ] of user [found-internal-userconsole-proxy], this action is granted by the index privileges [create_index,manage,all]')

Nevertheless, for now I'm using the elastic user and am able to ingest the data (without using Espandas). I needed a solution quite fast, so this works for the time being. However, I would like to have access with API authentication as well, and will look into this further.

Yes, you will need to learn about users and roles.

Then you will need to create the correct API key.
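As an illustration only, the key's role descriptors would need index privileges covering the failing indices:admin/create action; a hedged sketch (role name, index pattern, and privilege list are examples, assuming a 7.x client authenticated as a user allowed to manage API keys):

# Sketch: an API key whose role descriptor also grants index privileges,
# so actions like indices:admin/create are authorized for the key.
resp = es.security.create_api_key(
    body={
        "name": "ingest-api-key",
        "role_descriptors": {
            "ingest-role": {
                "cluster": ["monitor"],
                "indices": [
                    {
                        "names": ["my-index*"],
                        "privileges": ["create_index", "write", "read"],
                    }
                ],
            }
        },
    }
)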
