Unable to create client connection; SSL certificate verify failed

Finally, I have version 5.8.1 in my /opt directory!

When I ran a --dry-run against my config and actions file, I got this error:

root@ba08c43b9d35:/opt/elasticsearch-curator/Curator# curator  --config /opt/elasticsearch-curator/Curator/curator.yml --dry-run /opt/elasticsearch-curator/Curator/actions_file.yml
2020-05-12 18:50:55,061 DEBUG                curator.cli                    run:110  Client and logging options validated.
2020-05-12 18:50:55,061 DEBUG                curator.cli                    run:114  default_timeout = 30
2020-05-12 18:50:55,061 DEBUG                curator.cli                    run:118  action_file: /opt/elasticsearch-curator/Curator/actions_file.yml
2020-05-12 18:50:55,067 DEBUG                curator.cli                    run:120  action_config: {'actions': {1: {'action': 'delete_indices', 'description': 'Delete if indices consume more than 300MB.', 'options': {'ignore_empty_list': True}, 'filters': [{'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs'}, {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h'}]}}}
2020-05-12 18:50:55,067 DEBUG     curator.validators.SchemaCheck               __init__:26   Schema: {'actions': <class 'dict'>}
2020-05-12 18:50:55,067 DEBUG     curator.validators.SchemaCheck               __init__:27   "Actions File" config: {'actions': {1: {'action': 'delete_indices', 'description': 'Delete if indices consume more than 300MB.', 'options': {'ignore_empty_list': True}, 'filters': [{'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs'}, {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h'}]}}}
2020-05-12 18:50:55,067 DEBUG     curator.validators.SchemaCheck               __init__:26   Schema: {'action': Any(In(['alias', 'allocation', 'close', 'cluster_routing', 'create_index', 'delete_indices', 'delete_snapshots', 'forcemerge', 'freeze', 'index_settings', 'open', 'reindex', 'replicas', 'restore', 'rollover', 'shrink', 'snapshot', 'unfreeze']), msg="action must be one of ['alias', 'allocation', 'close', 'cluster_routing', 'create_index', 'delete_indices', 'delete_snapshots', 'forcemerge', 'freeze', 'index_settings', 'open', 'reindex', 'replicas', 'restore', 'rollover', 'shrink', 'snapshot', 'unfreeze']")}
2020-05-12 18:50:55,067 DEBUG     curator.validators.SchemaCheck               __init__:27   "action type" config: {'action': 'delete_indices', 'description': 'Delete if indices consume more than 300MB.', 'options': {'ignore_empty_list': True}, 'filters': [{'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs'}, {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h'}]}
2020-05-12 18:50:55,068 DEBUG     curator.validators.SchemaCheck               __init__:26   Schema: {'action': Any(In(['alias', 'allocation', 'close', 'cluster_routing', 'create_index', 'delete_indices', 'delete_snapshots', 'forcemerge', 'freeze', 'index_settings', 'open', 'reindex', 'replicas', 'restore', 'rollover', 'shrink', 'snapshot', 'unfreeze']), msg="action must be one of ['alias', 'allocation', 'close', 'cluster_routing', 'create_index', 'delete_indices', 'delete_snapshots', 'forcemerge', 'freeze', 'index_settings', 'open', 'reindex', 'replicas', 'restore', 'rollover', 'shrink', 'snapshot', 'unfreeze']"), 'description': Any(<class 'str'>, <class 'str'>, msg=None), 'options': <class 'dict'>, 'filters': <class 'list'>}
2020-05-12 18:50:55,068 DEBUG     curator.validators.SchemaCheck               __init__:27   "structure" config: {'action': 'delete_indices', 'description': 'Delete if indices consume more than 300MB.', 'options': {'ignore_empty_list': True}, 'filters': [{'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs'}, {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h'}]}
2020-05-12 18:50:55,071 DEBUG     curator.validators.SchemaCheck               __init__:26   Schema: {'allow_ilm_indices': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a65a70>, msg=None), msg=None), 'continue_if_exception': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a65c20>, msg=None), msg=None), 'disable_action': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a65dd0>, msg=None), msg=None), 'ignore_empty_list': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a65f80>, msg=None), msg=None), 'timeout_override': Any(Coerce(int, msg=None), None, msg=None)}
2020-05-12 18:50:55,071 DEBUG     curator.validators.SchemaCheck               __init__:27   "options" config: {'ignore_empty_list': True}
2020-05-12 18:50:55,072 DEBUG     curator.validators.SchemaCheck               __init__:26   Schema: <function Filters.<locals>.f at 0x7fb679a655f0>
2020-05-12 18:50:55,072 DEBUG     curator.validators.SchemaCheck               __init__:27   "filters" config: [{'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs'}, {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h'}]
2020-05-12 18:50:55,072 DEBUG     curator.validators.SchemaCheck               __init__:26   Schema: {'filtertype': Any(In(['age', 'alias', 'allocated', 'closed', 'count', 'empty', 'forcemerged', 'ilm', 'kibana', 'none', 'opened', 'pattern', 'period', 'shards', 'space', 'state']), msg="filtertype must be one of ['age', 'alias', 'allocated', 'closed', 'count', 'empty', 'forcemerged', 'ilm', 'kibana', 'none', 'opened', 'pattern', 'period', 'shards', 'space', 'state']"), 'kind': Any('prefix', 'suffix', 'timestring', 'regex', msg=None), 'value': Any(<class 'str'>, msg=None), 'exclude': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a69b90>, msg=None), msg=None)}
2020-05-12 18:50:55,072 DEBUG     curator.validators.SchemaCheck               __init__:27   "filter" config: {'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs'}
2020-05-12 18:50:55,073 DEBUG     curator.validators.filters                      f:48   Filter #0: {'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs', 'exclude': False}
2020-05-12 18:50:55,073 DEBUG     curator.validators.SchemaCheck               __init__:26   Schema: {'filtertype': Any(In(['age', 'alias', 'allocated', 'closed', 'count', 'empty', 'forcemerged', 'ilm', 'kibana', 'none', 'opened', 'pattern', 'period', 'shards', 'space', 'state']), msg="filtertype must be one of ['age', 'alias', 'allocated', 'closed', 'count', 'empty', 'forcemerged', 'ilm', 'kibana', 'none', 'opened', 'pattern', 'period', 'shards', 'space', 'state']"), 'disk_space': Any(Coerce(float, msg=None), msg=None), 'reverse': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a69290>, msg=None), msg=None), 'use_age': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a693b0>, msg=None), msg=None), 'exclude': Any(<class 'bool'>, All(Any(<class 'str'>, msg=None), <function Boolean at 0x7fb679a6d0e0>, msg=None), msg=None), 'threshold_behavior': Any('greater_than', 'less_than', msg=None), 'source': Any('name', 'creation_date', 'field_stats', msg=None), 'stats_result': Any('min_value', 'max_value', msg=None), 'timestring': Any(<class 'str'>, msg=None)}
2020-05-12 18:50:55,073 DEBUG     curator.validators.SchemaCheck               __init__:27   "filter" config: {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h'}
2020-05-12 18:50:55,074 DEBUG     curator.validators.filters                      f:48   Filter #1: {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h', 'stats_result': 'min_value', 'exclude': False, 'reverse': True, 'threshold_behavior': 'greater_than'}
2020-05-12 18:50:55,074 DEBUG                curator.cli                    run:123  Full list of actions: {1: {'action': 'delete_indices', 'description': 'Delete if indices consume more than 300MB.', 'options': {'ignore_empty_list': True, 'disable_action': False, 'continue_if_exception': False, 'allow_ilm_indices': False, 'timeout_override': None}, 'filters': [{'filtertype': 'pattern', 'kind': 'suffix', 'value': '-logs', 'exclude': False}, {'filtertype': 'space', 'disk_space': 0.3, 'source': 'name', 'use_age': True, 'timestring': '%Y.%m.%d.%h', 'stats_result': 'min_value', 'exclude': False, 'reverse': True, 'threshold_behavior': 'greater_than'}]}}
2020-05-12 18:50:55,074 DEBUG                curator.cli                    run:128  action_disabled = False
2020-05-12 18:50:55,074 DEBUG                curator.cli                    run:132  continue_if_exception = False
2020-05-12 18:50:55,074 DEBUG                curator.cli                    run:134  timeout_override = None
2020-05-12 18:50:55,074 DEBUG                curator.cli                    run:136  ignore_empty_list = True
2020-05-12 18:50:55,074 DEBUG                curator.cli                    run:138  allow_ilm_indices = False
2020-05-12 18:50:55,074 INFO                 curator.cli                    run:148  Preparing Action ID: 1, "delete_indices"
2020-05-12 18:50:55,075 INFO                 curator.cli                    run:162  Creating client object and testing connection
2020-05-12 18:50:55,075 DEBUG              curator.utils             get_client:809  kwargs = {'hosts': ['elk'], 'port': 9200, 'use_ssl': True, 'certificate': '/opt/kibana/config/certs/elastic-ca.pem', 'ssl_no_validate': True, 'http_auth': 'elastic:esqelk', 'master_only': False, 'aws_sign_request': False, 'client_key': None, 'url_prefix': '', 'client_cert': None, 'aws_key': None, 'aws_secret_key': None, 'aws_token': None, 'timeout': 30}
2020-05-12 18:50:55,075 DEBUG              curator.utils             get_client:871  Checking for AWS settings
2020-05-12 18:50:55,082 DEBUG              curator.utils             get_client:886  "requests_aws4auth" module present, but not used.
2020-05-12 18:50:55,082 INFO               curator.utils             get_client:903  Instantiating client object
/opt/elasticsearch-curator/lib/elasticsearch/connection/http_requests.py:105: UserWarning: Connecting to https://elk:9200 using SSL with verify_certs=False is insecure.
2020-05-12 18:50:55,083 INFO               curator.utils             get_client:906  Testing client connectivity
2020-05-12 18:50:55,130 ERROR              curator.utils             get_client:915  HTTP N/A error: HTTPSConnectionPool(host='elk', port=9200): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb679a14b90>: Failed to establish a new connection: [Errno -2] Name or service not known'))
2020-05-12 18:50:55,130 CRITICAL           curator.utils             get_client:923  Curator cannot proceed. Exiting.

My config file is:

# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - ***
  port: 9200
  url_prefix:
  use_ssl: True
  certificate: /opt/kibana/config/certs/elastic-ca.pem
  ssl_no_validate: True
  http_auth: ***:****
  timeout: 30
  master_only: False

logging:
  loglevel: DEBUG
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

This seems pretty straightforward. Curator cannot resolve whatever *** is in the hosts block of your client configuration file. It appears to be an invalid or unreachable endpoint.

Have you tried putting in an IP address rather than a host? Since ssl_no_validate is True, it won't matter that it's not a resolvable name.
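
A quick sanity check, since the log shows "Name or service not known" for the host elk: see whether that name resolves at all from the machine running Curator. getent goes through the system resolver (essentially the same path the Python client uses), and nslookup queries DNS directly:

# does the hostname from curator.yml resolve on this machine?
getent hosts elk

# fallback: ask DNS directly
nslookup elk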

I tried replacing the hostname *** with the IP of the Docker host where Elasticsearch, Logstash, and Kibana are running. After switching to the IP I got the error below, so I am wondering if it is an SSL validation issue.

2020-05-12 20:04:53,644 INFO               curator.utils             get_client:903  Instantiating client object
/opt/elasticsearch-curator/lib/elasticsearch/connection/http_requests.py:105: UserWarning: Connecting to https://192.168.xx.xxx:9200 using SSL with verify_certs=False is insecure.
2020-05-12 20:04:53,645 INFO               curator.utils             get_client:906  Testing client connectivity
2020-05-12 20:04:57,666 ERROR              curator.utils             get_client:915  HTTP N/A error: HTTPSConnectionPool(host='192.168.xx.xxx', port=9200): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fc64e65b610>: Failed to establish a new connection: [Errno 113] No route to host'))
2020-05-12 20:04:57,667 CRITICAL           curator.utils             get_client:923  Curator cannot proceed. Exiting.

config.yml

client:
  hosts:
    - 192.168.xx.xxx

How can I check whether the hosts value *** is a valid endpoint, or reachable at all?

Regarding this, I removed the path after "certificate", but I still get the same errors.

We want use_ssl: True so Curator will connect with HTTPS instead of HTTP.
If ssl_no_validate: True is set, Curator will not validate the certificate it receives from Elasticsearch.

The verification is for the hostname; that is what we want disabled. SSL is still turned on and the CA is used, so I think I should provide the path under certificate as I have in the config.yml file above.

You do not need a certificate at all to connect to an SSL server if you do not plan on verifying the validity of the certificate. Python and urllib3 will yield all kinds of error messages when you attempt to connect to an SSL endpoint with verification turned off, which is what you seem to be doing. Regardless, I suggest you disable, remove, or set to empty the certificate configuration parameter if you also have ssl_no_validate: True. The certificate is pointless without verification/validation.

In fact, I suggest disabling all of the certificate settings until you can actually connect to Elasticsearch. Once you're connecting, then you can worry about certificates.
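
For this stage, a stripped-down client block might look like the following (a sketch based on the config above, with certificate left empty until validation is re-enabled):

client:
  hosts:
    - 192.168.xx.xxx
  port: 9200
  url_prefix:
  use_ssl: True
  certificate:
  ssl_no_validate: True
  http_auth: ***:****
  timeout: 30
  master_only: False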

So, let's back up and make sure your machine connectivity is working.

What do you get if you run:

curl -vv -k https://ELASTICSEARCH_IP:9200/

at the command line on the same machine where you're installing Curator? (replacing ELASTICSEARCH_IP with the IP of your Elasticsearch endpoint)

In elasticsearch.yml, network.host is 192.168.a.b. When I run the command with this IP, I get the following error:

root@ba08c43b9d35:/opt/elasticsearch/config# curl -vv -k https://192.168.a.b:9200/
*   Trying 192.168.a.b...
* TCP_NODELAY set
* connect to 192.168.a.b port 9200 failed: Connection refused
* Failed to connect to 192.168.a.b port 9200: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.a.b port 9200: Connection refused

And thanks a lot for explaining the steps as we go; I am learning all this as a newbie.

This is great, I agree. certificate in config.yml is empty.

I don't understand why the connection is refused. I am hosting my ELK stack inside a Docker container, which I PuTTY into at 192.168.dd.eee.
The site where I see Kibana Discover and the aggregated logs is, for instance, "elkelkelk:5602". I can still open and use it, which shows that Elasticsearch is running.

root@ba08c43b9d35:/opt/elasticsearch# service elasticsearch  status
 * elasticsearch is running

This isn't merely an SSL problem. This is simple connectivity at this point.

Are you certain this is the Elasticsearch server? There does not appear to be any service listening on that IP and port.

Could this be the Kibana server IP? Make sure that the IP and port are what are configured for Elasticsearch.

Docker is a different thing entirely. If you do not have the port forwarded to be reachable on the outside, Elasticsearch will be running inside the container, but not available to anyone on the outside.

How did you launch your docker run command?

Does "Elasticsearch endpoint" mean the IP Elasticsearch is on, as set in elasticsearch.yml under network.host? I want to make sure I am using the right IP address.

When using Docker, the "Elasticsearch endpoint" can mean different things. In your case, if you launched docker run -p 9200:9200 (other options) containername:tag, where containername is some container running Elasticsearch, then your endpoint will be the docker host IP and the port will be 9200. However, if you did not map the port(s), Elasticsearch will only be running inside the container and will be unreachable by anything outside it.

This is the docker run command I ran.

 docker run -d --name esqelk -p 5602:5601 -p 9201:9200 -p 5045:5044 -it -e LOGSTASH_START=0 -e KIBANA_START=0 esqelkv9

This is why you can't reach Elasticsearch on 9200: you mapped it to 9201.

Likewise, Kibana is on 5602, and external Beats traffic maps to 5045.
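
If you ever need to double-check the mappings, Docker can list them directly (run on the docker host; esqelk is the container name from your run command):

# per-container port mappings
docker port esqelk

# or an overview of every running container
docker ps --format "table {{.Names}}\t{{.Ports}}"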

Now that you know this, try the command again:

curl -vv -k https://192.168.a.b:9201/

One more thing: sorry, I missed this detail. In elasticsearch.yml, this is the host and port:

#network.host: 192.168.a.b
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200

It is not 192.168.a.b; I didn't notice that line was commented out, and 0.0.0.0 is the actual value.
I tried the curl command with 0.0.0.0 and still got the same error:

root@ba08c43b9d35:/opt/elasticsearch/config# curl -vv -k https://0.0.0.0:9201/
*   Trying 0.0.0.0...
* TCP_NODELAY set
* connect to 0.0.0.0 port 9201 failed: Connection refused
* Failed to connect to 0.0.0.0 port 9201: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 0.0.0.0 port 9201: Connection refused

You can't reach 0.0.0.0. That simply means "bind on every available IP address."

So it should still be:

curl -vv -k https://192.168.a.b:9201/

or whatever the IP of the docker host is.
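
If you're unsure which host address answers, you can also list what is actually listening on the docker host (ss is standard on modern Linux; netstat -tlnp is the older equivalent):

# with Docker's default userland proxy, docker-proxy should appear bound to 0.0.0.0:9201
ss -tlnp | grep 9201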

So I ran curl with 192.168.dd.eee:9201. This was the error:

root@ba08c43b9d35:/opt/elasticsearch/config# curl -vv -k https://192.168.dd.eee:9201/
*   Trying 192.168.dd.eee...
* TCP_NODELAY set
* connect to 192.168.dd.eee port 9201 failed: No route to host
* Failed to connect to 192.168.dd.eee port 9201: No route to host
* Closing connection 0
curl: (7) Failed to connect to 192.168.dd.eee port 9201: No route to host

What is the IP of the docker host machine itself (not the container)? Use that IP address.

Got it. I ran the ip a command, and under the docker0 network adapter:

3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:2f:4c:e3:71 brd ff:ff:ff:ff:ff:ff
    inet 172.yy.y.y/16 brd 172.17.255.255 scope global docker0

Using this inet address, I ran the curl command:

[root@localhost ~]# curl -vv -k https://172.yy.y.y:9201/
* About to connect() to 172.yy.y.y port 9201 (#0)
*   Trying 172.yy.y.y...
* Connected to 172.yy.y.y (172.yy.y.y) port 9201 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate:
*       subject: CN=instance
*       start date: Jan 25 03:03:28 2020 GMT
*       expire date: Jan 24 03:03:28 2023 GMT
*       common name: instance
*       issuer: CN=Elastic Certificate Tool Autogenerated CA
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.yy.y.y:9201
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< content-type: application/json; charset=UTF-8
< content-length: 263
<
* Connection #0 to host 172.yy.y.y left intact
{"error":{"root_cause":[{"type":"security_exception","reason":"action [cluster:monitor/main] is unauthorized for user [anonymous_user]"}],"type":"security_exception","reason":"action [cluster:monitor/main] is unauthorized for user [anonymous_user]"},"status":403}

But even if we are inside the container, can we not connect to the default port? Why do we have to use 9201 instead of 9200 from inside the container?
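
That 403 is actually progress: the endpoint is reachable and TLS works; only anonymous access is denied. The same call with the credentials Curator is configured with (the http_auth user from curator.yml) should return the cluster banner instead; substitute your real password:

curl -vv -k -u elastic:PASSWORD https://172.yy.y.y:9201/

As for 9200 vs. 9201: inside the container, Elasticsearch still listens on 9200 (the right-hand side of -p 9201:9200), so from a shell inside the container curl -vv -k https://localhost:9200/ should also work; 9201 only exists on the host side of the mapping.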

We have deviated into a different topic from where we started.

The original topic was about SSL connectivity for Curator. Now that we have dug deeper, it's clear that the issue is much more about Docker and understanding its networking complexities. I recommend starting a new topic with the primary goal of getting a basic curl call to work from a different machine (not a container on the same machine, though that is also important), to ensure remote hosts can reach Elasticsearch consistently. Once that is well understood, adding Curator will be trivial.
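
(On another machine, that test is the same curl as above, e.g. curl -vv -k https://DOCKER_HOST_IP:9201/, substituting the real docker host IP.)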