Curator - SSL Connection Issue

I just installed Curator 4.2 from the repository, on the ELK server itself. I created the ~/.curator/curator.yml file with the following contents:

# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - server.domain.local (with my actual FQDN here)
  port: 5044
  url_prefix:
  use_ssl: True
  certificate: /etc/pki/tls/certs/logstash-forwarder.crt (my self signed crt file)
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 180
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

I created an action file delete_older_90_days.yml with the following contents:

# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 90 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 90
      exclude:

When I run curator --dry-run ~/.curator/delete_older_90_days.yml I get the following errors:

2017-01-06 10:29:27,298 INFO      Preparing Action ID: 1, "delete_indices"
/opt/elasticsearch-curator/lib/python35.zip/urllib3/connection.py:337: SubjectAltNameWarning:
Certificate for garcia.magicsprings.local has no `subjectAltName`, falling back to check for a
`commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818.
(See https://github.com/shazow/urllib3/issues/497 for details.)

(that warning repeats a few times)

Unable to create client connection to Elasticsearch.  Error: ConnectionError(('Connection aborted.',
RemoteDisconnected('Remote end closed connection without response',))) caused by:
ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without
response',)))

I tried setting ssl_no_validate: True but that did not resolve the issue. Then I get:

/opt/elasticsearch-curator/lib/python35.zip/urllib3/connectionpool.py:843: InsecureRequestWarning:
Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See:
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

(repeated a few times)

Unable to create client connection to Elasticsearch.  Error: ConnectionError(('Connection aborted.',
ConnectionResetError(104, 'Connection reset by peer'))) caused by: ProtocolError(('Connection aborted.',
ConnectionResetError(104, 'Connection reset by peer')))

I'm new to this, so I'm assuming it's something simple. Any suggestions?

I'm very confused here. That certificate is for Beats (or logstash-forwarder, which has been deprecated) to talk to Logstash. Curator doesn't need to talk to Logstash on port 5044; it needs to talk to Elasticsearch, usually on port 9200 or thereabouts.

That said, the warning you're getting indicates that the certificate was created without some fields (a subjectAltName) that newer clients expect. That isn't fatal for now. I'm guessing, though, that the "unable to connect" errors are because you're trying to connect to Logstash rather than Elasticsearch.
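For reference, here's a minimal client block pointed at Elasticsearch instead of Logstash. This is only a sketch: substitute your actual FQDN, and port 9200 assumes the default Elasticsearch HTTP port.

client:
  hosts:
    - server.domain.local    # or 127.0.0.1 if Curator runs on the Elasticsearch node
  port: 9200                 # Elasticsearch HTTP port, not the Beats/Logstash port 5044
  use_ssl: False             # only enable this if Elasticsearch itself is served over HTTPS
  timeout: 180
  master_only: False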

I tried 5044 since 9200 gave me errors originally. I moved back to 9200 and that gives me:

/opt/elasticsearch-curator/lib/python35.zip/elasticsearch/connection/http_urllib3.py:70: UserWarning:
Connecting to server.domain.local using SSL with verify_certs=False is insecure.
Unable to create client connection to Elasticsearch.  Error:
ConnectionError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f35d9d3f438>: Failed to establish
a new connection: [Errno 111] Connection refused) caused by:
NewConnectionError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f35d9d3f438>: Failed to
establish a new connection: [Errno 111] Connection refused)

I also tried connecting on 9200 using no SSL and that gives the same Connection refused errors.

EDIT: my certificate and key have that name because of a guide I followed, but I'm using Beats (winlogbeat) to ship log files.

How is Elasticsearch secured? Is it secured with X-Pack or something else?

Are you using the same certificate (which I find highly unlikely)? What do the Elasticsearch logs say when you try to connect?

What do you see if you run curl -XGET http://server.domain.local:9200? What about curl -XGET https://server.domain.local:9200? (Please substitute the appropriate IP if necessary.)

For example, this is what I see when I hit localhost on my own test cluster:

$ curl -XGET http://127.0.0.1:9200
{
  "name" : "NO479iq",
  "cluster_name" : "untergeek-test",
  "cluster_uuid" : "2I-CSpkcQ-iU56pSgu6wWg",
  "version" : {
    "number" : "5.1.1",
    "build_hash" : "5395e21",
    "build_date" : "2016-12-06T12:36:15.409Z",
    "build_snapshot" : false,
    "lucene_version" : "6.3.0"
  },
  "tagline" : "You Know, for Search"
}

What does your Elasticsearch output block look like in your winlogbeat.yml file? That will indicate whether SSL is involved, and what we're up against.

Apparently I have 9200 completely locked down. Even from localhost I can't get anything on 9200.

This was the guide I used when setting everything up. So, I'm using nginx to secure it.

The Elasticsearch output is commented out in my winlogbeat.yml file. Under the Logstash output I have:

output.logstash:
  # The Logstash hosts
  hosts: ["garcia.magicsprings.local:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["C:\\logstash-forwarder.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

With that guide, you wouldn't have an SSL certificate, but you probably have authentication credentials. That would mean using the http_auth: "user:pass" line in the curator.yml file.
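For example, if Elasticsearch really is behind basic auth, the client block would carry those credentials like this (a sketch only, with placeholder credentials; use the user and password created in the guide):

client:
  hosts:
    - 127.0.0.1
  port: 9200
  http_auth: someuser:somepassword   # placeholder; replace with the guide's credentials
  timeout: 180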

What does your Elasticsearch output block look like in your Logstash configuration file, then?

In my Logstash config file I have:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

That's the input block. I need to see the output block, the one that sends data to Elasticsearch.

Sorry, here's the output block:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I'm looking over that guide, and the nginx proxy isn't there to secure Elasticsearch, but Kibana. Elasticsearch is not behind anything. If you can't reach Elasticsearch on port 9200, then something else is wrong.

What do you see if you try curl -XGET http://127.0.0.1:9200? If you see nothing, which server is Elasticsearch actually running on? You'll need the appropriate IP address, as those .local domain names will not work.
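Either way, it's worth confirming that the Elasticsearch service is actually running and listening on 9200. Assuming a systemd-based install, something like:

sudo systemctl status elasticsearch
sudo ss -tlnp | grep 9200      # or: sudo netstat -tlnp | grep 9200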

I get

curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused

It also fails with localhost, so something I set up is locking down the port; I'm just not exactly sure what.

OK... I feel really stupid now. The Elasticsearch service had crashed. I just restarted it and everything is working now.

You don't need use_ssl: True in Curator, then. Just use hosts: 127.0.0.1 with port: 9200, or leave the port blank (9200 is the default). No need to specify a certificate either.
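Putting it all together, a minimal client block for a local, unsecured Elasticsearch (a sketch based on the settings discussed above) would be:

client:
  hosts:
    - 127.0.0.1
  port: 9200
  use_ssl: False
  ssl_no_validate: False
  http_auth:
  timeout: 180
  master_only: False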

Thank you for taking a look at it. Can't believe I didn't check the services first.
