Connection to ES nodes can only be made with Elastic Superuser 401 Error

I've been working on our encrypted Logstash to Elasticsearch connection and the only way I can get it to work is by using our elastic user (superuser roles). I've followed the instructions in the documentation on creating a logstash_internal user and I am unable to get that user to connect. The error that I keep getting with the logstash_internal user is:

[WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://logstash_internal:xxxxxx@xx.xxx.xx.xxx:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'https://"

Here's what I have for my users, from running GET _xpack/security/user:

"logstash_user" : {
    "username" : "logstash_user",
    "roles" : [
      "logstash_reader",
      "logstash_admin"
    ],
    "full_name" : "Kibana User for Logstash",
    "email" : null,
    "metadata" : { },
    "enabled" : true
  },
  "logstash_internal" : {
    "username" : "logstash_internal",
    "roles" : [
      "logstash_writer"
    ],
    "full_name" : "Internal Logstash User",
    "email" : null,
    "metadata" : { },
    "enabled" : true
  },
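For reference, the documentation creates a user like logstash_internal through the user API. A placeholder version of that request (the password here is not the real one) looks roughly like:

```
POST _xpack/security/user/logstash_internal
{
  "password" : "changeme",
  "roles" : [ "logstash_writer" ],
  "full_name" : "Internal Logstash User"
}
```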

This is what I get when I run GET _xpack/security/role:

"logstash_reader" : {
    "cluster" : [ ],
    "indices" : [
      {
        "names" : [
          "logstash-*"
        ],
        "privileges" : [
          "read",
          "view_index_metadata"
        ]
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : { },
    "transient_metadata" : {
      "enabled" : true
    }
  },
  "logstash_writer" : {
    "cluster" : [
      "manage_index_templates",
      "monitor"
    ],
    "indices" : [
      {
        "names" : [
          "logstash-*"
        ],
        "privileges" : [
          "write",
          "delete",
          "create_index"
        ]
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : { },
    "transient_metadata" : {
      "enabled" : true
    }
  },
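That matches the role the documentation sets up with the role API; for anyone recreating it, the request looks roughly like this:

```
PUT _xpack/security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["logstash-*"],
      "privileges": ["write", "delete", "create_index"]
    }
  ]
}
```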

Here's our Logstash config:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://xx.xxx.xx.xxx:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "logstash_internal"
    password => "xxxxxxxxx"
    ssl => true
    cacert => "/etc/logstash/ca.crt"
  }
}

It seems like everything should be set up for logstash_internal to access our ES cluster, but the 401 errors continue. I'd say it's a config problem, but as mentioned earlier the system has no problems with our elastic user and its superuser role. Any help would be appreciated, and enjoy your day.

Hello,

Can you please verify that the password you are using for logstash_internal is correct? You can use the _authenticate API as follows:

curl -u logstash_internal:xxxxxxxx -X GET "https://xx.xxx.xx.xxx:9200/_xpack/security/_authenticate"

Also, what is your authentication realm configuration in Elasticsearch? Can you share the relevant part of your elasticsearch.yml? The logstash_internal user belongs to the native realm. Is the native realm enabled in Elasticsearch?

elastic is one of the built-in users, so it will always be enabled regardless of your realm configuration; that might explain why it works when you use it in the elasticsearch output plugin of Logstash.

Ioannis,

Setting up the native realm in elasticsearch.yml did the trick. I was under the impression that the native realm was enabled by default, but I was clearly incorrect: it's available by default, not enabled by default, and that's a big difference. For others to learn from, I've posted what my configs ended up looking like for Logstash and Elasticsearch (the relevant parts).

Logstash

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://xx.xxx.xx.xxx:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "logstash_internal"
    password => "Password"
    ssl => true
    cacert => "/etc/logstash/ca.crt"
  }
}

Elasticsearch

###The settings below would be used if we implement --pem formatted CA, certs and keys. -Ryan
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/abc.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/abc.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/ca.crt" ]
#
###Enable TLS and specify the information required to access the nodes certificate.
###The settings below would be used if we implement --pem formatted CA, certs and keys. -Ryan
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/abc.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/abc.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/ca.crt" ]
###These are the settings we need to configure and implement for the Native realm. -Ryan 2018/12/20
xpack.security.authc.realms.native1:
  type: native
  order: 0

Appreciate your time Ioannis!!!


Ioannis,

I fixed this a while ago, but there ended up being an extra step that other users may need to apply. I had to add the metricbeat-* indices to the logstash_writer role in order for anything Metricbeat-related to appear, even when using elastic as a superuser. The logstash_admin_user account also had the logstash_writer role. In the Kibana GUI I went to Management --> Roles --> logstash_writer and, under the indices, added metricbeat-* (write, delete, create_index), and data started populating. Depending on your Kibana version, you may have to click + Add Index Privilege to add the index. Hopefully this points people in the right direction if they run across this problem.
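The same change can be made without the GUI by updating the role through the role API. Note that this API replaces the entire role definition, so the existing logstash-* entry from the role shown earlier has to be included alongside the new pattern; this is a sketch based on that role:

```
PUT _xpack/security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["logstash-*", "metricbeat-*"],
      "privileges": ["write", "delete", "create_index"]
    }
  ]
}
```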

