Upgrade from 6.7 to 7.1: auth failure

Good morning. I recently upgraded from 6.6 to 6.7 in preparation for the 7.1 upgrade, and I was able to resolve every issue reported by the Upgrade Assistant (the biggest being the native realm yml change). During the upgrade to 7.1 I had to make some changes in my Elasticsearch yml file for the realms and discovery settings, and once those changes were made, Elasticsearch started up just fine.

When I went to upgrade my Kibana instance (which lives on the same node as my Elasticsearch instance), the service started fine, but the web page would no longer load: I kept getting "Kibana server is not ready yet" even though the service had been up for minutes. I noticed a few things in the logs, all pointing to authentication, and I am wondering how something like this could happen during the upgrade to 7.1 when 6.7 had been functioning just fine for days on end. Is this tied to the way my realm is configured? I have provided logs along with my Elasticsearch yml config in the hope that someone can point me in the right direction. Please let me know if you need anything else. Thanks.

One side note: the authentication piece and passwords were never touched and have been in place for months, since the initial installation; I am not sure if that matters. The elastic user is set in my kibana.yml file and has not been touched either.

ES logs

[2019-05-21T08:21:17,111][INFO ][o.e.x.s.a.AuthenticationService] [server.com] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
[2019-05-21T08:21:17,576][INFO ][o.e.x.s.a.AuthenticationService] [server.com] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
[2019-05-21T08:21:17,727][INFO ][o.e.x.s.a.AuthenticationService] [server.com] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]

ES yml

xpack.security.enabled: true
xpack.security.authc.realms:
  native.realm1:
    order: 0

Kibana journal (journalctl -u kibana)

May 21 08:46:53 server.com kibana[6834]: {"type":"log","@timestamp":"2019-05-21T13:46:53Z","tags":["license","warning","xpack"],"pid":6834,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. [security_exception] failed to authenticate user [elastic], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\"
May 21 08:46:53 server.com kibana[6834]: {"type":"log","@timestamp":"2019-05-21T13:46:53Z","tags":["warning","task_manager"],"pid":6834,"message":"PollError [security_exception] failed to authenticate user [elastic], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\"
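For context, Kibana authenticates to Elasticsearch with the credentials configured in kibana.yml. With the elastic user set there, the relevant settings (values masked here, and the host value is illustrative) would look something like this:

```yaml
# kibana.yml (illustrative sketch; actual values masked)
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "<password>"
```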

Just to add: while doing some troubleshooting, I attempted a curl -XGET against my local Elasticsearch host and got the message below. I also tried to perform the reset-password steps from the article linked below, but received the error shown after it. At this point I do not know where to go from here. I did take a snapshot, but I made the mistake of not noting which snapshot ID covered the .security-6 index.

 curl -XGET 'http://localhost:9200'
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
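For comparison, the 401 above is what any unauthenticated request returns once security is enabled; an authenticated request would look something like this (the -u flag makes curl prompt for the password):

```shell
# authenticate as the elastic built-in user; curl prompts for the password
curl -u elastic -XGET 'http://localhost:9200'
```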

https://www.elastic.co/guide/en/elastic-stack-overview/7.1/get-started-built-in-users.html

Failed to determine the health of the cluster running at http://xx.xx.xxx.xx:9200
Unexpected response code [503] from calling GET http://xx.xx.xxx.xx:9200/_cluster/health?pretty
Cause: master_not_discovered_exception

It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.

Can someone assist? Does anyone have an idea on where to go from here?

This seems to be your problem.

Something is preventing your cluster from forming correctly.
The node you are trying to authenticate against cannot access enough master nodes to form a quorum.

The cluster configuration settings were changed in 7.0 to support our new Zen2 discovery & cluster coordination algorithm, and it looks like something has gone wrong with your upgrade to the new settings.
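For reference, in 7.x the old discovery.zen.* settings are replaced by new ones; a minimal single-node sketch (the node name here is illustrative and must match your node.name) would look something like:

```yaml
# 7.x discovery settings (illustrative single-node sketch)
node.name: node-1
# replaces discovery.zen.ping.unicast.hosts
discovery.seed_hosts: ["127.0.0.1"]
# should list node.name values for the first master election
cluster.initial_master_nodes: ["node-1"]
```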

Can you provide your Elasticsearch.yml & tell us about how many nodes/what sort of node you have in your cluster?

Absolutely @TimV. My yml file is below. Please note that this is a single node, and I am running both Kibana and Elasticsearch on it. The intent is eventually to join more nodes, but for now I wanted to run through the upgrade with a single node. (I have masked the IP for security reasons; the IP in the discovery.zen field is the IP of the localhost.)

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["xx.xx.xxx.xx"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
cluster.initial_master_nodes: node-1
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
#--------------------------------- Security ------------------------------------
xpack.security.enabled: true
xpack.security.authc.realms:
  native.realm1:
    order: 0

Update: I commented out the discovery piece in the yml file, then switched over to the file realm to create a superuser from it. It looks like something went wrong with my authentication setup, so I created a new user to connect with. After doing so, I was able to get in just fine. Thanks again @TimV for pointing me in the right direction and helping me figure this out. I had come across your posts on resetting/creating new superusers based on the file realm, and everything looks good now.
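For anyone hitting the same issue, creating a file-realm superuser with the elasticsearch-users tool looks something like this (the username and password here are illustrative; run from the Elasticsearch install directory):

```shell
# create a temporary superuser in the file realm (name is illustrative)
bin/elasticsearch-users useradd tmp_admin -p 'SomeStrongPassword' -r superuser
# verify the user was created
bin/elasticsearch-users list
```

You can then point kibana.yml (or curl -u) at that user while you sort out the native realm.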

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.