After enabling X-Pack I can't change passwords

After enabling X-Pack I can't change passwords. Commenting out discovery.seed_hosts and
cluster.initial_master_nodes did not help!

I get this error when trying to set up the passwords:

/usr/share/elasticsearch/bin# sudo ./elasticsearch-setup-passwords interactive

Failed to determine the health of the cluster running at http://172.31.47.37:9200
Unexpected response code [503] from calling GET http://172.31.47.37:9200/_cluster/health?pretty
Cause: master_not_discovered_exception

It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.

Do you want to continue with the password setup process [y/N]y

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:

Unexpected response code [503] from calling PUT http://172.31.47.37:9200/_security/user/apm_system/_password?pretty
Cause: Cluster state has not been recovered yet, cannot write to the [null] index

Possible next steps:
* Try running this tool again.
* Try running with the --verbose parameter for additional messages.
* Check the elasticsearch logs for additional error details.
* Use the change password API manually.

ERROR: Failed to set password for user [apm_system].

I don't need Elasticsearch to run as a cluster. I will be using the current node only. Here's my elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================

# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: master1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: ["0.0.0.0"]
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#discovery.type: single-node
discovery.seed_hosts: ["127.0.0.1","[::1]"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["15.164.230.26"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

I found that some people resolved a similar problem by adding discovery.type: single-node. However, I couldn't start Elasticsearch with this line inserted due to an error.
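
For reference, my understanding is that the single-node variant of the discovery section would look roughly like this, with the other discovery settings commented out (I am not sure whether leaving them uncommented alongside single-node is what caused my startup error):

discovery.type: single-node
#discovery.seed_hosts: ["127.0.0.1","[::1]"]
#cluster.initial_master_nodes: ["15.164.230.26"]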

Thank you!

The real problem is not with changing passwords, but rather that the cluster has not formed yet, as indicated by the error message:

Failed to determine the health of the cluster running at http://172.31.47.37:9200
Unexpected response code [503] from calling GET http://172.31.47.37:9200/_cluster/health?pretty
Cause: master_not_discovered_exception

If this node has previously joined a cluster, you need to bring up the other nodes (especially the master-eligible ones) to let them form the cluster first. A node persists its cluster membership state, so it survives across restarts. Until this is fixed, nothing else can really be done. Based on the elasticsearch.yml file, is the master node on 15.164.230.26?
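
As a quick sanity check (assuming the node still answers on the address you used above; if authentication is already being enforced you may need to add -u elastic with the bootstrap password), you can ask the node directly for its health and its view of the master:

curl 'http://172.31.47.37:9200/_cluster/health?pretty'
curl 'http://172.31.47.37:9200/_cat/master?v'

While no master is elected, _cat/master will typically fail with the same master_not_discovered_exception you saw above.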

If this is just a test environment and you don't care about the data, you can also delete everything under the data directory, comment out discovery.seed_hosts and cluster.initial_master_nodes in the config file, and restart the node, as sketched below. Please NOTE this means data loss, so proceed only if you are sure the data is no longer needed.
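
A minimal sketch of those steps, assuming a package-based install managed by systemd, the data path from the config above, and the usual config location /etc/elasticsearch/elasticsearch.yml (again: this wipes ALL data on this node):

sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*        # deletes all indices and the persisted cluster state
sudo vi /etc/elasticsearch/elasticsearch.yml  # comment out discovery.seed_hosts and cluster.initial_master_nodes
sudo systemctl start elasticsearch

If the node then starts and elects itself as master, elasticsearch-setup-passwords should be able to run against a healthy cluster.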
