Hey guys,
I'll start with my main goal: I'm trying to move my Elasticsearch cluster to native security/authentication.
I've recently taken over a project involving an Elasticsearch cluster (Gold plan) and I'm trying to get it working with native security. Currently there is no security implemented at all.
First, I set elasticsearch.yml to support native authentication and enabled security.
This resulted in an error about SSL support not being enabled - once that was fixed, the service came up successfully.
From there I got a ping from our monitoring service that the node I had just changed was no longer recognized as up in the cluster (5 nodes total).
I went to one of the other nodes and ran GET http://<NODE_ADDRESS>:9200/_cluster/health?pretty, which indeed showed 4/5 nodes up.
When trying the same against the node I had just changed, I get an authentication prompt, but I don't have working credentials for it.
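For reference, the health response from the other nodes looks roughly like this, and this is how I sanity-check it (the JSON below uses made-up sample values, not my real cluster's output):

```python
import json

# Sample _cluster/health response (hypothetical values, trimmed to the
# fields I care about) - the real one comes from
# GET http://<NODE_ADDRESS>:9200/_cluster/health?pretty
health_json = """
{
  "cluster_name": "cluster",
  "status": "yellow",
  "number_of_nodes": 4,
  "number_of_data_nodes": 4
}
"""

health = json.loads(health_json)

expected_nodes = 5  # total nodes in the cluster
missing = expected_nodes - health["number_of_nodes"]
print(f"status={health['status']}, missing nodes={missing}")
```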
I googled a bit about this and stumbled on elasticsearch-setup-passwords to set up the passwords again.
When I ran it in interactive mode (bin/elasticsearch-setup-passwords interactive), I got the following:
Failed to determine the health of the cluster running at http://<NODE_ADDRESS>:9200
Unexpected response code [503] from calling GET http://<NODE_ADDRESS>:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.
Do you want to continue with the password setup process [y/N]
I kept going, trying to change the passwords anyway, and got the following:
Unexpected response code [503] from calling PUT http://<NODE_ADDRESS>:9200/_security/user/apm_system/_password?pretty
Cause: Cluster state has not been recovered yet, cannot write to the [null] index
Possible next steps:
* Try running this tool again.
* Try running with the --verbose parameter for additional messages.
* Check the elasticsearch logs for additional error details.
* Use the change password API manually.
ERROR: Failed to set password for user [apm_system].
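For what it's worth, the manual change-password call that the tool suggests would look something like this (a sketch only - <NODE_ADDRESS>, the username, and the passwords here are placeholders, and it would still need valid credentials plus a healthy cluster to actually succeed):

```python
import base64
import json
import urllib.request

node = "http://<NODE_ADDRESS>:9200"         # placeholder address
user, password = "elastic", "bootstrap-pw"  # placeholder credentials
body = json.dumps({"password": "new-apm-pw"}).encode()  # placeholder new password

req = urllib.request.Request(
    f"{node}/_security/user/apm_system/_password",
    data=body,
    method="PUT",
    headers={
        "Content-Type": "application/json",
        # Basic auth header, built by hand so the sketch stays dependency-free
        "Authorization": "Basic "
        + base64.b64encode(f"{user}:{password}".encode()).decode(),
    },
)
# urllib.request.urlopen(req) would send it - skipped here, since it
# needs a live, healthy cluster to respond.
print(req.get_method(), req.full_url)
```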
From googling about this I understood that some people resolved it by running in single-node mode.
Do I have to? I still want to apply this to the whole cluster.
I guess I made a mistake when reconfiguring elasticsearch.yml for single-node mode, since I could not get the node to start.
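For context, this is roughly what I understand single-node mode to mean (a sketch, not my actual file - as far as I know discovery.type: single-node only exists on 7.x, while on 6.x with zen discovery you would instead point discovery at the node itself and drop minimum_master_nodes to 1; I may have this wrong for my version):

```yaml
# 7.x style (sketch):
#discovery.type: single-node

# 6.x zen-discovery style (what I think applies here):
discovery.zen.ping.unicast.hosts: ["IP1"]
discovery.zen.minimum_master_nodes: 1
```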
elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: cluster
#
#
#Server 1
# ------------------------------------ Node ------------------------------------
##
# Use a descriptive name for the node:
#
node.name: NODE1
#
#node.attr.rack: r1
#script.inline: true
#script.indexed: true
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/indexes/
#
# Path to log files:
#
path.logs: /var/log/elasticsearch/
#
#
# Path NFS Backup
#
path.repo: ["/backup/esbackup/"]
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#bootstrap.mlockall: true
#
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.zen.ping.unicast.hosts: ["IP1", "IP2", "IP3", "IP4", "IP5"]
#
discovery.zen.minimum_master_nodes: 3
#discovery.seed_hosts: ["NODE1", "NODE2", "NODE3", "NODE4","NODE5"]
node.master: true
node.data: true
# Bootstrap the cluster using an initial set of master-eligible nodes:
#cluster.initial_master_nodes: ["NODE1", "NODE2", "NODE3", "NODE4","NODE5"]
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
## set Monitoring (enabled)
xpack.monitoring.collection.enabled: true
## set Security (enabled)
xpack.security.enabled: true
# xpack.security.transport.filter.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.client_authentication: optional
xpack:
  security:
    authc:
      realms:
        native:
          native1:
            order: 0
Sorry for the long post - I could really use your help on this.
Thanks!