Hello,
I have an ELK stack 7.6.2 with Logstash, an Elasticsearch cluster with 3 nodes, and Kibana. I would like to add security, but the only docs I can find always start "from scratch". I would like an example for an already running cluster, so that I don't mess it up.
Data is flowing from Logstash to ES; I can stop Logstash for a short moment if that helps, but I don't want to lose data. In what order should I proceed?
You can investigate how Logstash and Filebeat (or whatever log shipper you use) behave when the ES cluster is unavailable (in-memory buffering, backpressure, etc.), but the most likely outcome is that you will eventually lose logs unless you provide some kind of queuing.
The easiest solution is to enable Logstash's persistent queue feature; that way, while the ES cluster is restarting, events accumulate on disk.
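For reference, that is only a couple of settings in logstash.yml. A minimal sketch (the size and path below are placeholders, tune them for your own volume):

```
# logstash.yml -- persistent queue; the values here are only examples
queue.type: persisted
queue.max_bytes: 4gb                  # how much data may pile up on disk while ES is down
path.queue: /var/lib/logstash/queue   # optional; defaults to path.data/queue
```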
So I think the following could work:
1. Enable the PQ on all Logstash nodes (set the PQ size according to your needs).
2. Configure SSL on all Elasticsearch nodes.
3. Configure SSL on all Logstash nodes (see the sketch after this list).
4. Restart the Elasticsearch nodes.
5. Restart the Logstash nodes.
EDIT: and don't forget to configure Filebeat for SSL too.
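To be concrete about step 3: on the Logstash side this mostly means pointing the elasticsearch output at https, giving it the CA, and adding credentials. A minimal sketch, using the host from your cluster; the CA path, user name and password are placeholders:

```
# Logstash pipeline output -- adapt the paths and credentials to your setup
output {
  elasticsearch {
    hosts    => ["https://192.168.209.210:9200"]
    ssl      => true
    cacert   => "/etc/logstash/certs/ca.crt"  # CA that signed the Elasticsearch certificates
    user     => "logstash_writer"             # placeholder; needs write privileges on your indices
    password => "changeme"
  }
}
```

Filebeat gets the equivalent in its own output section (username, password and ssl.certificate_authorities).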
Hi
Thanks for answering. OK, but what about the Elasticsearch restart? That's the part that worries me the most. Do I go node by node, or do I need to edit all three elasticsearch.yml files first? I would like a more precise procedure.
I already use queuing, but with RabbitMQ:
it's logs -> RabbitMQ -> Logstash -> ES <- Kibana
When you say SSL on Logstash, what does that mean? I did a test on my dev environment (one node for everything), and it seems I just need to add a user/password in the output section?
Anyway, thanks again for taking the time to answer me.
Best regards
You can edit elasticsearch.yml at any time; the changes only take effect on restart.
EDIT: so in my post above, step 2 is editing the elasticsearch.yml files (and doing all the SSL stuff: creating certs, copying them to the nodes, etc.).
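For 7.6 that typically boils down to something like the following on each node. This is only a sketch, assuming PKCS#12 certificates generated with elasticsearch-certutil and the default file names from the docs:

```
# run once, then copy elastic-certificates.p12 into the config dir of all three nodes
bin/elasticsearch-certutil ca --out elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --out elastic-certificates.p12

# elasticsearch.yml -- added on every node
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```

Since the transport layer is what the nodes use to talk to each other, all three nodes need these settings and the certificate before they can form the cluster again, which is why you edit all three elasticsearch.yml files and then restart the nodes.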
Hi, let's forget about SSL between Logstash and Kibana for now. I think I have an issue.
If I try to run /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
I get:
Unexpected response code [503] from calling GET http://192.168.209.210:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.
Do you want to continue with the password setup process [y/N]y
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Unexpected response code [503] from calling PUT http://192.168.209.210:9200/_security/user/apm_system/_password?pretty
Cause: Cluster state has not been recovered yet, cannot write to the [null] index
Possible next steps:
* Try running this tool again.
* Try running with the --verbose parameter for additional messages.
* Check the elasticsearch logs for additional error details.
* Use the change password API manually.
ERROR: Failed to set password for user [apm_system].
But if I try to query my cluster, I get:
security_exception","reason":"missing authentication credentials for REST request [/_cluster/settings]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
So what do I need to do?
I'm confused ...