Elasticsearch is a great utility for setting up search, and the official Docker containers make remote deployment a breeze. So far my team has succeeded in deploying the Docker containers remotely with xpack.security.enabled=false, which allows REST calls to be made without passwords and over http instead of https.
When using security, i.e. with xpack.security.enabled=true, we are finding the setup very difficult and confusing on remote servers. Every tutorial and set of instructions we find seems to rely either on docker exec -it, i.e. in-container commands run in an interactive terminal, which we cannot get in our use case on remote servers, or on similar interaction with the node that requires the user to answer a prompt (for example, to supply a file location or a password) before the Elasticsearch server can proceed. Additionally, some documentation suggests that the default password of the superuser -u elastic is changeme, although other documentation does not convey this.
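For context, this is roughly the kind of non-interactive, environment-variable-only launch we are aiming for; a minimal single-node sketch assuming the official image (the version tag and password below are placeholders of ours, not values from any documentation):

    # Single-node Elasticsearch with security enabled, configured entirely
    # through environment variables so that no interactive prompt is needed.
    # ELASTIC_PASSWORD sets the bootstrap password for the built-in elastic user.
    docker run -d --name es01 -p 9200:9200 \
      -e discovery.type=single-node \
      -e xpack.security.enabled=true \
      -e ELASTIC_PASSWORD=changeme \
      docker.elastic.co/elasticsearch/elasticsearch:8.11.1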
To what tutorials and troubleshooting documentation do I point my team for our use case?
Here are examples of documentation that require interactive terminals, which we do not have:
For introduction to Security:
For SSL/HTTPS:
Thank you! We are grateful for your close advisory help.
This is helpful, though it seems somewhat more complex than should be necessary. Do you not agree?
In fact, if we drill down into that docker-compose.yml file, it appears that simply having the environment variable ELASTICSEARCH_PASSWORD set before the containers initialize is sufficient. (This launches two containers for Elasticsearch and one for Kibana.)
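If I am reading the stock file correctly, the Elasticsearch nodes read ELASTIC_PASSWORD and the Kibana container reads ELASTICSEARCH_PASSWORD, both ultimately fed from a .env file next to the compose file, so the whole launch reduces to something like this sketch (the variable names are my reading of that file, the values are placeholders, and the stock .env also carries version, port, and memory settings not shown here):

    # Supply the passwords up front so the containers come up without prompts.
    cat >> .env <<'EOF'
    ELASTIC_PASSWORD=changeme
    KIBANA_PASSWORD=changeme
    EOF
    docker compose up -d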
Launching this on GCP with gcloud is giving me difficulty. Previously I had been using gcloud compute instances create-with-container, which is a simple conduit from my command line to a specific virtual machine. It bypasses any need for me to build my own Docker images, be responsible for those containers (e.g. in a registry), or manage their deployment server-side (e.g. with Cloud Run). All of the examples and tutorials I am finding for using a docker-compose.yml on GCP require Cloud Build and a cloudbuild.yml file to do all of that management. This seems unnecessarily complex for deploying a runtime script after a Docker container has been deployed. Perhaps I am suffering from substandard search results. Can you help me?
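For concreteness, this is the sort of one-liner I have been relying on so far and would like to keep using (the instance name, zone, and version tag are placeholders; the container env vars mirror the single-node sketch above):

    # Current workflow: one VM, one container, no registry or Cloud Build involved.
    gcloud compute instances create-with-container es-demo \
      --zone=us-central1-a \
      --container-image=docker.elastic.co/elasticsearch/elasticsearch:8.11.1 \
      --container-env=discovery.type=single-node,xpack.security.enabled=true,ELASTIC_PASSWORD=changeme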
This docker-compose setup, with two Elasticsearch nodes and one Kibana node, is sufficient and appears to be the standard, though it is currently somewhat complex and relies on trusting the software stack more than I had hoped a tutorial would. Do you have a simpler walk-through?
Perhaps even something very simple covering the SSL certificate files, because while testing on my local machine I can docker exec -it enough to run those scripts inside the Docker container. It also appears that the default failure mode when no interactive terminal is available is to leave the Elasticsearch node without a password, which means I can send commands to the remote Docker container just as I did previously. I seem to be confused about how these SSL certificate files are meant to be used by curl. I have walked through the script in the docker-compose file and run those steps successfully; the certs are created in the appropriate places.
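For example, this non-interactive check is how I confirmed they exist (the container name es01 and the path are taken from the stock compose file, so they are assumptions on my part):

    # List the generated certificates without needing an interactive terminal.
    docker exec es01 ls -R /usr/share/elasticsearch/config/certs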
When I try to perform a healthcheck on the Docker container locally, I get cert errors. I apologize: I know this is basic SSL functionality, yet I am hoping you can walk me through it.
From a MacBook:
running

    curl --cacert config/certs/ca/ca.crt https://127.0.0.1:9200

returns

    curl: (77) error setting certificate verify locations:
      CAfile: config/certs/ca/ca.crt
      CApath: none
This appears to be the MacBook failing to find the cert file locally, and of course the file would have no reason to exist at the same Linux path on the MacBook.
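If that reading is right, I assume the fix is to copy the CA out to the MacBook first and point curl at the local copy; a sketch (the container name, in-container path, and password are assumptions based on the stock compose file):

    # Copy the CA certificate from the container to the Mac, then verify against it.
    docker cp es01:/usr/share/elasticsearch/config/certs/ca/ca.crt ./ca.crt
    curl --cacert ./ca.crt -u elastic:changeme https://127.0.0.1:9200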
From inside the Docker container:
running

    curl --cacert config/certs/ca/ca.crt https://127.0.0.1:9200

returns

    curl: (60) SSL certificate problem: self signed certificate in certificate chain
    More details here: https://curl.haxx.se/docs/sslcerts.html

    curl failed to verify the legitimacy of the server and therefore could not
    establish a secure connection to it. To learn more about this situation and
    how to fix it, please visit the web page mentioned above.
This appears to be curl noticing that the CA certificate I am supplying is the same self-signed certificate that the server presents in its chain.
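For completeness, here is the same check written out non-interactively from the host, with the absolute in-container path and authentication as the elastic user (the container name, path, and password are again assumptions based on the stock compose file):

    # Run curl inside the container against the generated CA, using an absolute
    # path and authenticating so the response is the cluster JSON rather than a 401.
    docker exec es01 curl --cacert /usr/share/elasticsearch/config/certs/ca/ca.crt \
      -u elastic:changeme https://localhost:9200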
It would be very helpful to run through these simple steps, for posterity.