I've deployed an Elasticsearch cluster into Azure using the ARM template and associated script found here: ARM Template. I used default settings, except for the admin user name and password. After deployment succeeded, I am able to connect to Kibana, but I'm unable to make HTTP requests to the cluster on the Kibana machine's IP, port 9200. I'm using Postman to make the request, and I just get a "Could not get any response" message. I have tried both http and https.
I have added a rule to the Azure network security group allowing my IP on port 9200 through to the Kibana server.
Thanks, Michelle. It was my understanding that the default deployment had the Kibana server set up as a "jumpbox" to act as the gateway for Elasticsearch communication. Is that not correct?
EDIT: Also, my deployment doesn't have an Elastic server. It has the Kibana server and three data servers (data-0, -1, -2). The only public IP in the deployment is for the Kibana server.
@bo.clifton You have a couple of options if you intend to also expose the Elasticsearch cluster (composed of the data nodes) to the public internet:
Deploy an external load balancer by using external as the value for the loadBalancerType parameter (see the parameters fragment after this list)
Deploy Application Gateway by using gateway as the value for the loadBalancerType parameter. Application Gateway has many more parameters to customize the deployment to your needs; from experience, it also takes around 20-30 minutes to deploy.
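For reference, in an ARM template parameters file that choice is a single entry. The fragment below is a sketch only; the template's other required parameters (admin credentials and so on) are omitted:

```
"loadBalancerType": {
  "value": "external"
}
```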
In both scenarios, an internal load balancer is also deployed, to allow Kibana to communicate with the cluster. By default, the template deploys only an internal load balancer.
Be careful about exposing Elasticsearch to the public internet; ensure that you have appropriate Authentication and Authorization controls in place, as well as Transport Layer Security. The Elastic Stack Security features can help with this.
Thanks, @forloop. That's good information. If I set up an external load balancer in my current deployment (there's already the default internal lb in place), would I be able to forward traffic to a specific place to enable queries? Or is it necessary to rebuild the cluster with external lb in the config?
@forloop and @Michelle_Bennett I redeployed the cluster with an external load balancer and I'm able to get a response from the endpoint, but the response just says I'm missing an auth token. I've tried to follow this article but seem to be missing something: Article. If I'm not worried about encrypting the traffic FOR NOW, what further steps would you recommend?
I figured it out. Kind of...I deployed the template with an external load balancer in place with no X-pack plugins. When complete, I can get cluster health from the external IP of the external load balancer.
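For anyone following along, this is a minimal sketch of that check; the IP is a placeholder for the external load balancer's public address, and with no security plugins installed no credentials are needed:

```
# Placeholder: replace with the external load balancer's public IP
curl "http://<external-lb-ip>:9200/_cluster/health?pretty"
```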
Now...if anyone knows how to restrict traffic to only allow from one IP, that would be all I need! (I realize that's probably a separate topic...if it's against forum rules, or if I don't get any answers, I'll move the question somewhere else)
Solved the last bit...just need to assign the created subnet to the created network security group (default name of kibana-nsg), then create a rule allowing ports 9200-9205 (maybe less? 9200-9201 didn't work, but this did...) from your desired IP address. Hope that helps someone!
It would be easier to deploy a new cluster with the template, including an external load balancer.
Due to a limitation in Azure load balancers, only a single load balancer can address a backend pool on a given port. When an external load balancer is deployed, an internal load balancer is also deployed to allow Kibana to communicate with the cluster. The internal load balancer communicates with the backend pool on port 9200, meaning the external load balancer cannot also communicate with the backend pool on port 9200. To work around this, the external load balancer communicates with the backend pool over port 9201, and persistent iptables rules on each VM forward port 9201 to port 9200.
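The forwarding rules are roughly of this shape (a sketch only; the exact persistent rules the template writes on each VM may differ):

```
# Redirect traffic arriving on 9201 (from the external load balancer) to Elasticsearch on 9200
iptables -t nat -A PREROUTING -p tcp --dport 9201 -j REDIRECT --to-ports 9200
```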
To authenticate, requests to the cluster need to include an HTTP Basic Authorization header of the form Authorization: Basic <credentials>, where <credentials> is the base64-encoded form of <username>:<password>, <username> is a configured or built-in user (e.g. elastic) in Elasticsearch, and <password> is the password for that user.
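As an illustration only (the endpoint and password below are placeholders), the credentials can be supplied from the command line either by letting the client build the header or by constructing it explicitly:

```
# -u builds the Basic Authorization header for you
curl -u elastic:<password> "http://<external-lb-ip>:9200/_cluster/health?pretty"

# equivalent, with the header constructed by hand
curl -H "Authorization: Basic $(echo -n 'elastic:<password>' | base64)" \
     "http://<external-lb-ip>:9200/_cluster/health?pretty"
```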
The template takes care of deploying the necessary subnets and network security groups and associating them with the related resources. If you need to limit access to specific IP addresses or sources, you can add additional rules to a deployed network security group to do so.
@forloop thanks a lot for the detailed info. Here's what I had to do to get everything working for me.
After the deployment with the external load balancer, the network security group (NSG) is only associated with the Kibana server's NIC. I associated the deployed subnet with the NSG, and it was then able to filter the port requests for every device on the subnet. I then added the NSG rule I mentioned above, allowing ports 9200-9205 to pass through from my IP address only. Works like a charm.
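If anyone wants to script those two steps instead of using the portal, something along these lines with the Azure CLI should be equivalent (the resource group, VNet, subnet, and source IP are placeholders for whatever your deployment created):

```
# Associate the deployed subnet with the kibana-nsg network security group
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --network-security-group kibana-nsg

# Allow ports 9200-9205 inbound from a single source IP only
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name kibana-nsg \
  --name allow-es-from-my-ip \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes <your-ip> \
  --destination-port-ranges 9200-9205
```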
Since I'm only going to have requests coming from one IP, I don't believe it's necessary to add encryption and authentication. Do you agree? I'm trying to keep the setup simple, but still robust enough to be reliable and secure.