First, please read the discussion he posted; it is very important.
Second, so it is absolutely clear: if you set up Elasticsearch as you describe above, over HTTP on a truly public IP, anyone on the Internet can read, create, alter, and delete your data. At the very least, secure it with SSL and Basic Authentication.
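As a minimal sketch of what that could look like in elasticsearch.yml, assuming a version where the security features are available (the keystore filename below is a placeholder for your own certificate store):

```yaml
# elasticsearch.yml -- minimal sketch: require authentication and serve HTTPS
# The keystore path/filename is a placeholder; adjust for your install.
xpack.security.enabled: true                     # require username/password on every request
xpack.security.http.ssl.enabled: true            # serve the REST API over TLS
xpack.security.http.ssl.keystore.path: http.p12  # certificate + key, relative to the config dir
```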
This is more likely a network configuration issue than an Elasticsearch issue. While forwarding ports from the internet-facing IP of your router/firewall will allow public clients to access Elasticsearch, it will not allow private/internal clients to reach Elasticsearch via that same public IP. Your router/firewall must support and be configured for NAT "hairpinning" to do that.
Even if you can configure NAT hairpinning, I would recommend against this, as it means all traffic to/from Elasticsearch MUST traverse your router/firewall, only to be turned around and sent back out the same interface that it entered. This unnecessarily adds latency and impacts performance.
A better solution is to leverage DNS to direct traffic to Elasticsearch by hostname or FQDN. Your external DNS provider would be configured to resolve public DNS requests to the public IP, while your internal DNS would be configured to resolve to the private IP. This ensures that both internal and external clients can reach Elasticsearch, even if your router/firewall doesn't support hairpinning, and with the best performance.
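For illustration, split-horizon DNS can be as simple as publishing two different A records for the same name, one in each zone. A sketch assuming BIND-style zone files; the hostname and addresses are placeholders:

```
; External (public) zone -- answers queries coming from the Internet
es.example.com.    IN  A  203.0.113.10   ; router's public IP, port-forwarded to Elasticsearch

; Internal zone -- answers queries coming from the LAN
es.example.com.    IN  A  192.168.0.2    ; Elasticsearch server's private IP
```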
Hello Abdul,
When a service/app starts, it looks for the assigned network interface to listen on. Setting it to 0.0.0.0 means it accepts data from, and can talk to, everyone no matter where the query comes from; so if you have 3-4 network cards on that server with different IPs, data can be inserted from different networks, internal or external, and even from localhost (127.0.0.1). Let's say you have two network cards with IPs 192.168.0.2 and 10.10.0.2, plus the loopback 127.0.0.1. If you set 0.0.0.0 as the IP to listen on, everyone can insert data or query your DB from 10.10.x.x, 192.168.0.x, and 127.0.0.1. If you want it accessible from the Internet (which is not really recommended), you need to know your use case very well.
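In Elasticsearch this is the network.host setting in elasticsearch.yml. A sketch with placeholder addresses:

```yaml
# elasticsearch.yml -- bind-address examples (pick one; addresses are placeholders)
#network.host: 0.0.0.0      # listen on every interface: loopback, LAN, and any public NIC
#network.host: _site_       # special value: bind only to site-local (private) addresses
network.host: 192.168.0.2   # bind only to this one internal interface
```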
If you are behind a personal gateway (a router provided by your ISP), I would set the IP in the config to the server's internal IP (192.168.x.x), and I recommend port forwarding in the gateway from the public IP to your local one on only the specific ports you need. DO NOT point the gateway's DMZ at your Elasticsearch internal IP, as all the ports of that machine will be exposed to the Internet and you open the door to more vulnerabilities than just the database. You can then create rules in the machine's firewall to allow access on those ports only from specific machines, using MAC address filtering, IP filtering, or any other trust mechanism you see fit (TLS and SSL auth may add another layer of security that you need); see the sketch after this paragraph.
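For illustration, host-level IP filtering with iptables could look like this; the client address is a placeholder for whichever machines you actually trust:

```sh
# Allow Elasticsearch HTTP (9200) only from one trusted client IP; drop everyone else.
# 203.0.113.5 is a placeholder for the trusted machine's address.
iptables -A INPUT -p tcp --dport 9200 -s 203.0.113.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP
```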
If you are behind a corporate firewall, make sure your IT department creates comprehensive rules from the public IP to your Elasticsearch server.
If it were me wanting to give a customer or a friend, who has no access to my private network, access to my data using Kibana, Grafana, or any other custom tool, I would put the Kibana machine on a VPN into my network, make it talk to Elasticsearch, and expose that Kibana/Grafana/whatever-tool machine only on port 5601 (or whatever proxy port you use). I would still enforce authentication between the two with TLS, and also activate user authentication with well-defined user rights per group, so that not everyone can write to or delete from your DB. A sketch of the Kibana side follows below.
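A minimal sketch of the Kibana side of that setup, assuming security is enabled on the cluster; the host, credentials, and certificate path are all placeholders:

```yaml
# kibana.yml -- sketch: only this machine is exposed (port 5601), and it talks to
# Elasticsearch over TLS with its own credentials. All values are placeholders.
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://192.168.0.2:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "changeme"
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.crt"]
```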
This is if you want to play and see how things work.
As a best practice, I would follow one of the main rules of database security: NEVER EXPOSE YOUR DB TO THE INTERNET unless it's a honeypot.