X-Pack security module for a cluster on the internet


(olivier hodac) #1

Continuing the discussion from Deploy elasticsearch cluster over internet:

Hey,

I'd like to know if X-Pack security takes care of securing the cluster when the nodes communicate over the internet:

I have 3 nodes, each on a different host, with a single network interface and a single public IP address each. I want to create a cluster from them, so they communicate over the internet.

Do I have to set up a firewall (like UFW) on each one, or is X-Pack security enough to encrypt and protect access to the cluster?

thanks


(Ioannis Kakavas) #2

Hi,

From what I see in the previous topic, you are already using X-Pack, so you can configure TLS for your cluster (see in particular the section on enabling TLS for the transport layer).
TLS provides the following properties:

  • Node authentication: each node (mutually) authenticates the other nodes it connects to using public key cryptography.
  • Confidentiality: the communication between nodes is encrypted and protected from eavesdropping.
  • Integrity: the communication between nodes is protected from alteration or undetected loss.
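As a rough sketch of what that looks like in elasticsearch.yml (setting names per the X-Pack docs; the certificate paths and filenames are illustrative, adapt them to your setup and version):

```yaml
# elasticsearch.yml — illustrative transport TLS snippet; paths are placeholders
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
# "full" verifies both the certificate chain and the node's hostname/IP
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.key: certs/node01.key
xpack.security.transport.ssl.certificate: certs/node01.crt
xpack.security.transport.ssl.certificate_authorities: [ "certs/ca.crt" ]
```

Every node needs an equivalent block, with certificates signed by the same CA.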

That said, you can add other layers of protection as you see fit for your environment, e.g. firewall rules on each host allowing ingress/egress traffic only from/to specific hosts (your other nodes).
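For example, with UFW on each host it could look something like this (the 203.0.113.x addresses are placeholders from the documentation range; substitute your actual node and Logstash IPs):

```
# Illustrative UFW rules — all IPs are placeholders
sudo ufw default deny incoming
# allow transport traffic (9300) only from the other cluster nodes
sudo ufw allow from 203.0.113.2 to any port 9300 proto tcp
sudo ufw allow from 203.0.113.3 to any port 9300 proto tcp
# allow HTTP traffic (9200) only from the Logstash host
sudo ufw allow from 203.0.113.10 to any port 9200 proto tcp
sudo ufw enable
```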


(Albert Zaharovits) #3

In addition, there is an alternative to a host firewall: see ip-filtering. It might suit your deployment scenario better, e.g. when you have different rules for the transport and HTTP interfaces, or you just want all your config in one place :slight_smile:
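A minimal sketch of what IP filtering could look like in elasticsearch.yml (setting names per the X-Pack docs; the IPs are placeholders):

```yaml
# elasticsearch.yml — illustrative IP filtering rules; IPs are placeholders
# transport interface: allow only the other cluster nodes
xpack.security.transport.filter.allow: ["203.0.113.2", "203.0.113.3"]
xpack.security.transport.filter.deny: _all
# HTTP interface: allow only the Logstash host
xpack.security.http.filter.allow: ["203.0.113.10"]
xpack.security.http.filter.deny: _all
```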


(olivier hodac) #4

OK, it seems that does the job! But there is a note in the ip-filtering documentation. Can you tell me more about that?

Elasticsearch installations are not designed to be publicly accessible over the Internet. IP Filtering and the other security capabilities of X-Pack security do not change this condition.

To sum up: you would suggest that I leave port 9200 and the discovery ports open to the internet and put IP-filtering rules in place to deny all requests except those from my cluster + Logstash?


(Albert Zaharovits) #5

The discovery ports will require client certificate authn because you have set up TLS for inter-node comms.
9200 will require basic authn.
On top of that, add IP-filtering rules as restrictive as possible.


(olivier hodac) #6

OK so:

  • I leave the ports open
  • I put network.host: 0.0.0.0
  • I set some IP-filtering rules
  • I set up the TLS filtering

With this configuration, security will take care of encryption and IP filtering.
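Putting the pieces above together, I imagine my elasticsearch.yml would roughly look like this (simplified; paths and IPs are placeholders):

```yaml
# elasticsearch.yml — consolidated sketch; all paths and IPs are placeholders
network.host: 0.0.0.0
xpack.security.enabled: true
# TLS for inter-node (transport) communication
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/node01.key
xpack.security.transport.ssl.certificate: certs/node01.crt
xpack.security.transport.ssl.certificate_authorities: [ "certs/ca.crt" ]
# IP filtering on top of TLS
xpack.security.transport.filter.allow: ["203.0.113.2", "203.0.113.3"]
xpack.security.transport.filter.deny: _all
xpack.security.http.filter.allow: ["203.0.113.10"]
xpack.security.http.filter.deny: _all
```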

Last question: is there no security on 9200? Only basic auth? Do I have to set up a proxy to add SSL on it, or can X-Pack take care of it?


(Albert Zaharovits) #7

Roughly yes.
Some clarifications:
There is no 'TLS filtering'. You need TLS on both the transport and HTTP interfaces (port 9200); see the previous posts for the spot-on lingo and linked resources :slight_smile:
IP filtering is just a bonus to limit exposure to DoS (and to limit exposure in case a client's private key is stolen).

Last question: is there no security on 9200? Only basic auth? Do I have to set up a proxy to add SSL on it, or can X-Pack take care of it?

Although it didn't come up, TLS for the HTTP interface is mandatory in your case. See the linked resources; X-Pack handles it, no proxy needed.
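For reference, a hedged sketch of enabling TLS on the HTTP interface (setting names per the X-Pack docs; the certificate paths are placeholders):

```yaml
# elasticsearch.yml — illustrative HTTP TLS snippet; paths are placeholders
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/node01.key
xpack.security.http.ssl.certificate: certs/node01.crt
xpack.security.http.ssl.certificate_authorities: [ "certs/ca.crt" ]
```

Clients (e.g. Logstash or curl) then connect over https with basic auth, along the lines of `curl -u elastic --cacert ca.crt https://node01:9200`.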


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.