Enabling X-Pack makes Elasticsearch fail to start

We recently generated certs to get Elasticsearch security users set up, now that the feature is available with the Basic license. Our config:

cluster.name: bnisia
node.name: elastic2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: ["192.168.50.241", "10.229.50.241"]
network.publish_host: elastic2.int.ellipse.net
http.port: 9200

discovery.zen.ping.unicast.hosts: ["elastic1.int.ellipse.net", "elastic2.int.ellipse.net"]
discovery.zen.minimum_master_nodes: 2
node.master: true

xpack.security.enabled: true
xpack.ml.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: certs/node-2.p12
xpack.security.transport.ssl.truststore.path: certs/node-2.p12
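For what it's worth, this is how I sanity-check that a PKCS#12 keystore actually contains the node certificate (shown here with a throwaway self-signed pair so the commands are copy-pasteable; point the last command at the real certs/node-2.p12 instead):

```shell
# Build a throwaway key/cert and bundle them into a .p12, purely to demo the
# inspection command -- substitute your real node-2.p12 in practice.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-node' \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null
openssl pkcs12 -export -in /tmp/demo.crt -inkey /tmp/demo.key \
  -passout pass:changeme -out /tmp/demo.p12
# List the certificate inside the keystore and print its subject:
openssl pkcs12 -in /tmp/demo.p12 -passin pass:changeme -nokeys 2>/dev/null \
  | openssl x509 -noout -subject
```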

Now we can't get Elasticsearch to start, with the following error:

Jul  9 16:56:59 [localhost] systemd: Failed at step EXEC spawning /usr/share/elasticsearch/bin/elasticsearch: Exec format error
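From what I can tell, "Exec format error" is errno ENOEXEC, meaning the kernel refused to execute the file at all, so it's worth checking the launcher script itself before blaming elasticsearch.yml. A throwaway demo of how a script whose shebang line was lost fails exactly this way (the real file to inspect is /usr/share/elasticsearch/bin/elasticsearch):

```shell
# A launcher script missing its '#!' shebang fails a raw execve() -- which is
# how systemd starts services -- with ENOEXEC ("Exec format error").
printf 'echo hello\n' > /tmp/launcher.sh        # NOTE: no shebang line
chmod +x /tmp/launcher.sh
python3 - <<'EOF'
import errno, os
try:
    os.execv("/tmp/launcher.sh", ["/tmp/launcher.sh"])   # raw execve, like systemd
except OSError as e:
    print(os.strerror(e.errno))                          # -> Exec format error
EOF
printf '#!/bin/sh\necho hello\n' > /tmp/launcher.sh      # restore the shebang
/tmp/launcher.sh                                         # -> hello
```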

Can anyone point me in the right direction to troubleshoot? We have 2 nodes in the cluster with the same issue and the same config, but different IPs and node names.

I was able to get it to at least start by using our organization's own certificate, but now I am getting "SSL received a record that exceeded the maximum length." I think I have a configuration issue, but I am not sure where. Here is the relevant configuration we have at the moment:

xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/binary.net.key
xpack.security.transport.ssl.certificate: certs/binary.net.crt

Am I missing any settings for using your own certificate?

Got everything working!

Please share the solution in the thread, it might help someone in future :slight_smile:

The issue was a combination of incorrect SSL certs, and also the way that I was trying to curl elastic on 9200.

Some things to keep in mind: understand how the SSL certs fit together, and then once it should be working, make sure to query Elasticsearch with https://fqdn:9200/ rather than http, localhost, or the IP address, since the hostname has to match the certificate. It is also worth noting the difference between the xpack.security.transport and xpack.security.http settings, which is pretty clear in the documentation if you read thoroughly instead of skimming.
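To make the record-length error concrete: it is the classic symptom of a protocol mismatch, i.e. one side speaking TLS while the other speaks plain HTTP. A throwaway reproduction (port 8099 is arbitrary), plus the shape of the query that finally worked for us:

```shell
# Start a plain-HTTP server, then try to talk TLS to it -- the handshake fails
# for the same reason the "record exceeded the maximum length" error appears:
# the client parses a plain-HTTP response as if it were a TLS record.
python3 -m http.server 8099 >/dev/null 2>&1 &
srv=$!
sleep 1
curl -s https://localhost:8099/ || echo "TLS handshake failed (protocol mismatch)"
kill "$srv"
# With xpack.security.http.ssl enabled, query over https using a hostname the
# certificate covers, e.g.:
#   curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic \
#     'https://elastic1int.domain.net:9200/_cluster/health?pretty'
```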

Still not sure how to get it working with your own CA, although I suspect I had it working but was querying Elasticsearch incorrectly =0

For us it is easier / better to just use our normal certs anyway.

Working:

cluster.name: bnisia
node.name: elastic1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: ["192.168.50.240", "10.229.50.240"]
network.publish_host: elastic1int.domain.net
http.port: 9200

discovery.zen.ping.unicast.hosts: ["elastic1int.domain.net", "elastic2int.domain.net"]
discovery.zen.minimum_master_nodes: 2
node.master: true

xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/domain.net.key
xpack.security.transport.ssl.certificate: certs/domain.net.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/domain.net.key
xpack.security.http.ssl.certificate: certs/domain.net.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
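One more check that saved me time while sorting out the certs: make sure the key and certificate named in the config are actually a pair, since a mismatch can also stop Elasticsearch from starting. Shown with a throwaway pair; run the modulus commands against certs/domain.net.key and certs/domain.net.crt:

```shell
# Generate a throwaway RSA key/cert pair, then confirm they match by
# comparing their moduli -- for a real deployment, compare the key and
# certificate files referenced in elasticsearch.yml the same way.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
  -keyout /tmp/pair.key -out /tmp/pair.crt -days 1 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in /tmp/pair.crt)
key_mod=$(openssl rsa -noout -modulus -in /tmp/pair.key)
[ "$cert_mod" = "$key_mod" ] && echo "key and certificate match"
```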

Also worth noting that it is better practice to use 3 nodes instead of 2: with discovery.zen.minimum_master_nodes: 2, losing either of our 2 nodes leaves the cluster without a master quorum. All I could afford at the time of building the cluster was 2 servers, each with 2x Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz and 64GB of RAM, and so far they handle a lot of data even with our netflow feed added. Still, once I get my way we will add one more of these to the cluster, since the netflow data puts both boxes at around 60% CPU utilization on average.
