We have a public CA wildcard cert that I want to use with Elasticsearch. I have the cert and Elasticsearch configured, but when I run Elasticsearch it fails because the Elasticsearch server's IP address is not in the cert as a SAN. Is it possible to have it check only the hostname and not the IP as well?
What fails? Please include the complete error, where you are seeing it, and what you did to trigger it.
Do you have more than one node in your ES cluster?
Are you talking about inter node communication?
Are you talking about Kibana connecting to ES?
Are you talking about your browser connecting to Kibana?
Are you talking about your browser connecting to the elasticsearch endpoint on HTTP port 9200 on a node of your ES cluster?
Are you talking about a transport client connecting to a node of your ES cluster on the transport port 9300?
Without more info we could start to assume a bunch of things, but your question is currently a bit confusing: you say you want IT to verify via hostname and not IP, but we don't know what IT actually is. Also, if you're talking about inter-node communication, usually if the cert doesn't have the IPs what you do is set:
xpack.security.transport.ssl.verification_mode: certificate
But that doesn't make it verify hostnames instead; it makes verification check only that the certificate is signed by a valid authority (which means one present in the trust store), skipping hostname and IP checks entirely.
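In context, that setting sits alongside the other transport SSL options in elasticsearch.yml. A minimal sketch (the cert paths here are placeholders, not from the original post):

```yaml
# elasticsearch.yml -- inter-node (transport) TLS, sketch only
xpack.security.transport.ssl.enabled: true
# "certificate" checks that the cert chains to a trusted CA,
# but skips hostname/IP (SAN) verification; "full" checks both.
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.certificate: "certs/node.pem"
xpack.security.transport.ssl.key: "certs/node-key.pem"
```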
Say more, it’ll be easier to point you to the relevant doc with context and help.
Ya that was pretty vague....
I configured Elasticsearch with security and SSL enabled (elasticsearch.yml shown at the bottom of the post). Elasticsearch starts up and runs fine; however, when I run elasticsearch-setup-passwords interactive, I get the error below. I've scrubbed sensitive data from the error and elasticsearch.yml. The certificate in use is a publicly signed cert from DigiCert, so I assumed I didn't need to set
D:\7.2\bin>elasticsearch-setup-passwords interactive
SSL connection to https://IPAddress:9200/_security/_authenticate?pretty failed: No subject alternative names matching IP address IPAddress found
Please check the elasticsearch SSL settings under xpack.security.http.ssl.
ERROR: Failed to establish SSL connection to elasticsearch at https://IPAddress:9200/_security/_authenticate?pretty.
bootstrap.memory_lock: true
cluster.name: ElasticStack
cluster.initial_master_nodes: [ "fqdn" ]
discovery.seed_hosts: [ "fqdn" ]
network.host: fqdn
http.port: 9200
transport.tcp.port: 9300
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: fqdn
path.data: D:\Data\Elasticsearch
path.logs: D:\Logs\Elasticsearch
xpack.license.self_generated.type: basic
xpack.monitoring.collection.enabled: true
xpack.security.enabled: true
xpack.security.authc.accept_default_password: false
xpack.security.authc.realms.native.native1.authentication.enabled: true
xpack.security.authc.realms.native.native1.cache.hash_algo: "pbkdf2_1000"
xpack.security.authc.password_hashing.algorithm: "pbkdf2_1000"
xpack.security.authc.api_key.hashing.algorithm: "pbkdf2_1000"
xpack.security.authc.api_key.cache.hash_algo: "pbkdf2_1000"
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.supported_protocols: [ "TLSv1.2" ]
xpack.security.http.ssl.cipher_suites: [ "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ]
xpack.security.http.ssl.certificate: "d:/7.2/config/certs/elastic.pem"
xpack.security.http.ssl.key: "d:/7.2/config/certs/elastickey.pem"
#xpack.security.http.ssl.certificate_authorities: "d:/7.2/config/certs/inca.crt"
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.client_authentication: "optional"
xpack.security.transport.ssl.supported_protocols: [ "TLSv1.2" ]
xpack.security.transport.ssl.cipher_suites: [ "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ]
xpack.security.transport.ssl.certificate: "d:/7.2/config/certs/elastic.pem"
xpack.security.transport.ssl.key: "d:/7.2/config/certs/elastickey.pem"
#xpack.security.transport.ssl.certificate_authorities: "d:/7.2/config/certs/inca.crt"
script.painless.regex.enabled: true
I'm not sure why this isn't working for you - when I have more time I'll see if I can dig into why we're not trying to connect using the FQDN in
You have two options to work around this:
- Pass the --url option to elasticsearch-setup-passwords to specify your own URL:
elasticsearch-setup-passwords interactive --url https://fqdn:9200/
- Update your elasticsearch.yml to set http.publish_host to your fqdn. setup-passwords should pick that up, even though it seems not to be reading
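For the second option, the setting would look something like this in elasticsearch.yml (fqdn is a placeholder for your actual hostname, as elsewhere in this thread):

```yaml
# elasticsearch.yml -- advertise the FQDN rather than an IP address,
# so CLI tools build https://fqdn:9200/ instead of https://IPAddress:9200/
http.publish_host: fqdn
```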
Out of interest, is there a reason why you are setting this as your hashing algorithm? 1000 is quite a low cost factor for PBKDF2, yet it is still slower than some of the other options we offer.
- If you're optimizing for performance, PBKDF isn't a good choice.
- If you're optimizing for security, 1000 isn't enough iterations.
I tried option 2 first, but it still presented the same error. After that I tried option 1, and then we were good to go.
I am trying to get as close to FIPS as possible on a free license while also limiting the performance impact. I assumed that PBKDF2 got me to FIPS compliance with regard to the hashing algorithm while being the least resource-intensive. Is there a better algo I should be using to meet what I'm looking for?
FIPS and password hashing is a vague area. I haven't kept up to date with the latest FIPS docs, but as of a year or so ago, FIPS didn't provide any requirements around password hashing.
So, technically, there is no "FIPS approved password hashing algorithm", but nor is there a FIPS-rejected algorithm.
PBKDF2 is a FIPS approved key derivation function (that is, it takes a passphrase and turns it into something that can be used as an encryption key) which is the closest FIPS gets to offering a password hashing algorithm.
But the SHA variations are also approved hash algorithms. So, if all you care about is using only algorithms that FIPS permits (and you don't care whether you're using the best available algorithm for the intended purpose), storing passwords using a SHA-based hash would do that.
Storing passwords on disk (api_key.hashing.algorithm) using PBKDF2 with 1,000 rounds does not achieve the security purpose of making brute-force attacks on those passwords difficult: each attempt takes less than 5ms on any sort of modern hardware. You want to aim for something more like 100ms (and up to 1s, depending on how much CPU time you are willing to spend on it).
So it depends on your purposes here.
If your only requirement is to restrict yourself to algorithms that would be available on a FIPS-mode JVM, with the lowest possible performance impact, and you don't care about the actual security implications, just use the SSHA256 (salted SHA-256) hasher. It uses FIPS algorithms and is super fast (<1ms), but it's not secure for hashes that are going to be on permanent storage (disk).
If you want to use FIPS-compatible algorithms and be secure, then you really want to use pbkdf2_50000 when writing password hashes to disk. That cost factor is far more reasonable than the 1000 you have now.
A pbkdf2_1000 hash is fine (*) for use as the cache.hash_algo, because those password hashes don't get written to disk, so the performance/security tradeoff is different. We consult the cache on every request that comes in, so you don't want to use a >100ms algorithm there, but because it is difficult for an attacker to get a copy of the cached values from memory, the risk is acceptable.
*) Fine in our estimation, for general workloads. Your needs may differ. I don't know what data you are storing or whose passwords you are caching.
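Putting that advice together, the hashing-related settings from the elasticsearch.yml above might be adjusted along these lines (a sketch of the recommendations in this thread, not a drop-in config):

```yaml
# On-disk hashes: a higher PBKDF2 cost factor resists offline brute force
# (aim for ~100ms or more per attempt)
xpack.security.authc.password_hashing.algorithm: "pbkdf2_50000"
xpack.security.authc.api_key.hashing.algorithm: "pbkdf2_50000"
# In-memory caches: consulted on every request, so a cheaper cost factor
# keeps latency down; cached values never hit disk, lowering the risk
xpack.security.authc.realms.native.native1.cache.hash_algo: "pbkdf2_1000"
xpack.security.authc.api_key.cache.hash_algo: "pbkdf2_1000"
```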