I have set up SSO with Okta to be able to enforce MFA.
Using Okta as my IdP did, however, not disable the built-in users such as 'elastic'.
I want to disable the elastic user to enforce the principle of least privilege
and get rid of the risk of someone brute-forcing the account and reading all the
sensitive information from my Elasticsearch instance.
But when I follow these steps I get the following error message:
{
  "error": {
    "root_cause": [
      {
        "type": "validation_exception",
        "reason": "Validation Failed: 1: only existing users can be disabled;"
      }
    ],
    "type": "validation_exception",
    "reason": "Validation Failed: 1: only existing users can be disabled;"
  },
  "status": 400
}
I would love to get some more information on how to deal with this properly, since I am kind of lost here at the moment.
Unfortunately it is not possible to disable built-in users (and note that the elastic user would be the only way of managing ECE in the case of some catastrophic Okta failure).
I'd recommend using the CLI to set the password to a sufficiently long string that brute-forcing isn't feasible, and storing that in a vault (e.g. along with the emergency recovery tokens that are generated at startup).
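As a hedged sketch of that advice (the cluster endpoint, the current credentials, and the exact API path are placeholders that depend on your setup and stack version), generating and applying a long random password could look like:

```shell
# Generate a 64-character random password; a value this long makes
# brute-forcing the login API infeasible in practice.
ELASTIC_PW="$(openssl rand -base64 48 | tr -d '\n')"

# Store it in your vault, then apply it, e.g. via the security API:
# curl -u elastic:OLD_PASSWORD -X POST \
#   "https://CLUSTER_ENDPOINT:9243/_security/user/elastic/_password" \
#   -H 'Content-Type: application/json' \
#   -d "{\"password\":\"$ELASTIC_PW\"}"
echo "Generated a ${#ELASTIC_PW}-character password"
```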
If you have a support rep, get them to open an ER (enhancement request) on your behalf asking for the ability to disable built-in users, and we'll give it some more discussion internally.
Sad to hear that I am not able to disable this user. In case of an Okta emergency I could use
the API console to enable this account again then, right? Or even switch back
temporarily to the native authentication realm in the "user settings overrides".
If I could just put on some extra protection with IP restrictions, that would also be really helpful to at least minimize the attack surface some more, but then I find this:
From what I am feeling right now, I can't really protect the data going into Elasticsearch as much as I want to. A quick win for me would thus be, at a very minimum, to be able to disable the built-in user accounts and use accounts with lesser privileges that write only to a limited set of indices, to spread the risk.
I just realized during a similar conversation with somebody else that I might have misunderstood your question ... did you mean that you wanted to disable the elastic user in each ES cluster?
(I was answering the question of disabling the built-in user into the ECE admin UI - apologies)
OK, so the answer is a bit more complicated, unfortunately:
Yes, you can disable the elastic user, BUT
Each Elasticsearch cluster also contains internal users that ECE uses to do admin tasks on it (and it's trivial to reverse-engineer their usernames), so disabling elastic specifically gives you no benefit at all.
Longer term we do have the desire for our built-in users to use certificates for cluster management, which would allow you to disable the file realm (which is where all these users + elastic live) - no ETA for that yet.
A different option that I hadn't thought about until now (so I'm shooting from the hip here) looks something like:
Put an nginx instance between the proxy and the user
Make it reject any request with basic auth
Make it block the ES and Kibana login request URLs
I think that if you're using SAML you'll always see session authentication using Bearer auth and cookies, and it uses the Kibana SAML endpoint instead of the user/pass login endpoint ... so that should do what you want?
(This won't affect internal traffic which never exits the proxy)
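A minimal sketch of those three steps as an nginx config (untested, shooting from the hip as above; the `ece-proxy` upstream name and the exact login paths are assumptions you would need to verify against your deployment):

```nginx
server {
    listen 443 ssl;

    # Reject any request that carries basic auth credentials
    if ($http_authorization ~* "^Basic ") {
        return 403;
    }

    # Block the native user/pass login endpoints for ES and Kibana;
    # SAML sessions use Bearer auth/cookies and the SAML endpoint instead.
    location ~ ^/(_security/oauth2/token|internal/security/login) {
        return 403;
    }

    # Everything else passes through to the ECE proxy layer
    location / {
        proxy_pass https://ece-proxy;
        proxy_set_header Host $host;
    }
}
```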
Thanks for the suggestions, I really appreciate what you are trying to do for me!
However, when I look at how to solve my problems for securing the Elasticsearch instance,
I still see a lot of hoops to jump through, and I am not really sure at this point if I will trust it with
our company's sensitive data as it stands now. I am using the enterprise environment to be unburdened from having to actively maintain these instances, but when I need to supply my own nginx instances or load balancers, this completely misses the point imho.
The only counterpoint is that an external LB is required for ECE anyway, to load balance across the set of proxies, so making that LB an nginx set and then adding the additional logic to block the undesired login types is less of an extra burden than it might seem.
Otherwise, without additional configuration, the most locked-down you can get in the near term is probably (I'm making some assumptions here about your usage scenarios!):
Use IP filtering to lock down access to ES to just automated clients
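For the self-managed/ECE case, that IP filtering can be expressed with the xpack.security filter settings via the cluster settings API. A sketch (endpoint, credentials, and CIDR range are placeholders; and as noted later in the thread, ESS handles IP filtering differently, through its own API):

```shell
# Allow HTTP(S) access only from the automated clients' network and deny
# everything else; the CIDR below is a placeholder.
FILTER_BODY='{
  "persistent": {
    "xpack.security.http.filter.enabled": true,
    "xpack.security.http.filter.allow": ["10.0.0.0/24"],
    "xpack.security.http.filter.deny": "_all"
  }
}'

# Apply it with the cluster settings API, e.g.:
# curl -u elastic:PASSWORD -X PUT "https://CLUSTER_ENDPOINT:9243/_cluster/settings" \
#   -H 'Content-Type: application/json' -d "$FILTER_BODY"
echo "$FILTER_BODY"
```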
I am getting the impression that I might be using the term ECE incorrectly.
By ECE I mean that I have an account on "https://cloud.elastic.co/" where all my instances
are being deployed and managed; would I then still need an external load balancer?
This I did, but for some reason Elasticsearch does not honor that setting.
I added the following to my settings here:
Ah - sorry, yes, terminology confusion: ECE is the platform that we use to power cloud.elastic.co, but it is also separately available (for folks who want to run in their own cloud account, or on prem, etc.). And all the mention of LBs etc. was specific to ECE.
There's a lot of overlap in their capabilities, but also slight differences, and IP filtering is one of those.
For ESS it is API-only; people who request its use get looped into the beta of our "public API" programme ... its full release (with UI support, like we have for ECE) is coming soonish, but you can request more information about getting API access at https://cloud.elastic.co/help (disclaimer: I don't know if there are any qualifying requirements for the early access).
This i did, but for some reason Elasticsearch does not honor that setting
I assume you meant that this did force people to access Kibana via SAML, but they could still hit the Elasticsearch cluster via basic auth?
If so, that is expected (the Kibana settings only restrict Kibana access) - as discussed, using this setting in conjunction with IP filtering to restrict any ES access is a common approach.
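For reference, the Kibana user settings override being discussed is roughly of this shape (a sketch only; these key names vary across stack versions, so treat them as assumptions to check against the docs for your version):

```yaml
# Force SAML as the only Kibana auth provider (key names are version-dependent)
xpack.security.authProviders: [saml]
server.xsrf.whitelist: [/api/security/v1/saml]
```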
Thanks for all the good help, I have a better understanding now of how everything works!
One last question: can I set up an API gateway in front of Elasticsearch, and if so, how would this
work for the managed Elastic Cloud? Most documentation I find is for the on-prem solution.
There should be no issue in using an API gateway in a standard configuration, and it should be similar to how it works for on-prem, i.e. you give it an endpoint (the cluster_id.region.aws.found.io URL) and the basic auth credentials.
What sort of element of the on-prem guide do you think might not translate to cloud?
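Concretely, the upstream call the gateway would make is just an ordinary authenticated request against the cluster endpoint; a sketch with placeholder values:

```shell
# The managed cloud endpoint follows the cluster_id.region.aws.found.io
# pattern mentioned above; CLUSTER_ID, region, user, and password are
# placeholders.
ES_ENDPOINT="https://CLUSTER_ID.eu-west-1.aws.found.io:9243"

# A gateway route would forward requests like:
# curl -u gateway_user:PASSWORD "$ES_ENDPOINT/my-index/_search"
echo "$ES_ENDPOINT"
```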
Well, mostly, when I look at the documentation on how to achieve certain things through
the API or UI, I first have to check whether the mentioned option is available at all.
{
  "error": "no handler found for uri [/api/v1/deployments/ip-filtering/rulesets] and method [GET]"
}
The documentation that is online feels really hard to navigate through to find what I want to achieve, because it can also be contradictory from time to time. Is there any easy way to translate the on-prem docs to managed cloud that I am missing?
Apologies for the confusion - that's because we're currently only giving out the ESS-specific
("managed cloud") API docs to people in the "public API" beta programme.
Does wiring ES up to the API gateway require use of the "ESS API" (aside from the IP filtering requirements we've discussed)?
This I really do not know; it seems most likely that it does.
At this point I have become totally lost; I am trying one last thing before I look into other solutions. I tried to do the IP filtering by following this doc:
Even though I get a 200 OK response from the server when setting up a filter (I also tried restarting the cluster), I am having no luck successfully enforcing the whitelist. This is really the last thing that stands between me and getting this into production, so I really hope you can help me with this one.
Currently this is only possible by requesting to be part of the "public API" beta programme (at which point we share the documentation that is not online). That conversation can be kicked off over at cloud.elastic.co/help
Once IP filtering is publicly available (soon but no ETA) then it will be configurable via UI (and publicly available documentation).