I have a very strange issue with a newly set up 3-node 5.1.1 cluster (with X-Pack basic installed). Normally I access the ES HTTP port through an Apache reverse proxy. This works fine for most actions; I can use Sense or HQ with no problem.
But when I try to DELETE an index, Elasticsearch responds with a 403 Forbidden. I can't find any explanation, not in the response nor in the logs. I can execute the same request with cURL against the localhost Elastic HTTP port, and there the request succeeds.
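For reference, the direct localhost request that succeeds looks something like this (the index name `test-index` is a placeholder, not from the original post):

```bash
# Hypothetical example: deleting an index directly against the local HTTP port,
# bypassing the reverse proxy. "test-index" is a placeholder name.
curl -X DELETE 'http://localhost:9200/test-index'
# A successful delete returns: {"acknowledged":true}
```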
I made sure that it is not the reverse proxy that is blocking the request: I can see the request arriving at Elasticsearch when tracing the localhost traffic behind the proxy.
I'm out of options to try, and I totally fail to understand what is going on here. Does anybody have an explanation for this?
Does the user you are trying to delete with have a role assigned that allows them to delete indices?
As I only have a basic license of X-Pack, the security features are disabled, so I guess user roles are not an issue.
Or did I misunderstand some concept with security here? Also, when issuing the DELETE via localhost directly to Elasticsearch with cURL (without any authentication), the DELETE works and is acknowledged.
Hmm… if X-Pack security is disabled, there should not be any 403 Forbiddens sent back from Elasticsearch. Does it reproduce every time? This sounds like something your proxy is doing (though I know you said you tested it without the proxy).
Yeah, I also suspected my proxy (or CORS) for a long time, but here is what I have checked several times:
- no apparent restrictions (like a `<Limit>` directive on DELETE) in the proxy config
- I can see the HTTP request when watching the loopback interface with Wireshark
- no obviously wrong headers in the request
- the response from Elasticsearch is just a plain 403 Forbidden
I just checked again and yes, I can reproduce it. This is my new test installation with 5.1.1, so maybe I have messed something up. I will try another setup and see if I can reproduce it there.
At least this surprises you too. So it's not expected behaviour.
Found it. I set up a second cluster identical to the first and got the same behaviour.
Turns out it was actually a CORS issue. I had `http.cors.enabled: true` in my config, but not `http.cors.allow-origin`. And it seems that the default for that setting has changed from `*` to "no origins allowed". Adding an explicit `http.cors.allow-origin` value fixed it. I can do that because I have all the security features I want and need implemented in the reverse proxy.
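For anyone hitting the same thing, the relevant `elasticsearch.yml` settings would look something like this (a sketch, not the poster's exact config; the wildcard origin is only reasonable when access control is enforced elsewhere, e.g. in the reverse proxy):

```yaml
# Hypothetical elasticsearch.yml fragment for 5.x.
http.cors.enabled: true
# In 5.x there is no default allowed origin, so this must be set explicitly.
# "*" allows all origins; only use it behind a locked-down proxy.
http.cors.allow-origin: "*"
```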
Thanks for your help! Always good to have somebody make me check and recheck again.
Great! Glad you were able to figure it out!