So yeah, I click Create index pattern, it just goes to the spinner, and then comes back here.
Like I said, the logs are clean. I checked the Kibana logs and all I can see is 200 return codes, and Elasticsearch has nothing in its logs.
Did you run `filebeat setup`? That should have created the index pattern for you. You should run setup before starting Filebeat for the first time.
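For reference, a minimal sketch of running setup (the `-e` flag just logs to stderr; adjust the config path to your install):

```shell
# Loads the index template, ILM policy, dashboards, and the
# filebeat-* index pattern into Kibana before first ingest.
filebeat setup -e
```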
Note this doesn't answer why you can't create it manually, though; I'm curious about that.
Oh, and one thing to make it even weirder: I did create one for logstash without issue last week.
It was the first one, and I think the UI provided a helper because none existed at that point.
I checked the browser's inspection console... this looks relevant:
Edit: I've reached my reply max for the day...
Everything else works.
Currently auth is oauth2-proxy + nginx + Keycloak, using a dedicated role for the proxy, with the es-security-runas-user header set by nginx.
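For context, a rough sketch of what that nginx wiring typically looks like (directive names are standard nginx; the exact header names and upstream come from a typical oauth2-proxy setup, so treat the details as assumptions, not my literal config):

```nginx
# Authenticate each request via oauth2-proxy's auth endpoint,
# then forward the authenticated username to Elasticsearch as
# the run-as identity.
auth_request /oauth2/auth;
auth_request_set $auth_user $upstream_http_x_auth_request_user;
proxy_set_header es-security-runas-user $auth_user;
```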
My user is set up as superuser.
Literally everything else is working. But do you think this could be a corner case?
EDIT2: Used kubectl to set up a proxy to bypass the ingress. Logged in with the native Kibana login. Got this. (And yes, I checked that I had the config you asked for added.)
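The bypass can be done with a port-forward along these lines (service and namespace names here are hypothetical; substitute your own):

```shell
# Forward local port 5601 straight to the Kibana service,
# skipping nginx/oauth2-proxy entirely.
kubectl port-forward svc/kibana 5601:5601 -n elastic
# Then browse to http://localhost:5601 and use a native login.
```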
So it seems this part doesn't work correctly when behind the nginx proxy, possibly because the es-security-runas-user header isn't getting set on the request, or because the payloads are too big.
I wonder if the ES payload is too big. I'm going to go look at my nginx logs.
But yeah I can't do any more posts for the day because this account is too new. So edits it is!
EDIT: Got it.
2021/02/19 05:25:17 [error] 13768#13768: *20811191 client intended to send too large body: 1050693 bytes, client: 10.42.29.0, server: elastic, request: "PUT /.kibana/_create/index-pattern%3Ad9b33250-7272-11eb-b224-9bfa5cf1c02a?refresh=wait_for HTTP/1.1", host: "elastic"
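That error is nginx's request-body cap: the default `client_max_body_size` is 1m, and the rejected request was ~1,050,693 bytes, just over it. Raising it looks like this (the 8m value is just an illustrative choice):

```nginx
# Allow request bodies larger than the 1m default, which the
# ~1MB index-pattern PUT above exceeded.
client_max_body_size 8m;
```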
Edit: I just dropped the indices, as they were already 3GB+, and attempted to create the index pattern while the new index was still only 5MB. It worked.
Found the issue. It seems error messages and the es-security-runas-user header aren't friends, so that's possibly a bug.
But yeah, basically the size of the index was too large. Even increasing the allowed request sizes to insane values didn't make it work, so I flushed the indices and created the index pattern against a small index instead. That worked fine.
Not sure if that's a bug, though. The index was 5GB, so possibly.
Good that you got it solved; thanks for reporting back.
5GB for an index is pretty small. An index pattern is really just metadata about the index (field types, etc.), so it should only be in the MB range; not sure what happened.