Login expiration issue (Kibana 5.2)

Hello, I have an ELK cluster on Azure (3 Windows Server nodes), and I recently upgraded the whole stack to version 5.2.
But now I am facing a login issue when more than one node is up: I see a "session expired" error.
When I turn off the other nodes, login works. The issue is reproducible on every node.
Please advise how this is possible and how to fix it. I am sure this issue is not related to a timeout.
Sometimes the error changes from ERR_CONNECTION_TIMED_OUT to ERR_TOO_MANY_REDIRECTS.
http://rgho.st/8dG9psrWZ

What version did you upgrade from?

Are you logging in with a user/pwd you created in the Kibana Management section? Or a default elastic user?

Hello. I upgraded from Kibana 4.5, Logstash 2.4, and Elasticsearch 2.4. To be more precise, I did a fresh install with new config files in order to use X-Pack for authentication, which was not possible with the old version of Kibana even with the IIS Application Request Routing and URL Rewrite modules. I am logging in with the default elastic user, or with the default kibana user (is that the root cause?). The issue is reproducible for both users.

I think we need to determine which realm of users you are using. Prior to 5.0, I think the default was to use the file realm. With the file realm, users and roles have to be managed on each node.
https://www.elastic.co/guide/en/x-pack/current/how-security-works.html

The 5.x versions of the stack default to native users (described in the above link).

So if you're having login issues, we should first make sure we know which type of users you are using. If it's the file realm and your nodes are not all configured with the same users and roles, that would cause a problem.
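For reference, this is roughly the shape an explicit realm configuration takes in elasticsearch.yml with X-Pack 5.x (the realm names below are just examples; with no realms configured, the native and file realms are enabled by default):

    # elasticsearch.yml -- example realm configuration (realm names are arbitrary)
    xpack.security.authc.realms:
      native1:
        type: native
        order: 0
      file1:
        type: file
        order: 1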

Another thing to check is that your nodes all think they are part of the same cluster. Can you access your Elasticsearch cluster and use the Kibana Dev Tools console to run GET _cat/nodes? Or use curl if you can't log in to Kibana while all the nodes are up.
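If you go the curl route, something along these lines should work, assuming Elasticsearch is listening on port 9200 and you can authenticate as the elastic user (curl will prompt for the password):

    curl -u elastic 'http://localhost:9200/_cat/nodes?v'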

Hello,
Here is the output for your request.

10.0.1.6 9 98 14 mdi - node3
10.0.1.5 13 94 16 mdi * node2
10.0.1.4 10 60 19 mdi - node1

It looks like your nodes are all part of the same cluster, so that's good.

I've seen a case where a user can't log in to Kibana because of some cached data in the browser. One way to see whether that's causing your problem is to try logging in to Kibana in a browser where you've cleared all cached data. Another way is to open a new private window.
In Chrome it's New Incognito Window.

In Firefox it's File > New Private Window

In Internet Explorer 11 it's Tools > InPrivate Browsing

I may have just reproduced your problem. Here's what I did.

I had a 5.2.1 Elasticsearch node running (which I forgot about).

I installed an unreleased 5.3.0 build of Elasticsearch and Kibana and started them up.

I opened Kibana and it went to the status page and everything was red. The errors told me that I had Elasticsearch 5.2.1, which wasn't compatible with my Kibana 5.3.0.

I stopped the 5.2.1 Elasticsearch node.

I tried to log in to Kibana but it failed with "Too many redirects".

I stopped and restarted my one Elasticsearch 5.3.0 node.

Now Kibana works.

So if you ever had both your old and new versions of Elasticsearch running at the same time, it might help to stop them all and then start them back up.

You should be able to perform a rolling upgrade between minor version changes, but there may be some steps to do it correctly that I didn't do (since my situation was unintentional).
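For completeness, the documented flow for a rolling upgrade is roughly: disable shard allocation, upgrade and restart one node, re-enable allocation, wait for the cluster to recover, then repeat for the next node. As curl calls that looks something like this (a sketch only; check the upgrade guide for your exact versions):

    # before stopping a node, disable shard allocation
    curl -u elastic -XPUT 'http://localhost:9200/_cluster/settings' -d '
    { "transient": { "cluster.routing.allocation.enable": "none" } }'

    # upgrade and restart the node, then re-enable allocation
    curl -u elastic -XPUT 'http://localhost:9200/_cluster/settings' -d '
    { "transient": { "cluster.routing.allocation.enable": "all" } }'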

Hello,
I had previous versions, but they are now stopped and deleted (except Logstash, which is disabled in Services), and the VMs were restarted.
So what should I do? I tried an incognito window; that does not work either. So it looks like only a new clean install will help.

Make sure you set xpack.security.encryptionKey and use the same value across all Kibana instances.
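Something like this in each kibana.yml; the key below is just a placeholder, so pick your own random string and keep it identical on every instance:

    # kibana.yml -- same value on all three Kibana instances
    xpack.security.encryptionKey: "some-long-random-string-shared-by-all-instances"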

Thanks, but too late.
I have already deleted all existing VMs in Azure and am working on creating a new ELK stack on new VMs.

Still facing this issue even after a clean install on new VMs:
"Too many redirects" and "The page isn't redirecting properly".
I have added xpack.security.encryptionKey to all 3 Kibana (5.2.1) config files and restarted the nodes, but the issue persists.

I believe the "Too many redirects" issue you are hitting is a current bug that occurs when Elasticsearch cannot be reached while Kibana is starting up. Double-check that elasticsearch.url is correct and that the right permissions are in place.
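In each kibana.yml that usually means checking settings along these lines (the address and credentials below are placeholders):

    # kibana.yml -- point this at a reachable Elasticsearch node
    elasticsearch.url: "http://10.0.1.5:9200"
    elasticsearch.username: "kibana"
    elasticsearch.password: "<password of the built-in kibana user>"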

There was a parameter enabled in elasticsearch.yml, discovery.zen.ping.unicast.hosts: [" ", " ", "" ]. I have disabled it, and it looks like this issue is gone.

Great to hear!

But now I cannot see all the nodes in Monitoring, only two of the three. Is it related to node discovery and the disabled parameter?

It looks like after disabling this parameter each node can see only itself, not the others. Please advise how to fix it.

Did you see this doc?

It discusses zen discovery. If you still have problems, you should probably ask on the Elasticsearch channel.
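For a three-node cluster the unicast settings are normally filled in with the real node addresses rather than removed entirely; a sketch, using the IPs from the _cat/nodes output above as example values:

    # elasticsearch.yml on every node (example values)
    discovery.zen.ping.unicast.hosts: ["10.0.1.4", "10.0.1.5", "10.0.1.6"]
    discovery.zen.minimum_master_nodes: 2   # (3 master-eligible nodes / 2) + 1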

Regards,
Lee

Hello,
Yes, I saw this doc, but it describes a configuration similar to the one I already had in my yml file. As far as I can see, unicast discovery does not work with your X-Pack, and it was the root cause of these redirects until the moment it was turned off. It works only for Kibana without X-Pack. What about "host" discovery? I have not checked it. Does it work?

Hello,
I have somehow managed (God only knows how) to make it work (only 2 nodes of 3) after installing the Azure Cloud plugin and adjusting the config files.
But it is still not clear why Elasticsearch and Kibana see only 2 of the 3 nodes. All of them have the same configuration.
Is there any limit on the number of nodes in ELK? Honestly, I do not see one in the Kibana and Elasticsearch config files.
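For context, in 5.x the old cloud-azure plugin was split up and Azure discovery comes from the discovery-azure-classic plugin; its elasticsearch.yml configuration is roughly shaped like this (all values are placeholders, see the plugin docs for the details):

    # elasticsearch.yml -- Azure classic discovery (placeholder values)
    cloud:
      azure:
        management:
          subscription.id: "<subscription id>"
          cloud.service.name: "<cloud service name>"
          keystore:
            path: "/path/to/azurekeystore.pkcs12"
            password: "<keystore password>"
    discovery:
      type: azure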

Found an error explaining why the 3rd node cannot connect:
[o.e.x.m.e.Exporters ] [node3] skipping exporter [default_local] as it is not ready yet
failed to send join request to master [{node2} .... {SOMEIP}{SOMEIP:9300}], reason [IllegalStateException[Future got interrupted]; nested: InterruptedException]