Running multiple Kibana instances vs load balancing

We want to load balance multiple Kibana v8.6 instances and are wondering whether login sessions can be treated as stateless, or whether we need session stickiness, and if sticky, keyed on what, a cookie...?

Also, what to do about health checking? Any good URLs to hit?

As always, all hints appreciated. TIA!

This doc covers the concept of Load Balancing Kibana in more detail, and should answer your questions about sessions & cookies.

It doesn't mention a health check, but in theory you should be able to do a GET against the root Kibana URL and get a 2xx when healthy (I believe it returns a 503 while it's starting up).

Thanks! One question about the uniqueness that doc mentions:

These settings must be unique across each Kibana instance:

server.uuid // if not provided, this is autogenerated
server.name
path.data
pid.file
server.port


When using a file appender, the target file must also be unique:

logging:
  appenders:
    default:
      type: file
      fileName: /unique/path/per/instance

Isn't that only an issue when running multiple instances on the same server?

Hitting the root URL for health probing doesn't seem viable, as it redirects to the login page:

Maybe a simple TCP connect probe will work just fine, since boot time isn't a serious concern for Kibana (see the sketch after the curl output below)...

$ curl -ki https://node0247:5601
HTTP/1.1 302 Found
location: /login?next=%2F
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
content-security-policy: script-src 'self' 'unsafe-eval'; worker-src blob: 'self'; style-src 'unsafe-inline' 'self'
kbn-name: <redacted>
kbn-license-sig: cfb29c85f3350e2b9c53087013ead321c865e45ed82e31b237d4c66dd7419919
cache-control: private, no-cache, no-store, must-revalidate
content-length: 0
Date: Sun, 26 Mar 2023 15:13:55 GMT
Connection: keep-alive
Keep-Alive: timeout=120

$ curl -ki https://node0247:5601/app/kibana
HTTP/1.1 302 Found
location: /login?next=%2Fapp%2Fkibana
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
content-security-policy: script-src 'self' 'unsafe-eval'; worker-src blob: 'self'; style-src 'unsafe-inline' 'self'
kbn-name: <redacted>
kbn-license-sig: cfb29c85f3350e2b9c53087013ead321c865e45ed82e31b237d4c66dd7419919
cache-control: private, no-cache, no-store, must-revalidate
content-length: 0
Date: Sun, 26 Mar 2023 15:14:23 GMT
Connection: keep-alive
Keep-Alive: timeout=120
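
Picking up the TCP idea: if the instances happen to sit behind a Kubernetes-style health checker, a plain TCP connect probe could be sketched like this (hypothetical pod spec fragment; port and timings are illustrative):

# Hypothetical readiness probe fragment: a bare TCP connect against
# Kibana's port, with no interpretation of HTTP status codes.
readinessProbe:
  tcpSocket:
    port: 5601
  initialDelaySeconds: 30   # allow some time for Kibana to boot
  periodSeconds: 10
  failureThreshold: 3

The trade-off is that a TCP check only proves the port is open; Kibana can accept connections while still returning 503s during startup.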

Isn't that only an issue when running multiple instances on the same server?

Yes, that appears to only be important if they are hosted on the same server.
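
For illustration, two instances sharing one host could be kept distinct along the lines of the doc's list (hypothetical names, ports, and paths; server.uuid omitted since it is autogenerated):

# instance 1 kibana.yml
server.name: kibana-1
server.port: 5601
path.data: /var/lib/kibana-1
pid.file: /run/kibana/kibana-1.pid
logging:
  appenders:
    default:
      type: file
      fileName: /var/log/kibana/kibana-1.log

# instance 2 kibana.yml
server.name: kibana-2
server.port: 5602
path.data: /var/lib/kibana-2
pid.file: /run/kibana/kibana-2.pid
logging:
  appenders:
    default:
      type: file
      fileName: /var/log/kibana/kibana-2.log

With one instance per host, the defaults don't collide, so nothing special is needed.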

Hitting the root URL for health probing doesn't seem viable, as it redirects to the login page.

You should be able to just hit the login URL and check for a 2xx. If you hit the login URL and the server isn't ready, I think you'll still get the 503.
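
If that check ends up in a Kubernetes-style probe, a hedged sketch might be (illustrative values; assumes TLS on 5601):

# Hypothetical HTTP readiness probe against the unauthenticated /login page.
# Kubernetes counts 200-399 as success, so a healthy 200 passes and the
# 503 during startup keeps the instance out of rotation.
readinessProbe:
  httpGet:
    path: /login
    port: 5601
    scheme: HTTPS
  periodSeconds: 10
  failureThreshold: 3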

:slight_smile: Of course, silly me:

$ curl -ki https://node0247:5601/login
HTTP/1.1 200 OK

With just this setting aligned across instances:

xpack.encryptedSavedObjects.encryptionKey

and all other xpack.* settings at default values, load balancing across multiple instances seems to work (one Kibana instance per OS instance). Currently using just a TCP connect probe; might enhance with the /login URL later... thanks for the directions!
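
For anyone finding this later, a minimal sketch of what that alignment looks like (placeholder value; the same string must appear in every instance's kibana.yml and be at least 32 characters):

# kibana.yml on every instance behind the load balancer, so saved objects
# encrypted by one instance can be decrypted by any other.
xpack.encryptedSavedObjects.encryptionKey: "replace-with-one-shared-32-plus-character-secret"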
