Cannot log in to Kibana for the past few days

Dear community,

For a few days now I have not been able to log in to Kibana anymore (with any user, including elastic).
Here are the Kibana logs when I try to log in:

shwr - kibana | {
    "type": "log",
    "@timestamp": "2021-06-02T12:10:13+00:00",
    "tags": ["info", "plugins", "security", "routes"],
    "pid": 9,
    "message": "Logging in with provider \"basic\" (basic)"
}
shwr - kibana | {
    "type": "log",
    "@timestamp": "2021-06-02T12:10:13+00:00",
    "tags": ["error", "plugins", "security", "session", "index"],
    "pid": 9,
    "message": "Failed to create session value: cluster_block_exception"
}
shwr - kibana | {
    "type": "log",
    "@timestamp": "2021-06-02T12:10:13+00:00",
    "tags": ["error", "plugins", "security", "routes"],
    "pid": 9,
    "message": "ResponseError: cluster_block_exception\n    at onBody (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:333:23)\n    at IncomingMessage.onEnd (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:260:11)\n    at IncomingMessage.emit (events.js:327:22)\n    at endReadableNT (internal/streams/readable.js:1327:12)\n    at processTicksAndRejections (internal/process/task_queues.js:80:21) {\n  meta: {\n    body: { error: [Object], status: 429 },\n    statusCode: 429,\n    headers: {\n      'content-type': 'application/json; charset=UTF-8',\n      'content-length': '435'\n    },\n    meta: {\n      context: null,\n      request: [Object],\n      name: 'elasticsearch-js',\n      connection: [Object],\n      attempts: 0,\n      aborted: false\n    }\n  }\n}"
}
shwr - kibana | {
    "type": "error",
    "@timestamp": "2021-06-02T12:10:13+00:00",
    "tags": [],
    "pid": 9,
    "level": "error",
    "error": {
        "message": "Internal Server Error",
        "name": "Error",
        "stack": "Error: Internal Server Error\n    at HapiResponseAdapter.toError (/usr/share/kibana/src/core/server/http/router/response_adapter.js:121:19)\n    at HapiResponseAdapter.toHapiResponse (/usr/share/kibana/src/core/server/http/router/response_adapter.js:75:19)\n    at HapiResponseAdapter.handle (/usr/share/kibana/src/core/server/http/router/response_adapter.js:70:17)\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:164:34)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:93:5)\n    at handler (/usr/share/kibana/src/core/server/http/router/router.js:124:50)\n    at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:370:32)\n    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:279:9)"
    },
    "url": "https://192.168.10.37:5601/internal/security/login",
    "message": "Internal Server Error"
}
shwr - kibana | {
    "type": "response",
    "@timestamp": "2021-06-02T12:10:13+00:00",
    "tags": [],
    "pid": 9,
    "method": "post",
    "statusCode": 500,
    "req": {
        "url": "/internal/security/login",
        "method": "post",
        "headers": {
            "connection": "upgrade",
            "host": "192.168.10.37:5601",
            "content-length": "179",
            "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0",
            "accept": "*/*",
            "accept-language": "fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3",
            "accept-encoding": "gzip, deflate, br",
            "referer": "https://website/kibana/login?next=%2Fkibana%2F",
            "content-type": "application/json",
            "kbn-version": "7.12.0",
            "origin": "https://website"
        },
        "remoteAddress": "192.168.10.38",
        "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0",
        "referer": "https://website/kibana/login?next=%2Fkibana%2F"
    },
    "res": {
        "statusCode": 500,
        "responseTime": 99,
        "contentLength": 77
    },
    "message": "POST /internal/security/login 500 99ms - 77.0B"
}

I don't really understand that error, but I do get a recurring warning from Elasticsearch:

shwr - elastic | {
    "type": "server",
    "timestamp": "2021-06-02T12:15:06,067Z",
    "level": "WARN",
    "component": "o.e.c.r.a.DiskThresholdMonitor",
    "cluster.name": "showrom-SOC",
    "node.name": "189dcff3b2e1",
    "message": "high disk watermark [90%] exceeded on [fZq9kU2yTN-K6UpjmC1uVw][189dcff3b2e1][/usr/share/elasticsearch/data/nodes/0] free: 28.1gb[7.4%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete",
    "cluster.uuid": "X0IQMqxETYi6MLAGr3tR5A",
    "node.id": "fZq9kU2yTN-K6UpjmC1uVw"
}
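
For reference, the disk usage figures that Elasticsearch actually acts on can be checked with the _cat allocation API. A quick sketch, assuming the same host, port and self-signed certificate as the rest of this post (hence the -k):

$ curl -k -u elastic -XGET "https://localhost:9200/_cat/allocation?v"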

I freed up a bit of space on the system:

/dev/mapper/centos_kiss-root   50G   30G   21G  59% /
devtmpfs                      3.9G     0  3.9G   0% /dev
tmpfs                         3.9G     0  3.9G   0% /dev/shm
tmpfs                         3.9G   43M  3.8G   2% /run
tmpfs                         3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                    1014M  192M  823M  19% /boot
/dev/sda3                     401G  397G  4.2G  99% /root/snapshot
/dev/mapper/centos_kiss-home  376G  348G   29G  93% /home
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/0da460371f00be7777eeb58777d18fa2ef2e02ee16949f2947df3b7bac0c78bf/merged
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/3c22b2d3f91d2252d6cab24adde10641aa1529541b50816689341efe912bc65f/shm
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/da5f150b4b2b9e4be39eba683814d8869d563436f7bc5973b0ddfae858461338/merged
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/01203c7da40609bc1b796a02cd2cac3c6c21204ab0152f87ab66cc1f74335811/shm
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/7e932c92412ffab811dd31526b554f49b320de687ad1616c1a917e96d809d128/merged
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/e4926a6cf46d0d6f10777ecbebccb03dd190e9889b2c4b9adbb7af1c2ce827e7/shm
tmpfs                         782M  4.0K  782M   1% /run/user/0
tmpfs                         782M  4.0K  782M   1% /run/user/1006
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/d07b554fc00c736e18a6e4b33cf1bd9b31b8a26e2fd03d58f678089a45e62fef/merged
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/4749bcc5cca83dfad801d75e5b0dee425fd29b13905baa51c3dfceac27355e78/shm
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/48f3d0b0ed23b0d3587d78f02e1e8518a63c510d4a2bb949e4fd7c4b5f7997b4/merged
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/189dcff3b2e136d93f7e10e89c0e1bad75dae27c17297f791bbd7b2184973e94/shm
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/5cf8c2cfd8b56d4804f8895c26bb173534b1941cca59c53b986802640190255d/merged
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/556b24d64170010c1e7522782795f8c942dd14e1a58f88b9e2fa7727f54cd7d8/merged
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/f145c62634f731e6ee3fb8fd794cf575854698258636da1263b2c322994802a7/shm
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/25d53f1af10fd06e4c9b00698a578eba1b3f200c7002f5bc7932233300b1e79a/shm
overlay                       376G  348G   29G  93% /home/kontron/dockerdata/overlay/5534dbd5068c765478646e7ee049441f5069b2355d2c3be50a53438e9e37a54b/merged
shm                            64M     0   64M   0% /home/kontron/dockerdata/containers/b8329eef5aec9bd92a1f7c53be244f4749b4eb2a973304bf9af4144c4ce6fa99/shm
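
For what it's worth, the 28.1 GB free (7.4%) reported in the warning seems to correspond to the /home filesystem (93% used, where the Docker data lives) rather than /, so the Elasticsearch data directory is probably sitting on that mount. To confirm which host filesystem actually backs the data path, something like this should do (the container name is just a placeholder):

$ docker exec <elasticsearch-container> df -h /usr/share/elasticsearch/data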

29 GB of free space should be more than enough for what I'm storing.
Is the Kibana problem related to the Elasticsearch disk watermarks?
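
From what I understand, once the flood-stage watermark is exceeded, Elasticsearch sets index.blocks.read_only_allow_delete on the indices, and Kibana can then no longer write its session document, which would explain the 429 cluster_block_exception at login. To check whether any index still carries that block, something like this should show it (again with -k for the self-signed certificate):

$ curl -k -u elastic -XGET "https://localhost:9200/_all/_settings/index.blocks.*?pretty"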

I've tried this (from https://stackoverflow.com/a/50609418/9914294):

$ curl -XPUT -H "Content-Type: application/json" https://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' -u elastic

Unfortunately it doesn't solve the problem at all.
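
One detail I'm not sure about: since xpack.security.http.ssl is enabled with a self-signed CA, curl normally refuses the connection unless it is given -k or --cacert, so maybe the command needs to be re-run like this and checked for an acknowledged response:

$ curl -k -u elastic -XPUT "https://localhost:9200/_all/_settings" -H "Content-Type: application/json" -d '{"index.blocks.read_only_allow_delete": null}'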
And the second suggested command is not working, because _cluster is not recognized:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "500mb",
    "cluster.routing.allocation.disk.watermark.high": "250mb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "2gb",
    "cluster.info.update.interval": "1m"
  }
}

PS: My Elasticsearch is a single-node cluster, and each component of the ELK stack runs in a separate Docker container.

Here is my Elasticsearch config:

cluster.name: showrom-SOC
network.host: 0.0.0.0
http.port: 9200
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: none
xpack.security.transport.ssl.certificate: ${CONFIG_DIR}/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: ${CONFIG_DIR}/ca.crt
xpack.security.transport.ssl.key: ${CONFIG_DIR}/elasticsearch.key
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: ${CONFIG_DIR}/elasticsearch.key
xpack.security.http.ssl.certificate_authorities: ${CONFIG_DIR}/ca.crt
xpack.security.http.ssl.certificate: ${CONFIG_DIR}/elasticsearch.crt

Is it a licence problem or an expired trial?
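
To rule that out, I suppose the licence status can be checked directly against Elasticsearch with something like this (although the 429 cluster_block_exception looks disk-related rather than licence-related to me):

$ curl -k -u elastic -XGET "https://localhost:9200/_license?pretty"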

Do you know how to fix the login issue?

Thanks for your help and have a nice day! :smiley:

What version of Elasticsearch are you running?

What output do you get from this command?

curl -uelastic -XGET "http://localhost:9200/_xpack?pretty"

What is the output from this:

curl -uelastic -XGET 'http://localhost:9200/_cluster/health?pretty'

(if your node is disconnected from the cluster, then that might take 30 seconds and then respond with an error)

What does the log file (elasticsearch.log) say?
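
Since each component runs in its own Docker container, the Elasticsearch logs should also be visible with something like this (substitute your actual container name or ID):

docker logs --tail 200 <elasticsearch-container>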
