Kibana URL not working. FATAL Error: [config validation of [elasticsearch].username]: value of "elastic" is forbidden. This is a superuser account that cannot write to system indices that Kibana needs to function. Use a service account token instead

Hi Team,

Kindly help: how can I fix this?

Previously, ELK failed with the error "java.lang.IllegalStateException: A node cannot be upgraded directly from version [7.10.2] to version [8.4.3], it must first be upgraded to version [7.17.0]". Following that, we reinstalled Elasticsearch with version 7.17.0. However, the Kibana version remains 8.4.3.

Is it necessary for both versions (Elasticsearch and Kibana) to match?

I restarted Elasticsearch, Kibana, and Logstash, but my Kibana URL is still down.
Kibana keeps failing, and the following errors have been observed.

Status of ufw: inactive

The server port status is shown below. I can see that port 5601 is not open either:
root@elk:~# netstat -tulpn | grep LISTEN
tcp 0 0* LISTEN 632/systemd-resolve
tcp 0 0* LISTEN 846/sshd: /usr/sbin
tcp6 0 0 :::* LISTEN 571359/java
tcp6 0 0 :::9200 :::* LISTEN 472296/java
tcp6 0 0 :::9300 :::* LISTEN 472296/java
tcp6 0 0 :::22 :::* LISTEN 846/sshd: /usr/sbin

Elastic search logs

root@elk:/var/log/elasticsearch# tail -f wakefit.log
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered( [x-pack-ilm-7.17.0.jar:7.17.0]
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners( [x-pack-core-7.17.0.jar:7.17.0]
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ [x-pack-core-7.17.0.jar:7.17.0]
at java.util.concurrent.Executors$ [?:?]
at [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]
at java.util.concurrent.ThreadPoolExecutor$ [?:?]
at [?:?]
[2022-10-19T21:07:53,794][INFO ][o.e.x.i.IndexLifecycleRunner] [node-1] policy [apm-rollover-30-days] for index [apm-7.10.2-span] on an error step due to a transient error, moving back to the failed step [check-rollover-ready] for execution. retry attempt [194]

Kibana service logs

root@elk:/var/log/elasticsearch# sudo systemctl status kibana

kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2022-10-19 15:53:49 IST; 5h 16min ago
Process: 488438 ExecStart=/usr/share/kibana/bin/kibana (code=exited, status=78)
Main PID: 488438 (code=exited, status=78)
Oct 19 15:53:49 elk systemd[1]: kibana.service: Scheduled restart job, restart counter is at 165.
Oct 19 15:53:49 elk systemd[1]: Stopped Kibana.
Oct 19 15:53:49 elk systemd[1]: kibana.service: Start request repeated too quickly.
Oct 19 15:53:49 elk systemd[1]: kibana.service: Failed with result 'exit-code'.
Oct 19 15:53:49 elk systemd[1]: Failed to start Kibana.

root@elk:/var/log/elasticsearch# sudo journalctl -fu kibana.service

-- Logs begin at Tue 2022-10-11 09:15:11 IST. --
Oct 19 15:53:46 elk kibana[488438]: at bootstrap (/usr/share/kibana/src/core/server/bootstrap.js:99:9)
Oct 19 15:53:46 elk kibana[488438]: at Command.<anonymous> (/usr/share/kibana/src/cli/serve/serve.js:216:5)
Oct 19 15:53:46 elk kibana[488438]: FATAL Error: [config validation of [elasticsearch].username]: value of "elastic" is forbidden. This is a superuser account that cannot write to system indices that Kibana needs to function. Use a service account token instead. Learn more: Service accounts | Elasticsearch Guide [8.0] | Elastic
Oct 19 15:53:46 elk systemd[1]: kibana.service: Main process exited, code=exited, status=78/CONFIG
Oct 19 15:53:46 elk systemd[1]: kibana.service: Failed with result 'exit-code'.
Oct 19 15:53:49 elk systemd[1]: kibana.service: Scheduled restart job, restart counter is at 165.
Oct 19 15:53:49 elk systemd[1]: Stopped Kibana.
Oct 19 15:53:49 elk systemd[1]: kibana.service: Start request repeated too quickly.
Oct 19 15:53:49 elk systemd[1]: kibana.service: Failed with result 'exit-code'.
Oct 19 15:53:49 elk systemd[1]: Failed to start Kibana.

Could you please help me troubleshoot this? Kindly let us know if any other information is required.

Kibana might be using the "elastic" user ID. If so, try replacing it with kibana_system (or another user).

@swchandu kindly review the kibana.yml file below and help me fix it. "elastic" is the username I use; it was working fine for me.

server.port: 5601 ""
server.name: "elk-server"
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "RTMRpVBdtJwPhHR9"
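If you want to keep basic auth rather than a token, one option is to set a password for the built-in `kibana_system` user and point Kibana at that instead. A sketch, assuming security is enabled and the `elastic` credentials from your kibana.yml still authenticate against Elasticsearch (`<new-kibana-password>` is a placeholder of your choosing):

```shell
# Set a password for the built-in kibana_system user via the change-password API.
curl -u elastic:RTMRpVBdtJwPhHR9 \
  -X POST "http://localhost:9200/_security/user/kibana_system/_password" \
  -H 'Content-Type: application/json' \
  -d '{"password": "<new-kibana-password>"}'
```

Then in kibana.yml replace the two credential lines with `elasticsearch.username: "kibana_system"` and `elasticsearch.password: "<new-kibana-password>"`, and restart Kibana with `sudo systemctl restart kibana`.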

You would need to use another ID, not elastic.

I don't have any additional accounts. Please advise on how to accomplish this.

Also, since my Kibana version is 8.4.3, can we upgrade my Elasticsearch version from 7.17 to 8.4.3?

I believe Kibana and Elasticsearch should be on the same version.
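A quick way to confirm which versions are actually running on the box (paths assume a deb/rpm install; adjust if yours differs, and add `-u elastic:<password>` to curl if security is enabled):

```shell
# Elasticsearch version, as reported by the node itself:
curl -s http://localhost:9200 | grep '"number"'

# Kibana version, from the bundled CLI:
/usr/share/kibana/bin/kibana --version
```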

Welcome to our community! :smiley: I wanted to point out a few things if you don't mind!

Please format your code/logs/config using the </> button, or markdown style back ticks. It helps to make things easy to read which helps us help you :slight_smile:

You need to, yes. You can run different minor versions for a short term, but Elasticsearch 7.X is not compatible with Kibana 8.X long term. The support matrix shows you what is compatible.

As it mentions, you cannot use the elastic superuser as this is a reserved user.
See Security APIs | Elasticsearch Guide [8.0] | Elastic for how to create new users.
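Alternatively, a service account token (as the error message suggests) avoids putting a password in kibana.yml at all. A sketch, assuming a deb/rpm install of Elasticsearch 7.17 or later; `my-kibana-token` is just an example token name:

```shell
# Create a file-based token for the built-in elastic/kibana service account.
# The command prints the token value once, so copy it immediately.
sudo /usr/share/elasticsearch/bin/elasticsearch-service-tokens create elastic/kibana my-kibana-token
```

Then remove `elasticsearch.username` and `elasticsearch.password` from kibana.yml, set `elasticsearch.serviceAccountToken: "<token printed above>"`, and restart Kibana. File-based tokens created this way are stored in `config/service_tokens` on that Elasticsearch node.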

Sure @warkolm

Currently I have restored the snapshot and am debugging the issue. I will reach out if any help is required. Thanks for your update.


I have restored the snapshot and am trying to debug the issue.
I have observed that the Elasticsearch service failed. I'm unable to find the error logs, and the logs don't appear to be updating; "journalctl -xe" is not showing any entries.

The rest of the services (Kibana) are running.

Could you help me debug this? My Elasticsearch and Kibana are installed on the same server.
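When `journalctl -xe` shows nothing useful, filtering the journal by unit and checking the log directory directly often turns up the real error. A few commands that may help (the log file name depends on your `` setting):

```shell
# Last 50 journal entries for the Elasticsearch unit only:
sudo journalctl -u elasticsearch.service -n 50 --no-pager

# Current unit state and most recent exit code:
sudo systemctl status elasticsearch.service

# File logs, if the service got far enough to write them:
sudo tail -n 100 /var/log/elasticsearch/*.log
```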

Elastic search logs

root@elk:~# tail -f /var/log/elasticsearch/wakefit.log 
	at org.elasticsearch.env.NodeEnvironment.lambda$new$0( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.env.NodeEnvironment.<init>( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.node.Node.<init>( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.node.Node.<init>( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.bootstrap.Bootstrap.setup( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.bootstrap.Bootstrap.init( ~[elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.bootstrap.Elasticsearch.init( ~[elasticsearch-7.10.2.jar:7.10.2]
	... 6 more

ELK server port status

root@elk:~# netstat -tulpn | grep LISTEN
tcp        0      0 *               LISTEN      636/systemd-resolve 
tcp        0      0    *               LISTEN      842/sshd: /usr/sbin 
tcp        0      0  *               LISTEN      30788/node          
tcp6       0      0 :::5044                 :::*                    LISTEN      799/java            
tcp6       0      0 :::22                   :::*                    LISTEN      842/sshd: /usr/sbin 
tcp6       0      0          :::*                    LISTEN      799/java            

Elasticsearch config file

root@elk:~# cat /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
# Please consult the documentation for further information on configuration options:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
 wakefit
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
 node-1
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
    - /var/lib/elasticsearch
    - /mnt/elk_volume_1/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
#bootstrap.memory_lock: true
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
# Set a custom port for HTTP:
http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["", "[::1]"]
#discovery.seed_hosts: ["host1", "host2"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
#cluster.initial_master_nodes: ["node-1"]
# For more information, consult the discovery and cluster formation module documentation.
# ---------------------------------- Gateway -----------------------------------
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
# For more information, consult the gateway module documentation.
# ---------------------------------- Various -----------------------------------
# Require explicit names when deleting indices:
discovery.type: single-node
#action.destructive_requires_name: true
http.cors.enabled:  true
#discovery.type: single-node

Can you share more of your Elasticsearch logs please.

@swchandu @warkolm Thanks a lot for your help.
The above issue is fixed after changing the service account access.

I have restored the snapshot. Will it be possible to restore the dashboards as before?

@warkolm @leandrojmp

Could you please help troubleshoot this issue? Once I opened the Kibana URL, I could see events only.

Logs are not being fetched from the server.
Please check the logs below from the dashboard:

6,332 hits
Nov 17, 2022 @ 20:12:13.704 - Nov 17, 2022 @ 20:27:13.704

Time	_source
	Nov 17, 2022 @ 20:27:06.699	@timestamp:Nov 17, 2022 @ 20:27:06.699 event.duration:52.1 event.dataset:system.process event.module:system metricset.period:10,000 service.type:system ecs.version:1.6.0 agent.type:metricbeat agent.version:7.10.0 agent.hostname:ip-10-0-1-69 agent.ephemeral_id:e717c31f-ca32-45e7-9d95-3a057871d4bb cloud.provider:aws cloud.machine.type:c5.4xlarge cloud.region:ap-south-1 cloud.availability_zone:ap-south-1a process.args:/var/www/, /var/www/, --access-logfile, /var/log/gunicorn/access.log, --error-logfile, /var/log/gunicorn/error.log, --timeout, 240, --log-level, debug, --workers, 50, --enable-stdio-inheritance, --capture-output, --bind, unix:/run/gunicorn.sock, wakefit.wsgi:application process.ppid:22127 process.pgid:22127 process.working_directory:/var/www/ process.executable:/usr/bin/python3.6 system.process.fd.limit.hard:4,096 system.process.fd.limit.soft:1,024 system.process.state:sleeping,180 system.process.cpu.start_time:Nov 17, 2022 @ 19:39:12.000 system.process.cmdline:/var/www/ /var/www/ --access-logfile /var/log/gunicorn/access.log --error-logfile /var/log/gunicorn/error.log --timeout 240 --log-level debug --workers 50 --enable-stdio-inheritance --capture-output --bind unix:/run/gunicorn.sock wakefit.wsgi:application system.process.cgroup.memory.path:/system.slice/gunicorn.service system.process.cgroup.memory.mem.failures:0 system.process.cgroup.memory.mem.limit.bytes:8EB system.process.cgroup.memory.mem.usage.max.bytes:25.5GB system.process.cgroup.memory.mem.usage.bytes:20.3GB system.process.cgroup.memory.memsw.failures:0 system.process.cgroup.memory.memsw.limit.bytes:0B system.process.cgroup.memory.memsw.usage.bytes:0B system.process.cgroup.memory.memsw.usage.max.bytes:0B system.process.cgroup.memory.kmem.failures:0 system.process.cgroup.memory.kmem.limit.bytes:8EB system.process.cgroup.memory.kmem.usage.bytes:161.5MB system.process.cgroup.memory.kmem.usage.max.bytes:185.8MB system.process.cgroup.memory.kmem_tcp.usage.bytes:0B 
system.process.cgroup.memory.kmem_tcp.usage.max.bytes:0B system.process.cgroup.memory.kmem_tcp.failures:0 system.process.cgroup.memory.kmem_tcp.limit.bytes:8EB system.process.cgroup.memory.stats.inactive_file.bytes:599.1MB system.process.cgroup.memory.stats.pages_out:22,834,561 system.process.cgroup.memory.stats.pages_in:28,118,673 system.process.cgroup.memory.stats.swap.bytes:0B system.process.cgroup.memory.stats.cache.bytes:1.4GB system.process.cgroup.memory.stats.page_faults:41,171,856 system.process.cgroup.memory.stats.hierarchical_memsw_limit.bytes:0B system.process.cgroup.memory.stats.inactive_anon.bytes:0B system.process.cgroup.memory.stats.active_anon.bytes:18.8GB system.process.cgroup.memory.stats.mapped_file.bytes:924KB system.process.cgroup.memory.stats.rss.bytes:18.8GB system.process.cgroup.memory.stats.active_file.bytes:819.8MB system.process.cgroup.memory.stats.unevictable.bytes:0B system.process.cgroup.memory.stats.hierarchical_memory_limit.bytes:8EB system.process.cgroup.memory.stats.major_page_faults:198 system.process.cgroup.memory.stats.rss_huge.bytes:0B system.process.cgroup.blkio.path:/system.slice/gunicorn.service,963 system.process.cgroup.path:/system.slice/gunicorn.service system.process.cgroup.cpu.cfs.shares:1,024,000 system.process.cgroup.cpu.stats.throttled.ns:0 system.process.cgroup.cpu.stats.throttled.periods:0 system.process.cgroup.cpu.stats.periods:0 system.process.cgroup.cpu.path:/system.slice/gunicorn.service system.process.cgroup.cpuacct.path:/system.slice/gunicorn.service,288,027,409,844 system.process.cgroup.cpuacct.stats.system.ns:1,621,130,000,000 system.process.cgroup.cpuacct.stats.user.ns:35,494,060,000,000 system.process.cgroup.cpuacct.percpu.1:1,767,683,453,400 system.process.cgroup.cpuacct.percpu.2:2,426,385,434,573 system.process.cgroup.cpuacct.percpu.3:2,360,895,309,350 system.process.cgroup.cpuacct.percpu.4:2,398,474,846,273 system.process.cgroup.cpuacct.percpu.5:2,433,103,281,476 
system.process.cgroup.cpuacct.percpu.6:2,396,369,634,670 system.process.cgroup.cpuacct.percpu.7:1,573,104,978,704 system.process.cgroup.cpuacct.percpu.8:2,544,746,992,937 system.process.cgroup.cpuacct.percpu.9:2,418,113,299,650 system.process.cgroup.cpuacct.percpu.10:2,707,133,024,542 system.process.cgroup.cpuacct.percpu.11:2,478,944,970,407 system.process.cgroup.cpuacct.percpu.12:2,547,332,471,461 system.process.cgroup.cpuacct.percpu.13:2,314,417,078,545 system.process.cgroup.cpuacct.percpu.14:2,452,692,969,286 system.process.cgroup.cpuacct.percpu.15:1,621,403,958,028 system.process.cgroup.cpuacct.percpu.16:2,847,225,706,542 system.process.memory.size:1.1GB system.process.memory.rss.pct:0.75% system.process.memory.rss.bytes:235.6MB system.process.memory.share:39.8MB host.containerized:false host.ip:, fe80::a9:3bff:fec7:382 host.mac:02:a9:3b:c7:03:82 host.hostname:ip-10-0-1-69 host.architecture:x86_64 host.os.codename:bionic host.os.platform:ubuntu host.os.version:18.04.4 LTS (Bionic Beaver) host.os.kernel:5.4.0-1088-aws _id:ONQXhoQBX1mpuHxX9Npe _type:_doc _index:metricbeat-7.10.0-2022.10.25-000001 _score: -
	Nov 17, 2022 @ 20:27:06.699	@timestamp:Nov 17, 2022 @ 20:27:06.699 event.dataset:system.process event.module:system event.duration:52.1 metricset.period:10,000 host.mac:02:a9:3b:c7:03:82 host.hostname:ip-10-0-1-69 host.architecture:x86_64 host.os.kernel:5.4.0-1088-aws host.os.codename:bionic host.os.platform:ubuntu host.os.version:18.04.4 LTS (Bionic Beaver) host.containerized:false host.ip:, fe80::a9:3bff:fec7:382

Please start a new topic for your other question :slight_smile:

Sure @warkolm Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.