I cannot log in to Elastic

Hi,
my disk space is full
and I cannot log in to the Elastic web UI.
How can I clear the disk cache?

Please help me.

How many nodes do you have?
What do the logs show? Please read this about how to format your posts.

I have one node
and I use NetFlow and Filebeat.

Hi @miladmohabati

What version of Elasticsearch?

Do you have security enabled?

HTTPS? Authentication?

I would stop the NetFlow input for now.
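
For example, if the NetFlow data is arriving through Filebeat's netflow module on this host (an assumption), stopping the Filebeat service pauses ingest while you investigate:

sudo systemctl stop filebeat    # resume later with 'sudo systemctl start filebeat'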

From the host, run:

curl "http://localhost:9200/_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r"

and

curl "http://localhost:9200/_cat/health"

If you have HTTPS and authentication,

Then

curl -k -u elastic "https://..."

Hi my friend,
my Elasticsearch version is 8.11.1.
Yes, HTTPS.

After running curl "http://localhost:9200/_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r" I get:

curl: (52) Empty reply from server

Okay, thanks. That means security is enabled.
So in that case
you need to use HTTPS and authentication:

curl -k -u elastic "https://localhost:9200/_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r"

Thanks,
and here is the output:

name     du     dt   dup hp    hc     rm rp r
srvelk 25gb 95.8gb 26.08 40 3.1gb 15.6gb 99 cdfhilmrstw

curl "http://localhost:9200/_cat/health"
epoch      timestamp cluster        status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1701622380 16:53:00 tooba-siem yellow 1 1 34 34 0 0 1 0 - 97.1%

Now,
how can I clear the cache?

This says that your disk is not full... dup (disk used percent) is only 26.08%.
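
(dup is simply du divided by dt: 25 GB / 95.8 GB ≈ 26%, which matches the 26.08 shown.)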

So why do you think your disk is full?

What OS are you on?

You can run this

curl -k -u elastic "https://localhost:9200/_cat/indices/*?v&s=pri.store.size:desc

That will show your indices in descending order of size...

You can pick some indices to DELETE, but I suspect this may not be the trouble.

This could be caused by something else. Do you mean you cannot log into Kibana?

Hi my friend,

oh, I forgot to say, I restored a snapshot backup :upside_down_face: but the hard drive was full.

Ubuntu Server.

This command did not work.

Sorry, I missed the last double quote:

curl -k -u elastic "https://localhost:9200/_cat/indices/*?v&s=pri.store.size:desc"

Well, according to this

name     du     dt   dup hp    hc     rm rp r
srvelk 25gb 95.8gb 26.08 40 3.1gb 15.6gb 99 cdfhilmrstw

your hard drive is not full... it is only 26% full, so I am not sure what your problem is.

health status index                                                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
yellow open   .ds-filebeat-8.11.1-2023.11.16-000001                          A7Y5ZlG7QheXryHgK0fQpA   1   1   23867648            0     12.9gb         12.9gb       12.9gb
green  open   .internal.alerts-observability.logs.alerts-default-000001      W-wiCs-7ToaRr9KjOjC0Iw   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-observability.uptime.alerts-default-000001    NAS42JsdQbqhUTnzI6dvcw   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-ml.anomaly-detection.alerts-default-000001    oe2borCxQ520NlD6Ok-Y2w   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-observability.slo.alerts-default-000001       r8oGIwF-Rb-OEIHNuEPmcg   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-observability.apm.alerts-default-000001       Wrgp26bbTlGTo6vYO3EOCQ   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-observability.metrics.alerts-default-000001   iJ252YZ3Ty-ZWbx8MQihmA   1   0          0            0       249b           249b         249b
green  open   .kibana-observability-ai-assistant-conversations-000001        SbjXxyw9Q2u_WedT563dCw   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-observability.threshold.alerts-default-000001 jLIF61xpTRieUynHu-YCMw   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-security.alerts-default-000001                OoAsfZ4NQGWUTPRNXt1klw   1   0          0            0       249b           249b         249b
green  open   .kibana-observability-ai-assistant-kb-000001                   wGU1lhc3ReGtaCQMYNKdkw   1   0          0            0       249b           249b         249b
green  open   .internal.alerts-stack.alerts-default-000001                   XTeuNnW1RECTf797ioMpoQ   1   0          0            0       249b           249b         249b

So you have one big index from filebeat.

And your disk does not appear to be full.

You can delete that index, but you will lose all your data, and I don't know that that's necessary.
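
Note that .ds-filebeat-8.11.1-2023.11.16-000001 is the backing index of a data stream, so the clean way is to delete the data stream itself. A hedged example, assuming the stream is named filebeat-8.11.1 as the backing-index name suggests (this permanently removes the data):

curl -k -u elastic -X DELETE "https://localhost:9200/_data_stream/filebeat-8.11.1"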

So let's try to figure out why you can't log into Kibana.

Exactly what error are you getting from Kibana?
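
As a starting point (assuming Kibana was installed from the deb package and runs under systemd), check its service log; it usually names the failing index or a disk-watermark error:

sudo journalctl -u kibana -n 50 --no-pager    # last 50 log lines from the Kibana service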

I've had the same issue as OP in the past (pro tip: really pay attention to your disk usage alarms). If my hot tier filled up, it was impossible to log into Kibana, even with the cloud admin account. I'd need to use the API to remove or roll over an index and reboot the deployment before I could log in. I'm assuming it's because Kibana can't write to an index it needs (I have not seen it happen with warm nodes), but that's a wild guess.
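
For reference, the rollover mentioned above can be done through the rollover API. A hedged example against this deployment, again assuming the data stream is named filebeat-8.11.1:

curl -k -u elastic -X POST "https://localhost:9200/filebeat-8.11.1/_rollover"

Rolling over only starts a new backing index; it frees space once you then delete the old backing index, which is no longer being written to.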

My disk is full:

df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              1.6G  1.3M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   96G   92G     0 100% /
tmpfs                              7.9G     0  7.9G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sda2                          2.0G  129M  1.7G   8% /boot
tmpfs                              1.6G  4.0K  1.6G   1% /run/user/0

Which disk is the Elasticsearch data on? /dev/mapper/ubuntu--vg-ubuntu--lv?
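
You can check directly (assuming the default deb-package data path /var/lib/elasticsearch; adjust if path.data was changed in elasticsearch.yml):

df -h /var/lib/elasticsearch    # shows which filesystem holds the data directory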

Your root filesystem is completely full: 0 bytes available...

So you only have 1 big index

yellow open .ds-filebeat-8.11.1-2023.11.16-000001 A7Y5ZlG7QheXryHgK0fQpA 1 1 23867648 0 12.9gb 12.9gb 12.9gb

So you can delete that if you want but you will lose all your data...

There must be other items on your disk; Elasticsearch only takes ~13 GB. Perhaps you can clean up something else, log files for example?

curl -k -u elastic "https://localhost:9200/_cat/indices/*?v&s=pri.store.size:desc"
Enter host password for user 'elastic':
curl: (7) Failed to connect to localhost port 9200 after 1 ms: Connection refused

So the command that worked above no longer works?

I can't help much when you just run a command and show me the results with no other explanation.

I would check to see if elasticsearch is still running

sudo systemctl status elasticsearch

Also, if your file system has 96 GB and Elasticsearch is only taking up ~13 GB (so with 92 GB used, roughly 79 GB is something else), I would see if there's something else on that file system that you could clean up.

systemctl status elasticsearch.service
× elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2023-12-10 09:21:53 +0330; 11h ago
Docs: https://www.elastic.co
Process: 1589 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILUR>
Main PID: 1589 (code=exited, status=1/FAILURE)
CPU: 2.417s

Dec 10 09:21:53 srvelk systemd-entrypoint[1589]: at java.base/java.nio.file.Files.createTempDirectory(Files.java:1017)
Dec 10 09:21:53 srvelk systemd-entrypoint[1589]: at org.elasticsearch.server.cli.ServerProcess.createTempDirectory(ServerProcess.java:268)
Dec 10 09:21:53 srvelk systemd-entrypoint[1589]: at org.elasticsearch.server.cli.ServerProcess.setupTempDir(ServerProcess.java:260)
Dec 10 09:21:53 srvelk systemd-entrypoint[1589]: at org.elasticsearch.server.cli.ServerProcess.createProcess(ServerProcess.java:203)
Dec 10 09:21:53 srvelk systemd-entrypoint[1589]: at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:104)
Dec 10 09:21:53 srvelk systemd-entrypoint[1589]: ... 7 more
Dec 10 09:21:53 srvelk systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 09:21:53 srvelk systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Dec 10 09:21:53 srvelk systemd[1]: Failed to start Elasticsearch.
Dec 10 09:21:53 srvelk systemd[1]: elasticsearch.service: Consumed 2.417s CPU time.
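
That stack trace fits the full disk: startup dies in ServerProcess.createTempDirectory, which fails when / has no space left. A hedged example of reclaiming some space on Ubuntu before restarting (assuming the systemd journal and the apt cache are among the usual suspects; verify with du first):

sudo journalctl --vacuum-size=200M   # shrink archived journal logs to ~200 MB
sudo apt-get clean                   # delete cached .deb packages
sudo systemctl start elasticsearch   # then try starting Elasticsearch again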

You can use this command to find the 5 largest directories on the root filesystem:
du -xh / 2>/dev/null | sort -rh | head -5