Nginx 502 Bad Gateway, and Kibana restarting

My stack is not working; when I log in I receive an nginx "502 Bad Gateway" message.
I checked the Kibana service and found it continuously restarting.
Tailing the Kibana logs, I found:
di@dar-elk-7:~$ tail /var/log/kibana/kibana.log
tail: cannot open '/var/log/kibana/kibana.log' for reading: Permission denied
di@dar-elk-7:~$ sudo tail /var/log/kibana/kibana.log
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["info","savedobjects-service"],"pid":43929,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 575ms."}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["info","savedobjects-service"],"pid":43929,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 562ms."}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["info","savedobjects-service"],"pid":43929,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 34ms."}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["info","savedobjects-service"],"pid":43929,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 36ms."}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["error","savedobjects-service"],"pid":43929,"message":"[.kibana] Unexpected Elasticsearch ResponseError: statusCode: 429, method: PUT, url: /.kibana_7.17.0_001/_mapping?timeout=60s error: [cluster_block_exception]: index [.kibana_7.17.0_001] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];,"}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["fatal","root"],"pid":43929,"message":"Error: Unable to complete saved object migrations for the [.kibana] index. Please check the health of your Elasticsearch cluster and try again. Unexpected Elasticsearch ResponseError: statusCode: 429, method: PUT, url: /.kibana_7.17.0_001/_mapping?timeout=60s error: [cluster_block_exception]: index [.kibana_7.17.0_001] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];,\n at migrationStateActionMachine (/usr/share/kibana/src/core/server/saved_objects/migrationsv2/migrations_state_action_machine.js:164:13)\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Promise.all (index 0)\n at SavedObjectsService.start (/usr/share/kibana/src/core/server/saved_objects/saved_objects_service.js:181:9)\n at Server.start (/usr/share/kibana/src/core/server/server.js:330:31)\n at Root.start (/usr/share/kibana/src/core/server/root/index.js:69:14)\n at bootstrap (/usr/share/kibana/src/core/server/bootstrap.js:120:5)\n at Command. (/usr/share/kibana/src/cli/serve/serve.js:229:5)"}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["info","plugins-system","standard"],"pid":43929,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":43929,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2023-10-12T03:54:22+02:00","tags":["error","savedobjects-service"],"pid":43929,"message":"[.kibana_task_manager] Unexpected Elasticsearch ResponseError: statusCode: 429, method: PUT, url: /.kibana_task_manager_7.17.0_001/_mapping?timeout=60s error: [cluster_block_exception]: index [.kibana_task_manager_7.17.0_001] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];,"}
{"type":"log","@timestamp":"2023-10-12T03:54:52+02:00","tags":["warning","plugins-system","standard"],"pid":43929,"message":""eventLog" plugin didn't stop in 30sec., move on to the next."}
di@dar-elk-7:~$
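
From the fatal error it looks like Kibana cannot finish its saved objects migration because the .kibana indices carry a read-only-allow-delete block. If I understand it correctly, I should be able to confirm the block and check disk usage with something like this (just a guess on my side, assuming Elasticsearch listens on localhost:9200 without TLS or authentication):

# show disk usage per data node
curl -s "localhost:9200/_cat/allocation?v"

# show the settings of the blocked index, including any read_only_allow_delete block
curl -s "localhost:9200/.kibana_7.17.0_001/_settings?pretty"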

I can see that disk usage has exceeded the flood-stage watermark, so what can I do now to regain access and then delete my old logs? (Sorry for not being a developer; I'm a networker, so my dev skills are limited.)
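
Would the right approach be something like the rough sketch below? The index pattern is only a placeholder (I don't know the real names of my log indices), and I'm assuming the same localhost:9200 endpoint as above:

# delete old time-based indices to free disk space (pattern is a guess, adjust to the real index names)
curl -X DELETE "localhost:9200/logstash-2023.01.*"

# then clear the read-only-allow-delete block from all indices
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' \
  -d '{ "index.blocks.read_only_allow_delete": null }'

I also read that Elasticsearch 7.4 and later removes the block automatically once disk usage drops back below the high watermark, so the PUT may be unnecessary, but I'd like to be sure.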
TIA
