Hi,
My Logstash service starts without errors, but then it stops with "Failed with result 'exit-code'",
so I run Logstash directly instead of as a service.
After sending about 5 events, the following error occurs:
[INFO ] 2020-07-27 10:41:09.254 [[main]>worker2] elasticsearch - retrying failed action with response code: 429 ({"type"=>"cluster_block_exception", "reason"=>"index [logstash-2020.07.27] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"})
[INFO ] 2020-07-27 10:41:09.255 [[main]>worker2] elasticsearch - Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
I tried these solutions, but they don't work for me:
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}

PUT logstash-2020.07.27/_settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
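For context, the "index read-only / allow delete" block in the error above is what Elasticsearch applies when a node crosses the flood-stage disk watermark (95% disk usage by default), so checking disk usage is a good first step. A quick check, assuming Elasticsearch is reachable on localhost:9200:

curl -s 'localhost:9200/_cat/allocation?v'
curl -s 'localhost:9200/_cluster/settings?pretty&include_defaults=true&flat_settings=true' | grep disk

The first command shows disk.used, disk.avail and disk.percent per node; the second lists the current disk-based allocation settings, including the watermarks.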
After deleting the index, only 3 or 4 entries can be added to it before the remaining entries hit the error above!
Can anyone help me?
warkolm (Mark Walkom) July 27, 2020, 8:01am
What version are you running?
What is the output from _cat/nodes?v and _cat/indices?v?
Please format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you.
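If it helps, these can also be run from a terminal; a minimal example, assuming Elasticsearch is on localhost:9200:

curl -s 'localhost:9200/_cat/nodes?v'
curl -s 'localhost:9200/_cat/indices?v'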
I'm running version 7.8.0.
The result of _cat/indices is:
yellow open elastalert_stat_status iOOTNQGEQBq8bBSSSoNQyA 1 1 6 0 29.5kb 29.5kb
yellow open elastalert_status_status 26KgVzZYRjuGDt4E -yiBQ 1 1 316 0 84.6kb 84.6kb
yellow open elastalert_status OFWCLY9FSvykESzsDvFodQ 1 1 14 0 32.7kb 32.7kb
yellow open elastalert_status_past 8VD63-WNTgaMHiLv_aza7A 1 1 0 0 208b 208b
green open .apm-agent-configuration e_kurIzPQp-jGMuKXgRlNg 1 0 0 0 208b 208b
yellow open elastalert_status_silence Yv-Se1rzSVS_9Ga9Xd3CIA 1 1 0 0 208b 208b
green open .kibana_1 BzA68BuVQBGa5CmVBd-R6w 1 0 129 9 119.5kb 119.5kb
green open .kibana-event-log-7.8.0-000001 tysDNrARSbmFsI3nC_q0aA 1 0 8 0 40.9kb 40.9kb
green open .apm-custom-link u8Tsm5ofRH6UI6iR7Ko3mQ 1 0 0 0 208b 208b
yellow open logstashtest RRK-IjU0Q929BFnZi7LA_w 1 1 1 0 5.1kb 5.1kb
green open .kibana_task_manager_1 NoaSsieASfOHmECNWed5Hg 1 0 5 8 67kb 67kb
yellow open logstash-test KClvmfynTQCa9fHLzroYhQ 1 1 13 0 35.1kb 35.1kb
yellow open logstash-2020.07.27 xiocNGiHQbekCKdx38owNA 1 1 3 0 8.9kb 8.9kb
yellow open elastalert_stat_error -NfeNANeT0K5H4liVBB4xg 1 1 1 0 10.4kb 10.4kb
yellow open elastalert_status_error afCFFG6JRwy82oz_YgbK0A 1 1 5 0 29.8kb 29.8kb
yellow open customer rb3uuSK0TseWp90yG6nimQ 1 1 1 0 3.5kb 3.5kb
And the result of _cat/nodes is:
127.0.0.1 17 97 14 0.72 0.68 0.55 dilmrt * mypc-Compaq-Elite-8300-SFF
My Logstash config is:
input { stdin { } }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
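For reference, a pipeline like this can be checked and run from the Logstash install directory as follows (the config file name is just an example):

bin/logstash -f stdin-to-es.conf --config.test_and_exit
bin/logstash -f stdin-to-es.conf

The first command only validates the config; the second starts the pipeline and reads events from stdin.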
Thanks
Low free disk space was the main problem. I solved it by adding the following line to elasticsearch.yml, so Elasticsearch keeps using the disk even when it is more than 95% full:
cluster.routing.allocation.disk.threshold_enabled: false
warkolm (Mark Walkom) July 28, 2020, 5:16am
You are likely to run into bigger issues with that, as Elasticsearch will completely stop if there is no disk space available.
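A less drastic alternative, sketched here only as an example (the percentages are placeholders, not recommendations from this thread), is to keep the disk check enabled and raise the watermarks instead:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}

On 7.4 and later the read_only_allow_delete block should be released automatically once disk usage drops back below the high watermark, so freeing disk space is usually the cleaner fix.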
system (system) Closed August 25, 2020, 5:18am
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.