Issue with Logstash

[/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:385] elasticsearch - retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})

I have 5,000 files that need to be processed every five minutes, and I have about 1.7 GB available.

Logstash runs fine for a couple of hours and then crashes with the above error. I don't see any data in Kibana after that. It keeps recurring, and I'm not sure what is causing it.

Below is my Logstash configuration:

input {
  file {
    path => "/home/shared/msdp/LogStashOutputFormatted/SES_VG1/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "sesvg1"
  }
  file {
    path => "/home/shared/msdp/LogStashOutputFormatted/SES_VG1_Disk/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "sesvg1disk"
  }
  file {
    path => "/home/shared/msdp/LogStashOutputFormatted/SES_VG2/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "sesvg2"
  }
  file {
    path => "/home/shared/msdp/LogStashOutputFormatted/SPP_VG1/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "sppvg1"
  }
  file {
    path => "/home/shared/msdp/LogStashOutputFormatted/SPP_VG1_Disk/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "sppvg1disk"
  }
  file {
    path => "/home/shared/msdp/LogStashOutputFormatted/SPP_VG2/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "sppvg2"
  }
}

filter {
  if [type] == "sesvg1disk" or [type] == "sppvg1disk" {
    csv {
      separator => ","
      columns => ["nodeName","Filesystem","Size","Used","Avail","Utilization","MountedOn"]
    }
  }
  else if [type] == "sppvg1" {
    csv {
      separator => "/"
      columns => ["nodeName","APIName","Value","Share","Total"]
    }
  }
  else if [type] == "sppvg2" {
    csv {
      separator => ","
      columns => ["nodeName","APIName","Value","Share","Total"]
    }
  }
  else {
    csv {
      separator => ","
      columns => ["nodeName","APIName","Status","Share","Total"]
    }
  }
  mutate {
    convert => {
      "Filesystem"  => "string"
      "Size"        => "string"
      "Used"        => "string"
      "Avail"       => "string"
      "Utilization" => "integer"
      "MountedOn"   => "string"
      "nodeName"    => "string"
      "APIName"     => "string"
      "Status"      => "string"
      "Value"       => "string"
      "Share"       => "float"
      "Total"       => "integer"
    }
  }
}

output {
  if [type] == "sesvg1" {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "beatsesvg1"
    }
  }
  else if [type] == "sesvg1disk" {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "beatsesvg1disk"
    }
  }
  else if [type] == "sppvg1" {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "beatsppvg1"
    }
  }
  else if [type] == "sppvg1disk" {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "beatsppvg1disk"
    }
  }
  else if [type] == "sppvg2" {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "beatsppvg2"
    }
  }
  else {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "beatsesvg2"
    }
  }
}

This error typically means that you are running out of disk space and have exceeded the 95% flood-stage watermark.
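
Assuming your Elasticsearch node is the one on localhost:9200 from your config, the cat allocation API will show how full each node's disk is (see the disk.percent column):

curl -X GET "localhost:9200/_cat/allocation?v"

When a node crosses the flood-stage watermark (95% by default), Elasticsearch puts the index.blocks.read_only_allow_delete block on every index that has a shard on that node, which is exactly the 403 you are seeing.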

Moved it to the Logstash category.

Thanks for responding.

My disk utilization is at 92 percent, and I did run the following command:

curl -X PUT "localhost:9200/*/_settings" -H 'Content-Type: application/json' -d'{"index.blocks.read_only_allow_delete": null}'

but it didn't help.
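
For what it's worth, one way to confirm whether the block is still applied (assuming a single local cluster on port 9200) is to list the block settings on all indices:

curl -X GET "localhost:9200/_all/_settings/index.blocks.*?pretty"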

Did you mean that every time disk usage hits 95 percent it will stop working?

Is there a workaround apart from increasing disk space?

Indexing can result in a lot of merging taking place, which will make the amount of used disk space go up and down. As running out of disk space can have serious consequences, I would recommend adding more by scaling the cluster up or out.
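
For completeness, the flood-stage threshold itself is a cluster setting, so it can be raised temporarily while you add capacity, but that only postpones the problem rather than fixing it. A rough sketch (the 97% value is only an example, not a recommendation):

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'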

I have 30 GB in a new partition. Would changing path.data and path.logs to point to different paths help?

If you only have a single node, adding this as an additional data path may not help, as I believe Elasticsearch will not relocate shards between data paths on the same node.
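
To be clear about what "additional data path" means here: it is just listing more than one directory under path.data in elasticsearch.yml, for example (the paths below are made up):

path.data:
  - /var/lib/elasticsearch
  - /mnt/newpartition/elasticsearch

Existing shards stay on the old path; as far as I know only newly created shards can end up on the new one, which is why this may not free up space on the partition that is already at 92%.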

Thanks
