Hello Logstash Community,
I’m encountering a QueueRuntimeException in Logstash with the message: "data to be written is bigger than page capacity"
My configuration is as follows:
logstash.yml
pipeline.ordered: auto
config.support_escapes: true
path.logs: /path/logstash
http.enabled: true
http.port: 1234
http.host: localhost
pipelines.yml
- pipeline.id: tc-server-usage-analyse
  path.config: path/tc-server-usage-analyse.cfg
  queue.type: persisted
tc-server-usage-analyse.cfg
input {
  # Runs at 25 minutes past every hour; the exec input emits the
  # command's entire output as a single event.
  exec {
    command => '/path/monitoring_core.ksh -c /path/globalconfiguration.properties -e tc_server_usage_analyse.pl'
    schedule => "25 * * * *"
  }
}
filter {
  # Only process events whose message looks like a JSON object.
  if [message] =~ "^\{.*\}[\s\S]*$" {
    json {
      source => "message"
      target => "parsed_json"
      remove_field => "message"
    }
    # Fan out one event per element of the app array.
    split {
      field => "[parsed_json][app]"
      target => "serverusageanalyse"
      remove_field => [ "parsed_json" ]
    }
    mutate {
      remove_field => [ "[event][original]" ]
      add_field => { "instance" => "${instance}" }
    }
  } else {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => "host"
    ilm_pattern => "{now/d}-000001"
    ilm_rollover_alias => "app-monitoring-serverusageanalyse"
    ilm_policy => "app-monitoring-common-policy"
    doc_as_upsert => true
    document_id => "%{[serverusageanalyse][uniqueId]}"
    user => "username"
    password => "password"
    compression_level => 1
    timeout => 300
    data_stream => false
  }
}
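From what I understand, the persisted queue stores events in fixed-size pages (64mb each by default), and a single event larger than queue.page_capacity triggers exactly this exception. Since exec emits the whole script output as one event, that output presumably exceeds the page size. One adjustment I was considering is raising the page size for this pipeline in pipelines.yml; the 256mb below is only a guess at what my events need, and as far as I know queue.page_capacity must stay below queue.max_bytes:

- pipeline.id: tc-server-usage-analyse
  path.config: path/tc-server-usage-analyse.cfg
  queue.type: persisted
  queue.page_capacity: 256mb
  queue.max_bytes: 1024mb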
Is raising queue.page_capacity the right way to address a single event exceeding the page capacity of the persisted queue, or are there settings or adjustments that handle large output from the exec input more effectively?
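Alternatively, since the persisted queue sits between inputs and filters, the event is written to the queue before my split filter ever runs, so splitting at the filter stage can't shrink what hits the queue. I was therefore wondering whether the script should instead write newline-delimited JSON to a file and let a file input pick it up, so each line becomes its own small event. A rough sketch of what I had in mind, assuming the script can be changed accordingly (the file path, sincedb location, and delete-after-read behavior are hypothetical):

input {
  file {
    path => "/path/tc_server_usage_analyse.ndjson"
    mode => "read"                      # consume each file once, batch-style
    file_completed_action => "delete"   # remove the file after it is read
    sincedb_path => "/path/sincedb_tc_server_usage"
    codec => "json"                     # one JSON object per line => one event
  }
}

Would that be a reasonable direction, or is it overkill?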
Any guidance is greatly appreciated!
Thanks in advance!