I have the following files, which work perfectly and do exactly what I expect.
My `.env` file:

```
COMPOSE_PROJECT_NAME=wp
LOGSTASH_DEBUG_SCHEDULE="* * * * *"
```
My `docker-compose.yml`:

```yaml
version: "3.8"

networks:
  default:
    name: dmarc
    external: false

services:
  logstash01:
    image: docker.elastic.co/logstash/logstash:8.11.3
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - "./pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro"
      - "./conf.d:/etc/logstash/conf.d:ro"
      - "./logstash-docker-only.yml:/usr/share/logstash/config/logstash.yml"
      - "./certs:/usr/share/logstash/certs"
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASS=changeme
      - ELASTIC_HOST=es01
      - ELASTIC_CA_FILE=/usr/share/logstash/certs/ca/ca.crt
      - LOGSTASH_DEBUG_SCHEDULE=${LOGSTASH_DEBUG_SCHEDULE}
```
My `./pipelines.yml`:

```yaml
- pipeline.id: disk
  path.config: "/etc/logstash/conf.d/disk.conf"
```
My `./conf.d/disk.conf`:

```
input {
  elasticsearch {
    schedule => '${LOGSTASH_DEBUG_SCHEDULE:"0 * * * *"}'
    hosts => "${ELASTIC_HOST}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASS}"
    index => "alerts-general"
    query => '{"query":{"bool":{"must":[{"match":{"alert_type":"disk_usage"}},{"range":{"rule_date":{"lte":"now","gte":"now-61m"}}}]}},"sort":[{"rule_date":{"order":"desc"}}]}'
    ssl_enabled => true
    ssl_certificate_authorities => "${ELASTIC_CA_FILE}"
    ssl_verification_mode => "full"
  }
}

output {
  stdout {}
}
```
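The `schedule` line above leans on Logstash's `${VAR:default}` substitution syntax. My mental model of that lookup (a rough sketch in Python for illustration, not Logstash's actual implementation; the `resolve` helper and its regex are mine) is:

```python
import re

def resolve(ref: str, env: dict) -> str:
    """Rough sketch of Logstash's ${VAR:default} lookup: the default
    is used only when VAR is missing from the environment, and any
    characters after the colon (quotes included) are taken literally."""
    m = re.fullmatch(r"\$\{(\w+)(?::(.*))?\}", ref)
    if not m:
        return ref  # not a variable reference; leave untouched
    name, default = m.group(1), m.group(2) or ""
    return env.get(name, default)

# With the variable set, the environment value wins:
print(resolve('${LOGSTASH_DEBUG_SCHEDULE:"0 * * * *"}',
              {"LOGSTASH_DEBUG_SCHEDULE": "* * * * *"}))  # * * * * *
# With it unset, the default (quotes and all) is what comes back:
print(resolve('${LOGSTASH_DEBUG_SCHEDULE:"0 * * * *"}', {}))  # "0 * * * *"
```

So my expectation was: variable present → use it; variable absent → fall back to `0 * * * *`.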
My `./logstash-docker-only.yml`:

```yaml
# Nothing, please leave this file empty
```
When I run `docker-compose up --build -d`, wait a few minutes, and then run `docker logs wp-logstash01-1`, Logstash correctly pulls data from my Elasticsearch `alerts-general` index every minute.
DEFAULT VALUE CRASHES
However, if I comment out the `LOGSTASH_DEBUG_SCHEDULE` line in my `.env` file and run `docker-compose down && docker-compose up --build -d`, wait a few minutes, and then check `docker logs wp-logstash01-1`, I get the error below:
```
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-mixin-scheduler-1.0.1-java/lib/logstash/plugin_mixins/scheduler/rufus_impl.rb:187:in `do_schedule'
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/rufus-scheduler-3.9.1/lib/rufus/scheduler.rb:231:in `schedule_cron'
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-mixin-scheduler-1.0.1-java/lib/logstash/plugin_mixins/scheduler/rufus_impl.rb:63:in `__schedule'
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-mixin-scheduler-1.0.1-java/lib/logstash/plugin_mixins/scheduler/rufus_impl.rb:37:in `cron'
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-elasticsearch-4.18.0/lib/logstash/inputs/elasticsearch.rb:330:in `run'
/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:414:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405:in `block in start_input'
[2024-03-12T22:50:09,720][ERROR][logstash.javapipeline ][disk][affdb6799fd6a18d2077977ba792a0a6da07d489b9469be732894a3306258e40] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:disk
Plugin: <LogStash::Inputs::Elasticsearch password=><password>, hosts=>["es01"], ssl_enabled=>true, query=>"{\"query\":{\"bool\":{\"must\":[{\"match\":{\"alert_type\":\"disk_usage\"}},{\"range\":{\"rule_date\":{\"lte\":\"now\",\"gte\":\"now-61m\"}}}]}},\"sort\":[{\"rule_date\":{\"order\":\"desc\"}}]}", index=>"alerts-general", ssl_verification_mode=>"full", id=>"affdb6799fd6a18d2077977ba792a0a6da07d489b9469be732894a3306258e40", user=>"elastic", ssl_certificate_authorities=>["/usr/share/logstash/certs/ca/ca.crt"], enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_c19cd286-8ac0-4c1d-a28c-d07cb02012db", enable_metric=>true, charset=>"UTF-8">, size=>1000, retries=>0, scroll=>"1m", docinfo=>false, docinfo_fields=>["_index", "_type", "_id"], connect_timeout_seconds=>10, request_timeout_seconds=>60, socket_timeout_seconds=>60, ssl=>false, ssl_certificate_verification=>true>
Error: cannot schedule, scheduler is down or shutting down
Exception: Rufus::Scheduler::NotRunningError
```
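One thing I wonder about: as I understand compose interpolation, a `VAR=${VAR}` entry under `environment:` with the variable commented out of `.env` does not leave `VAR` unset in the container; it sets it to an empty string. A toy illustration of that substitution (my own helper for this question, not compose code):

```python
import re

def compose_interpolate(value: str, dotenv: dict) -> str:
    """Toy version of docker-compose's ${VAR} substitution: an unset
    variable is replaced by an empty string (compose also prints a
    warning in that case, which this sketch skips)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: dotenv.get(m.group(1), ""), value)

# .env line commented out -> the container still receives the variable,
# just with an empty value:
print(compose_interpolate("LOGSTASH_DEBUG_SCHEDULE=${LOGSTASH_DEBUG_SCHEDULE}", {}))
# LOGSTASH_DEBUG_SCHEDULE=
```

So the container may be seeing an empty `LOGSTASH_DEBUG_SCHEDULE` rather than no variable at all, but I don't know if that explains the crash.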
What did I do wrong?