Hi everyone,
Background Information:
I have a Dockerfile that builds an image for running my Logstash. The problem is that running Logstash this way never writes a .sincedb_* file. In my past experience with Logstash, running ./bin/logstash -e automatically generated a .sincedb_* file in the .../data/plugins/inputs/file directory once the Logstash process terminated, but with Docker that doesn't happen.
Questions:
- How can I specify in my Dockerfile the directory that Logstash is authorized to write to? I found this code online (you can see it in my Dockerfile down below; my reading of what it is meant to produce is sketched right after this list):
RUN echo 'chown -R logstash /usr/share/logstash/data/plugins/inputs/file' >> docker-entrypoint.sh
RUN echo 'exec "$@"' >> docker-entrypoint.sh
- Is my Logstash configuration correct for sincedb_path and sincedb_write_interval?
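For context, here is my reading of the entrypoint script those two RUN lines are supposed to build up; the shebang and the script's final location inside the image are assumptions on my part, not something I have verified:

#!/bin/bash
# give the logstash user ownership of the sincedb directory before startup
chown -R logstash /usr/share/logstash/data/plugins/inputs/file
# then hand control to whatever command the container was started with (the Logstash process)
exec "$@"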
Dockerfile:
FROM docker.elastic.co/logstash/logstash:7.0.1
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
RUN rm -f /usr/share/logstash/config/pipeline.yml
RUN rm -r /usr/share/logstash/data
RUN /usr/share/logstash/bin/logstash-plugin install logstash-filter-json && \
/usr/share/logstash/bin/logstash-plugin install logstash-input-cloudwatch_logs
ADD pipeline/ /usr/share/logstash/pipeline
ADD config/ /usr/share/logstash/config
ADD data/ /usr/share/logstash/data
RUN echo 'chown -R logstash /usr/share/logstash/data/plugins/inputs/file' >> docker-entrypoint.sh
RUN echo 'exec "$@"' >> docker-entrypoint.sh
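An alternative I am considering instead of patching docker-entrypoint.sh is to fix ownership at build time. This is only a sketch and I have not tested it; in particular I am assuming the base image has a logstash user and group with exactly those names:

FROM docker.elastic.co/logstash/logstash:7.0.1
USER root
RUN mkdir -p /usr/share/logstash/data/plugins/inputs/file && \
    chown -R logstash:logstash /usr/share/logstash/data/plugins/inputs/file
USER logstash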
Logstash configuration:
input {
  cloudwatch_logs {
    log_group_prefix => true
    log_group => ["/aws-glue/crawlers"]
    region => "${AWS_REGION}"
    start_position => "beginning"
    sincedb_path => "data/plugins/inputs/file/.sincedb_123"
    sincedb_write_interval => "3"
  }
}
filter {...}
output {...}
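Based on the error below, I am guessing that sincedb_write_interval is not a valid option for the cloudwatch_logs input (it looks like a file-input setting) and that only sincedb_path is accepted, so the variant I am thinking of trying drops it and uses an absolute path; the exact path is my assumption:

input {
  cloudwatch_logs {
    log_group_prefix => true
    log_group => ["/aws-glue/crawlers"]
    region => "${AWS_REGION}"
    start_position => "beginning"
    # absolute path inside the container; the file name .sincedb_123 is arbitrary
    sincedb_path => "/usr/share/logstash/data/plugins/inputs/file/.sincedb_123"
  }
}

I am also not sure whether a relative sincedb_path like the one in my configuration above is resolved against the Logstash home directory, which is why this variant uses an absolute path.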
Error:
kourosh@itecoation-Lenovo-ideapad-330S-15IKB:~/Desktop/lambda_logs_to_es$ docker run cloudwatch
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-06-20T06:45:30,060][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-06-20T06:45:30,076][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-06-20T06:45:30,469][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.0.1"}
[2019-06-20T06:45:30,496][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7e9c15e8-2331-40f4-9f06-0b488d673fe5", :path=>"/usr/share/logstash/data/uuid"}
[2019-06-20T06:45:46,110][ERROR][logstash.inputs.cloudwatch_logs] Unknown setting 'sincedb_write_interval' for cloudwatch_logs
[2019-06-20T06:45:46,128][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:pipeline1, :exception=>"LogStash::ConfigurationError", :message=>"Something is wrong with your configuration.", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:86:in `config_init'", "/usr/share/logstash/logstash-core/lib/logstash/inputs/base.rb:60:in `initialize'", "org/logstash/plugins/PluginFactoryExt.java:255:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:117:in `buildInput'", "org/logstash/execution/JavaBasePipelineExt.java:50:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-06-20T06:45:46,582][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-06-20T06:45:51,485][INFO ][logstash.runner ] Logstash shut down.