I have been trying to deploy Logstash via Helm (my setup is described in more detail HERE) for two weeks now and I cannot get it to work; it keeps crashing. I need to configure logging so that I can see what is going wrong.
I am currently putting the following into the Helm values override YAML file to configure logging:
config:
  log.level: info
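For reference, I assume the same `config` map accepts other logstash.yml keys; a sketch (untested, my assumption about the chart) that bumps verbosity further — note that `config.debug` only takes effect together with `log.level: debug`:

```yaml
config:
  log.level: debug
  # config.debug dumps the compiled pipeline configuration to the log;
  # it is ignored unless log.level is debug.
  config.debug: "true"
```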
Even with the above in place, I am still seeing very little logging, as shown below. I suspect that log4j is not running with all loggers set to the info level.
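For comparison, my understanding (a sketch from memory, not necessarily the file this chart ships) is that the stock log4j2.properties drives the root logger off the `ls.log.level` system property, so `log.level: info` should reach every logger that does not explicitly override it:

```properties
# Sketch of a minimal log4j2.properties where every logger inherits
# the level passed via --log.level (exposed as the ls.log.level property).
status = error
name = LogstashConfig
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = plain_console
```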
2019/06/20 11:19:23 Setting 'queue.max_bytes' from environment.
2019/06/20 11:19:23 Setting 'path.config' from environment.
2019/06/20 11:19:23 Setting 'queue.drain' from environment.
2019/06/20 11:19:23 Setting 'http.port' from environment.
2019/06/20 11:19:23 Setting 'http.host' from environment.
2019/06/20 11:19:23 Setting 'path.data' from environment.
2019/06/20 11:19:23 Setting 'queue.checkpoint.writes' from environment.
2019/06/20 11:19:23 Setting 'queue.type' from environment.
2019/06/20 11:19:23 Setting 'log.level' from environment.
2019/06/20 11:19:23 Setting 'config.reload.automatic' from environment.
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-06-20T11:19:46,185][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-20T11:19:46,206][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2019-06-20T11:20:03,489][WARN ][logstash.runner ] SIGTERM received. Shutting down.
Is there a way to set this properly?
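One thing I can do to inspect the effective levels: Logstash's monitoring API (default port 9600) exposes the per-logger levels at runtime and accepts PUTs to change them. A sketch — the host and port here are assumptions based on the defaults:

```shell
# List the runtime level of every logger; falls back to a placeholder
# when no Logstash instance is reachable on this host.
levels=$(curl -s --max-time 3 "http://localhost:9600/_node/logging?pretty" \
  || echo '{"note": "logstash not reachable on localhost:9600"}')
echo "$levels"

# To raise a single logger without restarting, something like:
#   curl -X PUT "http://localhost:9600/_node/logging" \
#        -H 'Content-Type: application/json' \
#        -d '{"logger.logstash.inputs.kafka": "DEBUG"}'
```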
On a slightly different note: if I manually deploy the exact same Docker image the chart uses to Kubernetes with kubectl, overriding the default command with /bin/bash, attach to the container, write a configuration file, and start Logstash by hand using the procedure below, not only does it work, but the log output is different from the Helm deployment's.
#################
# CREATE CONTAINER
#################
override='{
  "spec": {
    "template": {
      "metadata": {
        "annotations": {
          "iam.amazonaws.com/role": "s3-logger-rules"
        }
      }
    }
  }
}'
kubectl run --generator=deployment/apps.v1 test-logstash \
-n mynamespace --stdin --tty --rm \
--overrides "$override" \
--image docker.elastic.co/logstash/logstash-oss:7.1.1 bash
#############################
# CREATE CONFIG & START LOGSTASH
#############################
cat >pipeline/new-pipe.conf<<END
input {
  kafka {
    bootstrap_servers => "kafka.atp-system.svc.cluster.local:9092"
    id => "system-logger-input"
    group_id => "s3-logger"
    topics => ["alert","message"]
    consumer_threads => 3
    codec => json { charset => "UTF-8" }
    decorate_events => true
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    add_field => {
      "kafka" => "%{[@metadata][kafka]}"
    }
  }
}
output {
  s3 {
    codec => json
    id => "system-logger-output"
    prefix => "kafka/%{+YYYY}/%{+MM}/%{+dd}-%{+HH}:%{+mm}"
    time_file => 5
    size_file => 5242880
    region => "ap-northeast-1"
    bucket => "logging"
    canned_acl => "private"
  }
}
END
/usr/share/logstash/bin/logstash --log.level=info -f /usr/share/logstash/pipeline/new-pipe.conf
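As a side note, before starting Logstash for real I usually sanity-check the file with --config.test_and_exit; sketched here with a guard so it degrades gracefully when run outside the container:

```shell
# Validate the pipeline syntax without actually starting the pipeline.
LS_HOME=/usr/share/logstash
if [ -x "$LS_HOME/bin/logstash" ]; then
  result=$("$LS_HOME/bin/logstash" --config.test_and_exit \
    -f "$LS_HOME/pipeline/new-pipe.conf" 2>&1)
else
  result="logstash not found at $LS_HOME (run this inside the container)"
fi
echo "$result"
```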