I should say up front that I am not a professional. I have a problem with Logstash: a few days ago it stopped collecting data, and I found an error in the logs. Changes had been made to logstash.conf shortly before it stopped, but I had a backup copy of logstash.conf with which everything originally worked, so I tried running with that backup copy. The error remained. Please tell me what else I can check.
docker-elk-logstash-1 | [2024-02-27T20:17:38,066][INFO ][logstash.runner ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
docker-elk-logstash-1 | [2024-02-27T20:17:38,161][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.10.2", "jruby.version"=>"jruby 9.4.2.0 (3.1.0) 2023-03-08 90d2913fda OpenJDK 64-Bit Server VM 17.0.8+7 on 17.0.8+7 +indy +jit [x86_64-linux]"}
docker-elk-logstash-1 | [2024-02-27T20:17:38,164][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8,
-Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom,
-Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/,
-Xms256m, -Xmx256m, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true,
--add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED,
--add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED,
--add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED,
--add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED,
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
docker-elk-logstash-1 | [2024-02-27T20:17:54,528][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
docker-elk-logstash-1 | [2024-02-27T20:19:15,265][ERROR][logstash.agent ] Failed to execute action {
  :action=>LogStash::PipelineAction::Create/pipeline_id:main,
  :exception=>"LogStash::ConfigurationError",
  :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)",
  :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'",
  "org/logstash/execution/AbstractPipelineExt.java:239:in `initialize'",
  "org/logstash/execution/AbstractPipelineExt.java:173:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'",
  "org/jruby/RubyClass.java:931:in `new'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49:in `execute'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
docker-elk-logstash-1 | [2024-02-27T20:19:15,569][INFO ][logstash.runner ] Logstash shut down.
docker-elk-logstash-1 | [2024-02-27T20:19:15,616][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
docker-elk-logstash-1 | org.jruby.exceptions.SystemExit: (SystemExit) exit
docker-elk-logstash-1 |   at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:795) ~[jruby.jar:?]
docker-elk-logstash-1 |   at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:758) ~[jruby.jar:?]
docker-elk-logstash-1 |   at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]
docker-elk-logstash-1 | Using bundled JDK: /usr/share/logstash/jdk
docker-elk-logstash-1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
Our logstash.conf looks like this:
input {
  beats {
    port => 5044
  }
}

filter {
  if [message] =~ /Error/ {
    grok {
      match => { "message" => ["(?:Error:(?<error_exception>.*))"] }
    }
    mutate {
      add_field => { "[@metadata][zabbix_host_error]" => "%{[fields][hostname]}" }
      add_field => { "[@metadata][zabbix_key_error]" => "gate_error" }
      add_field => { "[@metadata][zabbix_msg_error]" => "%{message}" }
    }
  }
}

output {
  stdout { codec => rubydebug }
} else if [message] =~ /Error/ {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "logstash_internal"
    password => "${LOGSTASH_INTERNAL_PASSWORD}"
    index => "error-%{+yyyy.MM.dd}"
  }
  zabbix {
    zabbix_host => "[@metadata][zabbix_host_error]"
    zabbix_server_host => "my_IP"
    zabbix_server_port => my_port
    zabbix_key => "[@metadata][zabbix_key_error]"
    zabbix_value => "[@metadata][zabbix_msg_error]"
  }
}
I understand that the log points at this error:
message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)"
but I don't see any error there.
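Since the parser gives up at byte 1, before reading any of the actual config, I wondered whether the file could start with an invisible character such as a UTF-8 BOM (bytes EF BB BF), which some editors add when saving. A rough sketch of checking for that on a local copy of the file (the file names below are made up for illustration, not the real container paths):

```shell
# Sketch: detect a UTF-8 BOM (bytes EF BB BF) at the start of a file.
# The two sample files below are illustrations, not the real logstash.conf.
printf '\357\273\277input { }' > /tmp/with_bom.conf   # \357\273\277 = EF BB BF
printf 'input { }' > /tmp/clean.conf

has_bom() {
  # Compare the first three bytes of the file against the BOM sequence.
  [ "$(head -c 3 "$1" | od -An -tx1 | tr -d ' \n')" = "efbbbf" ]
}

has_bom /tmp/with_bom.conf && echo "BOM found"
has_bom /tmp/clean.conf || echo "no BOM"
```

It may also be worth checking inside the container (e.g. with `ls -l` on the mounted path) that the config is actually a regular, non-empty file, since Docker can silently create an empty directory at a mount path that does not exist on the host.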
What I've already tried:
- launching with a known-working config;
- restarting the Docker container;
- completely restarting the cluster;
- docker compose down && docker compose up --build -d.
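As I understand the error message, the parser only accepts whitespace, a `#` comment, or one of the keywords `input`, `filter`, `output` as the very first token of the file. This sketch prints the first non-blank, non-comment token of a config file, which should be one of those three keywords (the sample file here is made up for illustration):

```shell
# Sketch: print the first non-blank, non-comment token of a pipeline
# config; Logstash expects it to be "input", "filter" or "output".
first_token() {
  grep -vE '^[[:space:]]*(#|$)' "$1" | head -n 1 | awk '{print $1}'
}

# Illustrative sample file, not the real logstash.conf:
printf 'input {\n  beats { port => 5044 }\n}\n' > /tmp/sample.conf
first_token /tmp/sample.conf
```

Logstash can also validate a config without starting the pipeline via `bin/logstash --config.test_and_exit -f <file>`; in a docker-elk setup that would presumably be something like `docker compose exec logstash logstash -t -f /usr/share/logstash/pipeline/logstash.conf` (the path is my assumption about where the file is mounted).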