Hi Team,
I have Logstash on two servers. It seems to be working: indices are getting created and the Logstash service has been running for a few days, but whenever I check the Logstash config syntax it shows the error below. Is there anything wrong with the setup that I should correct?
Below is the error,
i)
# /usr/share/logstash/bin/logstash -f --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2023-01-09 14:05:23.679 [main] runner - Starting from Logstash 8.0, the minimum required version of Java is Java 11; your Java version from /opt/jre1.8.0_221 does not meet this requirement. Please reconfigure your version of Java to one that is supported. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[WARN ] 2023-01-09 14:05:23.688 [main] runner - The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[INFO ] 2023-01-09 14:05:23.690 [main] runner - Starting Logstash {"logstash.version"=>"7.16.2", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 Java HotSpot(TM) 64-Bit Server VM 25.221-b11 on 1.8.0_221-b11 +indy +jit [linux-x86_64]"}
[WARN ] 2023-01-09 14:05:24.196 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2023-01-09 14:05:26.304 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601, :ssl_enabled=>false}
[INFO ] 2023-01-09 14:05:31.941 [Converge PipelineAction::Create<main>] Reflections - Reflections took 103 ms to scan 1 urls, producing 119 keys and 417 values
[WARN ] 2023-01-09 14:05:33.358 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2023-01-09 14:05:34.213 [Converge PipelineAction::Create<main>] elasticsearch - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2023-01-09 14:05:34.755 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es_1:5200", "http://es_2:5200", "http://es_3:5200"]}
[INFO ] 2023-01-09 14:05:35.284 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@es_1:5200/, http://elastic:xxxxxx@es_2:5200/, http://elastic:xxxxxx@es_3:5200/]}}
[WARN ] 2023-01-09 14:05:35.774 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@es_1:5200/"}
[INFO ] 2023-01-09 14:05:35.801 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (7.16.2) {:es_version=>7}
[WARN ] 2023-01-09 14:05:35.805 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[WARN ] 2023-01-09 14:05:35.865 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@es_2:9200/"}
[WARN ] 2023-01-09 14:05:35.910 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@es_3:9200/"}
[INFO ] 2023-01-09 14:05:36.099 [Ruby-0-Thread-10: :1] elasticsearch - Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[WARN ] 2023-01-09 14:05:38.856 [[main]-pipeline-manager] json - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2023-01-09 14:05:39.058 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x1d675b35 run>"}
[INFO ] 2023-01-09 14:05:41.448 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>2.38}
[INFO ] 2023-01-09 14:05:41.498 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2023-01-09 14:05:41.528 [[main]-pipeline-manager] exec - Registering Exec Input {:type=>"pfclient", :command=>"curl -k -u root:xxx https://10.20.98.172:4554/root/abc/ -H 'X-Y-Header: XYZABC'", :interval=>28800, :schedule=>nil}
[INFO ] 2023-01-09 14:05:41.546 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2023-01-09 14:06:45.427 [[main]<beats] Server - Starting server on port: 5044
[ERROR] 2023-01-09 14:06:51.446 [[main]<beats] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"f3388fe3051da92e3004519d1aa3fcc43f0b8e8092789c4d0bed067bf0fa41c6", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_08b65c61-00c8-40a0-b7ee-fca1e2737e2a", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>8>
Error: Address already in use
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:223)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[INFO ] 2023-01-09 14:06:52.448 [[main]<beats] Server - Starting server on port: 5044
^C[WARN ] 2023-01-09 14:06:55.501 [SIGINT handler] runner - SIGINT received. Shutting down.
[WARN ] 2023-01-09 14:07:00.507 [Ruby-0-Thread-77: :1] runner - Received shutdown signal, but pipeline is still waiting for in-flight events
to be processed. Sending another ^C will force quit Logstash, but this may cause
data loss.
[INFO ] 2023-01-09 14:07:02.329 [[main]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2023-01-09 14:07:02.614 [LogStash::Runner] runner - Logstash shut down.
#
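For reference, the config-test invocation I was actually aiming for is the one below (paths are the ones from my setup). I noticed my command above has an extra -f right before --config.test_and_exit, so I am not sure the test ran the way I intended:
# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf --path.settings /etc/logstash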
ii) If I add --path.settings and --path.data, it shows the following:
# /usr/share/logstash/bin/logstash -f --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf --path.settings /etc/logstash/ --path.data /elk/lib/logstash/
Sending Logstash logs to /elk/log/logstash which is now configured via log4j2.properties
[2023-01-09T17:01:46,443][INFO ][logstash.runner ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2023-01-09T17:01:46,469][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.16.2", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 Java HotSpot(TM) 64-Bit Server VM 25.221-b11 on 1.8.0_221-b11 +indy +jit [linux-x86_64]"}
[2023-01-09T17:01:46,941][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-01-09T17:01:46,950][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[2023-01-09T17:01:46,953][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.20.1.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:94) ~[?:?]
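Since the messages mention the Beats port and the data directory already being in use, I also checked (as root) what is holding them. These are just the commands I used, with the port and the path.data values taken from my own setup, and on the assumption that Logstash keeps a .lock file in its data directory:
# ss -ltnp | grep ':5044'
# ls -la /es/lib/logstash/ | grep lock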
Below is the pipeline configuration,
# cat /etc/logstash/conf.d/logstash.conf
input {
beats {
port => 9044
}
exec {
id => "app1"
command => "curl -k -u root:xxx https://10.20.98.172:4554/root/abc/ -H 'X-Y-Header: XYZABC'"
interval => 37777
type => app1
}
}
filter {
if [type] == "app1"
{
json { source => "message" }
split { field => "items" }
json { source => "[items][des]" target => "des" }
json { source => "[des][sum]" target => "sum" }
mutate {
replace => {
"[type]" => "app1"
}
}
}
if [log_type] == "app2" and [app_id] == "app"
{
grok { match => { "message" => "%{SYSLOGBASE} %{GREEDYDATA:json_message}" } }
json { source => "json_message" }
mutate {
replace => {
"[type]" => "app2"
}
}
}
}
output {
if [type] == "app1" {
elasticsearch {
hosts => ['http://es_1:5200', 'http://es_2:5200', 'http://es_3:5200']
index => "app1-%{+YYYY.MM.DD}"
user => elastic
password => xxx
}
}
if [log_type] == "app2" {
elasticsearch {
hosts => ['http://es_1:5200', 'http://es_2:5200', 'http://es_3:5200']
index => "app2"
template_name => "app2"
template_overwrite => "false"
user => elastic
password => xxx
}
}
elasticsearch {
hosts => ['http://es_1:5200', 'http://es_2:5200', 'http://es_3:5200']
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM}"
user => elastic
password => xxx
}
}
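One thing I am not sure about in this config: the app1 index name uses %{+YYYY.MM.DD}. As far as I know the date pattern here is Joda-style, where DD is day-of-year and dd is day-of-month, so I may need to change that line to something like:
index => "app1-%{+YYYY.MM.dd}"
(In January the two happen to produce the same index names, which is probably why I did not notice it.)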
Below is the configuration file,
# cat /etc/logstash/logstash.yml
node.name: ls_1
path.data: /es/lib/logstash
path.logs: /es/log/logstash
log.level: info
#http.host: 127.0.0.1 # The bind address for the metrics REST endpoint.
#http.port: 9600 # The bind port for the metrics REST endpoint.
xpack.monitoring.enabled: True
xpack.monitoring.elasticsearch.hosts: ['http://es_1:5200', 'http://es_2:5200', 'http://es_3:5200']
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: xxx
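Regarding the repeated pipeline.ecs_compatibility warnings in the test output above, my understanding is that they go away once the ECS mode is declared explicitly. A minimal sketch of what I am considering adding to logstash.yml (assuming I want to keep the current, pre-ECS field behaviour):
pipeline.ecs_compatibility: disabled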
As mentioned above, even with the errors shown, the service has been running stably for a few days.
# systemctl status logstash
logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2023-01-05 09:51:13 UTC; 4 days ago
Main PID: 9726 (java)
I can see indices are getting created:
green open app1-2023.01.09 ByejtoAPQkad1zupWzDNpw 1 1 1675 0 1014.8mb 507.5mb
green open app1-2023.01.08 UH2GaSvxS7i7dhozMoy2OA 1 1 2010 0 1.1gb 608.5mb
green open app1-2023.01.07 QZ5TS8smTWeFo_CrpKSX8g 1 1 2010 0 1.1gb 607.8mb
green open %{[@metadata][beat]}-%{[@metadata][version]}-2023.01 TJ5wcSLRTXOlE5CotDvt8Q 1 1 14737 0 8.5gb 4.2gb
green open %{[@metadata][beat]}-%{[@metadata][version]}-2022.12 EN6-zUrpSN2-fooJy4ZfvQ 1 1 10 0 8.8mb 4.4mb
green open app1-2023.01.06 gvf2RhmJSECXJFVQNpM0Rw 1 1 2010 0 1.1gb 608.3mb
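I also notice the indices literally named %{[@metadata][beat]}-%{[@metadata][version]}-... above, so I assume some events reach the last elasticsearch output without those @metadata fields set (probably the exec input events, since as far as I know only the beats input populates them). A guard I am considering for that catch-all output (sketch only, not applied yet):
if [@metadata][beat] {
elasticsearch {
hosts => ['http://es_1:5200', 'http://es_2:5200', 'http://es_3:5200']
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM}"
user => elastic
password => xxx
}
}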
Can someone point out where the issue is?
Thanks,