Logstash Beats plugin fatal error

Hello, I'm using a Docker Compose configuration to run the ELK stack. It had been working smoothly until recently, and I have no idea what's causing the error.

The problem is that when I try to start Logstash through my docker-compose file, I end up getting this repeating error:

logstash         | [2020-03-31T08:23:28,442][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
logstash         | [2020-03-31T08:23:34,657][ERROR][logstash.javapipeline    ][main] A plugin had an unrecoverable error. Will restart this plugin.
logstash         |   Pipeline_id:main
logstash         |   Plugin: <LogStash::Inputs::Beats port=>5044, id=>"c5c6160b9fa8b426c530e216155048d6ee6cfc8b5451e0c159a88c3ce60ca5b3", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_353b56a9-31ae-475f-934b-2914fd10febf", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>4>
logstash         |   Error: Address already in use
logstash         |   Exception: Java::JavaNet::BindException
logstash         |   Stack: sun.nio.ch.Net.bind0(Native Method)
logstash         | sun.nio.ch.Net.bind(sun/nio/ch/Net.java:455)
logstash         | sun.nio.ch.Net.bind(sun/nio/ch/Net.java:447)
logstash         | sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
logstash         | io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:130)
logstash         | io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
logstash         | io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1358)
logstash         | io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
logstash         | io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
logstash         | io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
logstash         | io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
logstash         | io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
logstash         | io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
logstash         | io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
logstash         | io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
logstash         | io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
logstash         | io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
logstash         | java.lang.Thread.run(java/lang/Thread.java:834)
logstash         | [2020-03-31T08:23:35,658][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
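
For reference, the Java BindException above is the OS-level EADDRINUSE: something is already listening on the port when the Beats input tries to bind 5044. A minimal Python sketch (the address and port choice here are arbitrary, not taken from my setup) reproduces the same condition:

```python
import errno
import socket

# First socket grabs a free port and starts listening,
# standing in for whatever already holds 5044.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
first.listen()
port = first.getsockname()[1]

# Second socket plays the role of the Beats input trying to bind
# the same port; this fails with EADDRINUSE ("Address already in use").
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
except OSError as exc:
    print(exc.errno == errno.EADDRINUSE)  # True
finally:
    second.close()
    first.close()
```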

This is my logstash.yml file:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.enabled: true

And this is my logstash.conf file:

input {

	#stdin{}

	beats {
		port => "5044"
	}
}

filter {

	grok
	{
		# The unchanged original message
		match => { "message" => "%{GREEDYDATA:event.original}" }
	}

	grok
	{

		break_on_match => true

		match => { "message" => "%{WORD:event.type}:%{SPACE}(?<timestamp_string>%{YEAR} %{MONTH} %{MONTHDAY}  %{TIME}:%{INT})%{SPACE}%{INT:process.type}:%{INT:log.sequence}\nSender:%{SPACE}(?<process.sender.name>[^:]*):(?<process.sender.id>[^\n]*)\nReceiver:%{SPACE}(?<process.receiver.name>[^:]*):(?<process.receiver.id>[^\n]*)\nPrimitive:%{SPACE}%{INT:message.type}\nPID:%{INT:process.pid}%{SPACE}TID:%{INT:process.thread.id}%{SPACE}L:%{INT:log.level.internal}\nSize:%{SPACE}%{INT:message.size}%{GREEDYDATA:message_string}"}

	}

	date
	{
		match => [ "timestamp_string", "yyyy MMM dd  HH:mm:ss:SSS" ]
	}

	mutate
	{
		remove_field => [ "timestamp_string" ]
		replace => { "type" => "ss7trace" }
	}

}

output {

	elasticsearch { 
		hosts => ["http://elasticsearch:9200"] 
		index => "logstash_ss7trace_testing" 
	}

	# For debug purposes
	stdout {
		codec => rubydebug
	}
}
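
As a side note on the date filter: the Joda-style pattern should line up with an equivalent strptime-style parse. A quick Python sketch (the sample timestamp is made up for illustration, not taken from my logs):

```python
from datetime import datetime

# Hypothetical sample in the layout my grok pattern captures:
# YEAR MONTH MONTHDAY  TIME:INT
sample = "2020 Mar 31  08:23:28:442"

# Joda "yyyy MMM dd  HH:mm:ss:SSS" roughly maps to this strptime format;
# note %f right-pads the trailing "442" to microseconds (442 ms).
parsed = datetime.strptime(sample, "%Y %b %d  %H:%M:%S:%f")
print(parsed.isoformat())  # 2020-03-31T08:23:28.442000
```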

As briefly mentioned earlier, I only get this error when running through docker-compose; when I start each container on its own, the error does not occur. The error also seemingly appeared out of nowhere one day, after the entire ELK stack had worked fine for several weeks.

If you'd also like to have a look at the docker-compose file or the full Logstash log, let me know and I'll post them.
