While using pipeline Error: Address already in use Exception

Hello Experts,

When I try to run Logstash using pipelines.yml, I run into the error below.

I don't find any Logstash instance already running:

[root@localhost config]# ps -ef | grep logstash
root 26227 22729 0 09:11 pts/1 00:00:00 grep --color=auto logstash

Error:

[root@localhost config]# /opt/logstash-6.1.3/bin/logstash
2018-03-21 08:54:47,125 main ERROR Unable to locate appender "${sys:ls.log.format}_console" for logger config "root"
2018-03-21 08:54:47,126 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
2018-03-21 08:54:47,126 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling_slowlog" for logger config "slowlog"
2018-03-21 08:54:47,127 main ERROR Unable to locate appender "${sys:ls.log.format}_console_slowlog" for logger config "slowlog"
2018-03-21 08:54:48,957 main ERROR Unable to locate appender "${sys:ls.log.format}_console" for logger config "root"
2018-03-21 08:54:48,958 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
2018-03-21 08:54:48,958 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling_slowlog" for logger config "slowlog"
2018-03-21 08:54:48,959 main ERROR Unable to locate appender "${sys:ls.log.format}_console_slowlog" for logger config "slowlog"
Sending Logstash's logs to /opt/data/logs which is now configured via log4j2.properties
[2018-03-21T08:54:49,173][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash-6.1.3/modules/fb_apache/configuration"}
[2018-03-21T08:54:49,184][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash-6.1.3/modules/netflow/configuration"}
[2018-03-21T08:54:50,035][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.1.3"}
[2018-03-21T08:54:50,744][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-03-21T08:54:56,804][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.27.6:9200/]}}
[2018-03-21T08:54:56,814][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.27.6:9200/, :path=>"/"}
[2018-03-21T08:54:57,044][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.27.6:9200/"}
[2018-03-21T08:54:57,124][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-03-21T08:54:57,128][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-03-21T08:54:57,149][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"cass_log_sizing_2.json"}
[2018-03-21T08:54:57,165][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"cassandra_log", "index_patterns"=>["prd-log-"], "settings"=>{"index.refresh_interval"=>"5s", "index.codec"=>"best_compression", "number_of_shards"=>5, "number_of_replicas"=>0}, "aliases"=>{"logs_write_cas"=>{}}}}
[2018-03-21T08:54:57,220][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/cassandra_log
[2018-03-21T08:54:57,289][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.1.27.6:9200"]}
[2018-03-21T08:54:57,851][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"cassandra-log", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x67e6c5db run>"}
[2018-03-21T08:54:58,170][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-03-21T08:54:58,224][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"cassandra-log"}
[2018-03-21T08:54:58,341][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-03-21T08:54:59,286][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.27.6:9200/]}}
[2018-03-21T08:54:59,289][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.27.6:9200/, :path=>"/"}
[2018-03-21T08:54:59,298][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.27.6:9200/"}
[2018-03-21T08:54:59,304][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-03-21T08:54:59,305][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-03-21T08:54:59,308][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"swift_proxy_log_sizing_2.json"}
[2018-03-21T08:54:59,310][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"swift_proxy_log", "index_patterns"=>["swift-proxy-log-"], "settings"=>{"index.refresh_interval"=>"5s", "index.codec"=>"best_compression", "number_of_shards"=>5, "number_of_replicas"=>0}, "aliases"=>{"logs_write_swift"=>{}}}}
[2018-03-21T08:54:59,316][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/swift_proxy_log
[2018-03-21T08:54:59,329][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.1.27.6:9200"]}
[2018-03-21T08:54:59,414][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"swiftproxy-log", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>375, :thread=>"#<Thread:0x6ae56911 run>"}
[2018-03-21T08:54:59,421][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-03-21T08:54:59,425][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"swiftproxy-log"}
[2018-03-21T08:54:59,434][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-03-21T08:54:59,545][INFO ][logstash.agent ] Pipelines running {:count=>2, :pipelines=>["cassandra-log", "swiftproxy-log"]}
[2018-03-21T08:55:05,782][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:swiftproxy-log
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"1643e1f8253a58f0682eaed58daf10e51214458b68e6a05499533e259ad5cc5f", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_64879677-b4aa-4b54-87ad-87dd78716278", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none",

Your help is appreciated!

Sorry, I forgot to add my pipelines.yml config details.

[root@localhost config]# cat pipelines.yml | head -n 50

# Cassandra log
- pipeline.id: cassandra-log
  pipeline.workers: 2
  pipeline.batch.size: 125
  path.config: "/opt/logstash-6.1.3/config/cassandra-new.conf"

# Swiftproxy log
- pipeline.id: swiftproxy-log
  pipeline.workers: 3
  pipeline.batch.size: 125
  path.config: "/opt/logstash-6.1.3/config/swift_proxy.conf"

Thanks
Chandra

Your pipelines use the same Beats input port. You need to change one of them; you can't have two inputs listening on the same port.

[2018-03-21T08:54:58,224][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"cassandra-log"}
[2018-03-21T08:54:58,341][INFO ][org.logstash.beats.Server] Starting server on port: 5044
...
[2018-03-21T08:54:59,425][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"swiftproxy-log"}
[2018-03-21T08:54:59,434][INFO ][org.logstash.beats.Server] Starting server on port: 5044

Hi @leandrojmp,

How do I change the input port?

I have 2 log files on the same source host, but I have to use 2 different conf files to parse them.

Thanks
Chandra

How do I change the input port?

Change the port number in the beats input as well as the port on the sending side (i.e. one of the Filebeat instances).
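For example, the swiftproxy-log pipeline's Beats input could be moved to its own port. This is only a sketch: the port 5045 is an arbitrary free port, not something from your configs.

```
# swift_proxy.conf — give this pipeline's Beats input its own port
input {
  beats {
    port => 5045   # was 5044, which the cassandra-log pipeline already binds
  }
}
```

The Filebeat instance shipping those logs then has to point at the new port:

```
# filebeat.yml on the host shipping the swift proxy logs
output.logstash:
  hosts: ["10.1.27.6:5045"]
```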

I have 2 log files on the same source host, but I have to use 2 different conf files to parse them.

In your Filebeat configuration, set a field or tag to indicate the kind of log for each prospector and filename pattern. You can then inspect that field or tag in the Logstash configuration to choose between different sets of filters, so you don't need two pipelines at all.
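As a sketch of that approach (the `log_type` field name and the log paths are illustrative, not taken from your setup): tag each prospector in Filebeat, then branch on the field in a single Logstash pipeline.

```
# filebeat.yml — one prospector per log type, each with its own field
filebeat.prospectors:
  - paths: ["/var/log/cassandra/*.log"]
    fields: {log_type: cassandra}
  - paths: ["/var/log/swift/proxy*.log"]
    fields: {log_type: swift_proxy}
```

```
# Single Logstash pipeline: one beats input, filters chosen by the field.
# With the default Filebeat settings the field arrives under [fields].
input { beats { port => 5044 } }
filter {
  if [fields][log_type] == "cassandra" {
    # cassandra filters here
  } else if [fields][log_type] == "swift_proxy" {
    # swift proxy filters here
  }
}
```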

Thanks @magnusbaeck, I appreciate your time!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.