Logstash Forwarder connection refused in docker container

Hello, I seem to follow the same rules for setting up the forwarder, but every time I run it I get a connection refused. The IP matches the one in the logstash conf file. I have also passed along the certs to the host. Any help on this would be great!
Thanks,

We need more details about your setup, both the logstash-forwarder configuration and the container structure.

My Docker setup is based on https://github.com/spujadas/elk-docker/blob/master/Dockerfile, except that I made minor changes to make it work on CentOS. My logstash-forwarder is set up as follows:
network": {
"servers": [ "(host that i want to get logs from..same as in logstash.conf:5000" ],

"timeout": 15,
"ssl ca": "/etc/certs/logstash-forwarder.crt"

},
"files": [
{

  "fields": { "type": "syslog" }
}

]
}

So, just to be clear: The servers array in the LSF configuration uses the IP address or hostname of the Docker container that runs Logstash?

No, I want to grab the logs from a remote host, so I have it set to the IP of that remote host, not the Docker container.

After trying this out without using Docker, I may have figured out the problem. It had to do with the certs. I read the issue about using the SSL certs, but I'm confused: if I want logstash-forwarder to grab logs from more than one host, how do I go about that if I need to specify the IP the cert should point at?

Maybe we're talking past each other here, but it seems like you're misunderstanding what logstash-forwarder does. It doesn't pull log messages from remote machines. It reads locally accessible files and pushes them to other servers (often a server running Logstash and Elasticsearch).

The way I have it set up now is that the ELK stack (with the forwarder) is on host A, and I want to read the logs from hosts C, D, and E. How would I have to set the forwarder up?

You would run logstash-forwarder on hosts C, D, and E and have it send the logs you're interested in on each host to host A (if that's where you're running Logstash).
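For illustration, a minimal logstash-forwarder config on hosts C, D, and E could look something like this (the host A address, port, and log path below are placeholders you'd substitute with your own values):

{
  "network": {
    "servers": [ "host-a.example.com:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/messages" ],
      "fields": { "type": "syslog" }
    }
  ]
}

Each of C, D, and E would run essentially the same config, with "servers" pointing at host A.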

I have it set up like that: the certs are in place and the forwarders on boxes C, D, and E are all pointing to host A, but still no logs are showing up from the other hosts.

New error: I can see the logs in Kibana, but it is showing a failed TLS handshake, which is also what shows up in the forwarder logs on that remote host.

Can you please show me an example of how the forwarder should be set up in my situation? I feel I'm still not understanding it completely. I want the logs from hosts C, D, and E sent to host A, where I have Elasticsearch, Kibana, Logstash, and the forwarder installed. The forwarder is also installed on hosts C, D, and E. How should the forwarder be set up on host A and on the remote hosts?

I can't help with the TLS handshake problems.

You don't need logstash-forwarder on host A since you're running Logstash there. Logstash can do everything that logstash-forwarder can do and then some. It would help if you'd present your current configuration and the exact error message you get. Copy and paste the relevant log messages.
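For example, on host A itself a plain file input in the Logstash configuration covers what the forwarder would have done locally (the path below is just an assumed example):

input {
  file {
    path => "/var/log/puppet/puppet.log"
    type => "puppet"
  }
}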

logstash-forwarder config:

{
  "network": {
    "servers": [ "IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/puppet/puppet.log"
      ],
      "fields": { "type": "puppet" }
    }
  ]
}

input {
lumberjack {
port => 5000
type => "logs"
ssl_certificate => "/etc/certs/logstash-forwarder.crt"
ssl_key => "/etc/private/logstash-forwarder.key"
}
}

filter {
if [type] == "puppet" {
grok {
break_on_match => false
match => { "message"=>"%{GREEDYDATA:logs}"}

output {
elasticsearch { host => localhost }
stdout { codec => rubydebug }

}

The Logstash configuration snippet as posted is missing at least a couple of closing braces. Is this actually the configuration you're using?
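For reference, with the braces balanced, the filter and output sections would end up looking roughly like this (content otherwise as you posted; I've quoted "localhost" for safety):

filter {
  if [type] == "puppet" {
    grok {
      break_on_match => false
      match => { "message" => "%{GREEDYDATA:logs}" }
    }
  }
}

output {
  elasticsearch { host => "localhost" }
  stdout { codec => rubydebug }
}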

The brackets are there. I am able to do the handshake now after stopping the forwarder on host A. Now the only issue is that the logs are not coming in; they are two days behind.

Current error in the Logstash log:

{:timestamp=>"2015-09-21T14:48:49.450000+0000", :message=>"The error reported is : \n Address already in use - bind - Address already in use"}

You can only have one process listening on port 5000 at any given time. Perhaps you have two Logstash processes running?

I only have one instance running on this box. I killed the PIDs of all the applications and restarted them, and I still get the same error.

You can use lsof to check which process is keeping a port open (e.g. sudo lsof -i :5000). I suggest you carefully examine all files in /etc/logstash/conf.d as I suspect you have a backup file or similar that contains the extra input declaration. Logstash will read all files in that directory.
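For example, something along these lines (the paths assume the default package layout):

sudo lsof -i :5000
ls -l /etc/logstash/conf.d
grep -rn "port => 5000" /etc/logstash/conf.d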