TCP output not working

Hello,

I'd like to send HTTP logs over TCP from one Logstash instance (logstash A, output) to another (logstash B, input) with the following configuration:

logstash A

    output {
      tcp {
        host => "xx.xx.xx.xx"
        port => 9090
      }
    }

logstash B (with IP address xx.xx.xx.xx)

    input {
      tcp {
        port => 9090
      }
    }

    output {
      s3 {
        access_key_id => "<%= @key %>"
        secret_access_key => "<%= @secret %>"
        endpoint_region => "eu-west-1"
        bucket => "<%= @s3bucket %>"
        format => "json"
        size_file => 500
      }
    }

I'd like the logs from logstash A to end up in an S3 bucket via TCP. I've implemented this, but unfortunately it isn't working.

logstash A runs Logstash 1.4.2 on Windows; logstash B runs Logstash 1.4.5 on Linux.

I ran netstat -anp | grep 9090 on the logstash B machine to verify that the connection had been established, and got the following output:

    tcp6 0 0 :::9090 :::* LISTEN 14212/java
    tcp6 0 0 xx.xx.xx.xx:9090 a.a.a.a:55393 ESTABLISHED 14212/java
    tcp6 0 0 xx.xx.xx.xx:9090 b.b.b.b.:52363 ESTABLISHED 14212/java
    tcp6 0 0 xx.xx.xx.xx:9090 c.c.c.c:51567 ESTABLISHED 14212/java
    tcp6 0 0 xx.xx.xx.xx:9090 d.d.d.d.:54160 ESTABLISHED 14212/java
    unix 3 [ ] STREAM CONNECTED 9090 1033/dbus-daemon /var/run/dbus/system_bus_socket

There are no error messages in the logs. Can someone please tell me how I can get this working? Thanks!

Looks okay. Isolate things by temporarily replacing the s3 output with a simple stdout output. Does that work? It should. Only then should you move on to s3. You'll probably want to crank up the logging with --verbose or even --debug.
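To make that concrete, a minimal test output on logstash B could look like the sketch below (rubydebug is just one convenient codec choice for eyeballing events):

    output {
      stdout {
        codec => rubydebug
      }
    }

If events show up on stdout, the TCP link is fine and the problem is on the s3 side (credentials, bucket, region, or the size_file threshold simply not being reached yet).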

(I'm assuming the <%= ... %> stuff is replaced with real values at some point.)
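One more thing worth checking once the stdout test works: if memory serves, the tcp output and tcp input don't necessarily default to the same codec, so events can arrive as raw JSON strings rather than parsed events. Setting a matching codec explicitly on both ends rules that out; a sketch, reusing your addresses and ports:

    # logstash A
    output {
      tcp {
        host => "xx.xx.xx.xx"
        port => 9090
        codec => json_lines
      }
    }

    # logstash B
    input {
      tcp {
        port => 9090
        codec => json_lines
      }
    }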