Non-Production Logstash Instance Accepts Filebeat Connection, but not Production

I'm stumped.

I have two ELK stacks running - one in production, one in non-production. Both use identical configs, and I have numerous beats reporting into each.

When I attempt to connect a new filebeat to the production instance of Logstash, I receive the following errors:

2019-07-11T15:07:33.473-0500	ERROR	pipeline/output.go:100	Failed to connect to backoff(async(tcp://logstash.example.com:5044)): dial tcp 10.21.23.200:5044: connectex: No connection could be made because the target machine actively refused it.
2019-07-11T15:07:41.984-0500	ERROR	pipeline/output.go:100	Failed to connect to backoff(async(tcp://logstash.example.com:5044)): dial tcp 10.21.23.200:5044: connectex: No connection could be made because the target machine actively refused it.

However, when I connect to the non-production instance, it connects as expected.

The beats.conf (excluding filters) is:

input {
    beats {
        port => 5044
    }
}

filter {
    # Insert filter here...
}

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        # document_type => "%{[@metadata][type]}"
        ssl => true
        cacert => "/etc/elasticsearch/root-ca.pem"
        user => "logstash"
        password => "logstash"
    }
}

What am I missing?

A firewall blocking the connection?

Yeah - given that they're on separate subnets, I suspected a firewall rule too, but I have other beats on the same subnets reporting in as expected.

I'll check again though.

Firewall rules are OK. I then started to see the same behavior with another beat, which helped me run this down more easily.

It appears that I wrote a bad filter rule, which is applied to the logs generated by that beat. The filter survives a logstash -t config check, but raises a FATAL error at runtime once a matching event hits it. I was also wrong about the configs being identical - I hadn't added this filter to my non-prod instance yet.

The rule is as follows:

grok {
    match => {
        "message" => [
            "^%{DATESTAMP:[@metadata][_timestamp]}\s+%{TZ:[@metadata][_timezone]}>\s+%{LOGLEVEL:log.level}.*$"
        ]
    }
    tag_on_failure => []
}

date {
    # Match line (below) appears to trigger the issue.
    match => ["[@metadata][_timestamp] [@metadata][_timezone]",
              "MM/dd/yyyy HH:mm:ss.SSS ZZZ"]
    target => "@timestamp"
    tag_on_failure => []
}

The error being generated is:

[2019-07-12T09:42:53,836][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>java.lang.IllegalStateException: org.logstash.FieldReference$IllegalSyntaxException: Invalid FieldReference: `[@metadata][_timestamp] [@metadata][_timezone]`, :backtrace=>["org.logstash.execution.WorkerLoop.run(org/logstash/execution/WorkerLoop.java:85)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:440)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:304)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:295)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:274)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:270)", "java.lang.Thread.run(java/lang/Thread.java:748)"]}

That would also explain the original symptom: when the pipeline hits this FATAL error, Logstash shuts down, so nothing is listening on 5044 and new connections get refused. I'll attempt to fix this and post back once I've determined that this is(n't) the issue.

I think you will need to do a mutate+add_field with %{[fieldname]} sprintf references before the date filter.
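
As a rough sketch - the _ts_full field name here is just illustrative, not anything special:

mutate {
    # Combine the two @metadata fields into a single field that the
    # date filter can reference on its own.
    add_field => { "[@metadata][_ts_full]" => "%{[@metadata][_timestamp]} %{[@metadata][_timezone]}" }
}

date {
    match => ["[@metadata][_ts_full]", "MM/dd/yyyy HH:mm:ss.SSS ZZZ"]
    target => "@timestamp"
}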

Good catch - I determined the same thing, and that's almost exactly what I did to resolve it.

But rather than add_field, I did:

mutate { update => { "[@metadata][_timestamp]" => "%{[@metadata][_timestamp]} %{[@metadata][_timezone]}" } }

Same result - I just didn't like adding yet another field when everything under [@metadata] gets dropped on output anyway.
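
The date filter then just matches on the single, now-combined field - along these lines:

date {
    # [@metadata][_timestamp] now holds "<timestamp> <timezone>" thanks to the mutate above
    match => ["[@metadata][_timestamp]", "MM/dd/yyyy HH:mm:ss.SSS ZZZ"]
    target => "@timestamp"
    tag_on_failure => []
}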
