AWS S3 Output - Failed to open TCP Connection to

Having issues getting the pipeline running

[main] Pipeline error {:pipeline_id=>"main", :exception=>#<TypeError: Failed to open TCP connection to : (no implicit conversion of nil into String)>,

I'm failing to understand where the best place to investigate further would be.

Additional info:

[2024-05-02T14:25:02,223][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}

We have the identical configuration running in a different AWS Region without any issues.

Hi @Alex_Jamieson, welcome to the community.

We can't really help unless you post your pipeline configuration...

Also, please let us know what version you're on.

How are you starting logstash?

Are you using pipelines.yml?

Running on a Windows 2022 AWS EC2 instance

Installed via chocolatey, version 8.13.2

Due to the errors we are having, I've begun starting Logstash manually; previously it ran as a service.

PS C:\Logstash> .\bin\logstash.bat -f .\config\logstash.conf

We are not using pipelines.yml but just a conf file

PS C:\Logstash> cat .\config\logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
     s3 {
        region => "${REGION}"    
        bucket => "<redacted>"  
        prefix => "/AWSLogs/${AWS_ACCOUNT}/${SHORT_REGION}/ec2logs/${INSTANCEID}/%{+YYYY}/%{+MM}/%{+dd}"
        role_arn => "arn:${PARTITION}:iam::${AWS_ACCOUNT}:role/<redacted>"
        rotation_strategy => "time"   
        time_file => 10
        validate_credentials_on_root_bucket => false
        canned_acl => "bucket-owner-full-control"
}

We are able to run with this config in us-east-1 without issues but when running in AWS GovCloud (us-gov-west-1) we get this error

I've removed the s3 plugin and Logstash works correctly.
I've manually copied files from the host to the S3 buckets via the AWS CLI without issue.

I would turn on debug logs.
Sorry, I'm not an expert on GovCloud.

Not sure if it's a typo, but you appear to be missing the last } on the output block.

Also, I notice your command mixes a full path and a relative path... so it may not be running the config you think it is.

This is running with TRACE actually, we've been debugging this for multiple days now.

More logs

[2024-05-02T17:18:50,175][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_utilization) baseline -> FlowCapture{nanoTimestamp=485888497500 numerator=0.0 denominator=701332.813802}
[2024-05-02T17:18:50,180][WARN ][org.logstash.execution.AbstractPipelineExt] Metric registration error: `worker_millis_per_event` could not be registered in namespace `[:stats, :pipelines, :main, :plugins, :outputs, :d24135e86efd05e4658c3f7a1e5b2852d1b64e5f270945e03de9f2c8e9234c26, :flow]`
[2024-05-02T17:18:50,180][WARN ][org.logstash.execution.AbstractPipelineExt] Metric registration error: `worker_utilization` could not be registered in namespace `[:stats, :pipelines, :main, :plugins, :outputs, :d24135e86efd05e4658c3f7a1e5b2852d1b64e5f270945e03de9f2c8e9234c26, :flow]`
[2024-05-02T17:18:50,181][DEBUG][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main"}
[2024-05-02T17:18:50,215][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2024-05-02T17:18:50,230][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<TypeError: Failed to open TCP connection to : (no implicit conversion of nil into String)>, :backtrace=>["C:/Logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:1020:in `block in connect'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/timeout-0.4.1/lib/timeout.rb:186:in `block in timeout'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/timeout-0.4.1/lib/timeout.rb:193:in `timeout'", "C:/Logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:1016:in `connect'", "C:/Logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:995:in `do_start'", "C:/Logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb:990:in `start'", "C:/Logstash/vendor/jruby/lib/ruby/stdlib/delegate.rb:87:in `method_missing'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/seahorse/client/net_http/connection_pool.rb:307:in `start_session'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/seahorse/client/net_http/connection_pool.rb:100:in `session_for'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/seahorse/client/net_http/handler.rb:128:in `session'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/seahorse/client/net_http/handler.rb:76:in `transmit'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/seahorse/client/net_http/handler.rb:50:in `call'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/seahorse/client/plugins/content_length.rb:24:in `call'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/seahorse/client/plugins/request_callback.rb:118:in `call'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/aws-sdk-core/xml/error_handler.rb:10:in `call'", "C:/Logstash/vendor/bundle/jruby/3.
.....................
s/s3.rb:282:in `full_options'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.6-java/lib/logstash/outputs/s3.rb:340:in `bucket_resource'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.6-java/lib/logstash/outputs/s3.rb:229:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:69:in `register'", "C:/Logstash/logstash-core/lib/logstash/java_pipeline.rb:237:in `block in register_plugins'", "org/jruby/RubyArray.java:1989:in `each'", "C:/Logstash/logstash-core/lib/logstash/java_pipeline.rb:236:in `register_plugins'", "C:/Logstash/logstash-core/lib/logstash/java_pipeline.rb:610:in `maybe_setup_out_plugins'", "C:/Logstash/logstash-core/lib/logstash/java_pipeline.rb:249:in `start_workers'", "C:/Logstash/logstash-core/lib/logstash/java_pipeline.rb:194:in `run'", "C:/Logstash/logstash-core/lib/logstash/java_pipeline.rb:146:in `block in start'"], "pipeline.sources"=>["C:/Logstash/config/logstash.conf"], :thread=>"#<Thread:0x2a7dfb1a C:/Logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-05-02T17:18:50,231][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-05-02T17:18:50,237][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2024-05-02T17:18:50,237][TRACE][logstash.agent           ] Converge results {:success=>false, :failed_actions=>["id: main, action_type: LogStash::PipelineAction::Create, message: Could not execute action: PipelineAction::Create<main>, action_result: false"], :successful_actions=>[]}

These are the logs from
PS C:\Logstash> cat .\logs\logstash-plain.log

Those logs are from running as a service.

I can confirm from the logs that all the s3 values from the config are being used; the logs are identical to when running Logstash manually.

I just can't understand the (no implicit conversion of nil into String) part. If there were an S3 communication issue, an IAM permission issue, etc., I would be able to debug and fix it, but this error is very difficult.

From the stack trace:

C:/Logstash/vendor/jruby/lib/ruby/stdlib/net/http.rb

      debug "opening connection to #{conn_addr}:#{conn_port}..."
      s = Timeout.timeout(@open_timeout, Net::OpenTimeout) {
        begin
          TCPSocket.open(conn_addr, conn_port, @local_host, @local_port)
        rescue => e
          raise e, "Failed to open TCP connection to " +
            "#{conn_addr}:#{conn_port} (#{e.message})"
        end
      }

What error do you get if you set region => ""?

[2024-05-02T17:38:48,673][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<Aws::Errors::InvalidRegionError: Invalid `:region` option was provided.

* Not every service is available in every region.

* Never suffix region names with availability zones.
  Use "us-east-1", not "us-east-1a"

Known AWS regions include (not specific to this service):

af-south-1
ap-east-1
ap-northeast-1
ap-northeast-2
ap-northeast-3
ap-south-1
ap-south-2
ap-southeast-1
ap-southeast-2
ap-southeast-3
ap-southeast-4
aws-global
ca-central-1
ca-west-1
eu-central-1
eu-central-2
eu-north-1
eu-south-1
eu-south-2
eu-west-1
eu-west-2
eu-west-3
il-central-1
me-central-1
me-south-1
sa-east-1
us-east-1
us-east-2
us-west-1
us-west-2
aws-cn-global
cn-north-1
cn-northwest-1
aws-us-gov-global
us-gov-east-1
us-gov-west-1
aws-iso-global
us-iso-east-1
us-iso-west-1
aws-iso-b-global
us-isob-east-1
>, :backtrace=>["C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/aws-sdk-core/plugins/regional_endpoint.rb:165:in `validate_region!'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/aws-sdk-core/plugins/regional_endpoint.rb:117:in `resolve_endpoint'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.191.6/lib/aws-sdk-core/plugins/regional_endpoint.rb:64:in `block in RegionalEndpoint'", "C:/Logstash/vendor/bundle/jruby/3.1.0/gems/aws-sdk-core-3.

I've also run with the region us-east-1, expecting to at least get an error about the bucket not being found or something along those lines, but we get the Failed to open TCP connection to : (no implicit conversion of nil into String) error even with a random region.

And above you confirmed you have the closing }, which is missing in the post above with the config...

You are missing the closing brace on the S3 section

input {  
    beats {    
        port => 5044  
    }
}

output {  
    s3 {    
        region => "us-gov-west-1"    
        bucket => "<redacted>"  
        prefix => "/AWSLogs/${AWS_ACCOUNT}/ugw1/ec2logs/${INSTANCEID}/%{+YYYY}/%{+MM}/%{+dd}"
        role_arn => "arn:aws-us-gov:iam::${AWS_ACCOUNT}:role/<redacted>"
        rotation_strategy => "time"   
        time_file => 10
        validate_credentials_on_root_bucket => false
        canned_acl => "bucket-owner-full-control"
    }
}

Did you try hard coding all your substituted values?

Again, I'm not a GovCloud expert, but have you validated that it uses the exact same syntax? Since that seems to be the variable here.

Perhaps check with your AWS expert?

The error seems to indicate that a value it's expecting is not there...

Perhaps the ARN is a different format and missing a section?

I have just tried to post a file using all the creds/configs.

Is Logstash running in GovCloud too?

We made some progress in testing.

Without role_arn, i.e. using the instance profile/role on the host, the pipeline starts and sends logs correctly:

input {
    beats {
        port => 5044
    }
}

output {
    s3 {
        region => "us-gov-west-1"
        bucket => "<redacted>"
        prefix => "/AWSLogs"
        rotation_strategy => "time"
        time_file => 1
        validate_credentials_on_root_bucket => false
        canned_acl => "bucket-owner-full-control"
    }
}

With role_arn, i.e. trying to assume another role, we get the error Failed to open TCP connection to : (no implicit conversion of nil into String):

input {
    beats {
        port => 5044
    }
}

output {
    s3 {
        region => "us-gov-west-1"
        bucket => "<redacted>"
        role_arn => "arn:aws-us-gov:iam::<redacted>:role/<redacted>"
        prefix => "/AWSLogs"
        rotation_strategy => "time"
        time_file => 1
        validate_credentials_on_root_bucket => false
        canned_acl => "bucket-owner-full-control"
    }
}

The partition is the main difference in AWS GovCloud:
aws
vs
aws-us-gov

Okay, progress, that's good.

arn:aws-us-gov:iam::<redacted>:role/<redacted>

Not an expert, but I noticed the ::. Is something supposed to be between the two colons?

It seems to be pointing at something expected in the ARN...
Maybe Logstash isn't parsing it correctly.

Nothing is supposed to be between the two colons; the ARN was copied directly from the console.

Role ARN format: arn:partition:iam::account:role/role-name-with-path

per AWS docs:

arn:aws:iam::account:root
arn:aws:iam::account:user/user-name-with-path
arn:aws:iam::account:group/group-name-with-path
arn:aws:iam::account:role/role-name-with-path
arn:aws:iam::account:policy/policy-name-with-path

*the partition would be aws-us-gov for these examples

Logstash seems to not be parsing the role correctly


We also tried setting the role_arn to the standard partition (aws), hoping to at least get a permission denied or similar, but we got the same error.

Logstash does not parse it, it is passed directly to the AWS API to create the options used to create the bucket.

Any recommendations on whether I should submit an issue on the Logstash Plugin repo or the AWS Ruby SDK repo?

If you submit it to the plugin repo I would expect the ticket to be ignored because the plugin just passes the ARN to the AWS API. If you submit it to the AWS SDK I would expect the ticket to be ignored because using logstash as a minimum reproducible example will not be accepted.

As I linked to before, logstash calls Aws::S3::Bucket.new with a string and hash as the arguments, and that gets an exception. If you build the plugin yourself and figure out what the string and hash contain (i.e. add DEBUG log statements) then you might be able to create a minimum reproducible example that AWS developers will be willing to investigate.
