S3-SNS-SQS plugin error: queue not valid for endpoint

I cannot establish a connection to Amazon SQS; the address https://sqs.us-gov-east-1.amazonaws is not valid for this endpoint.
Are there any options or suggestions to troubleshoot this further?

Hi @afoster Welcome to the community.

I suspect it's not really the issue, but the address in the post above is missing the .com at the end.

sqs.us-gov-east-1.amazonaws.com
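
The full queue URL typically looks something like this (the account number and queue name below are just placeholders):

https://sqs.us-gov-east-1.amazonaws.com/123456789012/your-queue-name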

Typically, to help, we would ask you to post your config, minus any confidential information.

And more of the actual logs...

If you do, please format the code and logs with the </> button.

I also noticed you are using Logstash, which is fine. Just FYI, there is now also a Filebeat S3 / SQS module / connector here.

Thank you, Stephen.
The queue URL in the config does have .com; omitting it in the post was a typo. I will look at the connector you posted, as I suspect I am hitting this even though it's a Python issue and this is Ruby.

Here is the /etc/logstash/conf.d config file:

input {
  s3snssqs {
    region                     => "us-gov-east-1"
    s3_default_options         => { "endpoint_discovery" => true }
    queue                      => "name-cloudtrail-logstash"
    queue_owner_aws_account_id => ""
    endpoint                   => "https://sqs.us-gov-east-1.amazonaws.com/"
    type                       => "sqs-logs"
   # tags                       => ["pa-alb-nonlive"]
    sqs_skip_delete            => true
    codec                      => json
    s3_options_by_bucket       => [
        { bucket_name => ""
          credentials => { role => "" }
        }
    ]
  }
}
output {
  microsoft-logstash-output-azure-loganalytics {
    codec => "json_lines"
    workspace_id => ""
    workspace_key => ""
    custom_log_table_name => "AWSCloudTrail"
    plugin_flush_interval => 5
    endpoint => "ods.opinsights.azure.us"
  }
  # for debug
  # stdout { codec => rubydebug }
}

Here is the log snippet from /var/log/logstash/logstash-plain.log:

[2021-08-27T17:16:12,622][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<LogStash::ConfigurationError: Verify the SQS queue name and your credentials>, :backtrace=>["/home/ubuntu/logstash-input-s3-sns-sqs/lib/logstash/inputs/sqs/poller.rb:66:in `initialize'", "/home/ubuntu/logstash-input-s3-sns-sqs/lib/logstash/inputs/s3snssqs.rb:252:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in `block in register_plugins'", "org/jruby/RubyArray.java:1820:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:386:in `start_inputs'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:311:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137:in `block in start'"], "pipeline.sources"=>["/etc/logstash/conf.d/acp4gov-sqs.conf"], :thread=>"#<Thread:0x536cd696 run>"}
[2021-08-27T17:16:12,625][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2021-08-27T17:16:12,642][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2021-08-27T17:16:12,743][INFO ][logstash.runner          ] Logstash shut down.
[2021-08-27T17:16:12,754][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.19.0.jar:?]
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.19.0.jar:?]
        at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:89) ~[?:?]
[2021-08-27T17:16:36,444][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2021-08-27T17:16:36,468][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.14.0", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [linux-x86_64]"}
[2021-08-27T17:16:39,726][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2021-08-27T17:16:40,771][INFO ][org.reflections.Reflections] Reflections took 114 ms to scan 1 urls, producing 120 keys and 417 values

Interesting... and yes, that is a 3rd-party connector, so I do not have much insight; perhaps someone else will.

If you end up looking at the Filebeat module (which I know works because we have users using it at scale) and want to use Logstash to send to your destination, your architecture would be

SQS/S3 -> Filebeat -> Logstash Beats Input / Logstash MS Output -> Destination

Thank you so very much for your assistance and insight. I will be sending output to Azure so I appreciate your help. Have a wonderful weekend, Stephen.

A quick question: this uses Filebeat for input and the Logstash MS plugin for output to Azure, correct? Filebeat doesn't have an MS output module from what I can research.

No, Filebeat does not have that output.

So your architecture would be a bit more complex; you can run Filebeat and Logstash on the same box.

SQS/S3 -> Filebeat AWS Module s3access input / Logstash Output -> Logstash Beats Input / Logstash MS Output -> Destination

So you would set up the Filebeat output to Logstash in the filebeat.yml output section (don't forget to comment out the output.elasticsearch section).

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Then the logstash.conf: input from Beats, output to MS.

input {
  beats {
    port => 5044
  }
}
output {
  microsoft-logstash-output-azure-loganalytics {
    codec => "json_lines"
    workspace_id => ""
    workspace_key => ""
    custom_log_table_name => "AWSCloudTrail"
    plugin_flush_interval => 5
    endpoint => "ods.opinsights.azure.us"
  }
  # for debug
  # stdout { codec => rubydebug }
}

Ah, understood. Thank you so very much, Stephen. Much appreciated.


Using Filebeat as the input, I face the same issue with the SQS queue not being valid for the endpoint. This is because it's a .gov region. Is this resolved? It's not logging, so this snippet from the service status alerted me to the trouble.

 filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
     Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-09-08 17:21:46 UTC; 9min ago
       Docs: https://www.elastic.co/beats/filebeat
   Main PID: 20974 (filebeat)
      Tasks: 7 (limit: 9412)
     Memory: 35.8M
     CGroup: /system.slice/filebeat.service
             └─20974 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/fil>

Sep 08 17:21:46 ip- systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..
Sep 08 17:21:47 ip- filebeat[20974]: 2021-09-08T17:21:47.086Z        ERROR        [input.aws-s3]        awss3/input.go:95        getRegionFromQueueU>
Sep 08 17:21:47 ip- filebeat[20974]: 2021-09-08T17:21:47.086Z        ERROR        [input.aws-s3]        compat/compat.go:122        Input 'aws-s3' f>

I have output to Azure Sentinel, but it's only for the local Ubuntu syslogs, not the SQS queue.
Any thoughts or ideas to resolve this would be appreciated.

@afoster I think we are going to need to see more of the logs and the config.

journalctl -u filebeat.service

@andrewkroh

Any insight on using the SQS input on the AWS .gov service?

Please share the filebeat.yml config for the input, the logs (something like journalctl -u filebeat.service --no-pager -n 1000), and the Filebeat version.

You might need to set endpoint: aws.foo.gov (or whatever domain you're connecting to) in the input's config.
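
For example, a minimal sketch of the input stanza with an explicit endpoint might look like this (the queue URL, role ARN, and endpoint values are placeholder assumptions, not taken from your config):

- type: aws-s3
  # Placeholder queue URL and role ARN for illustration only
  queue_url: https://sqs.us-gov-east-1.amazonaws.com/123456789012/name-cloudtrail-logstash
  role_arn: arn:aws-us-gov:iam::123456789012:role/example-role
  # Domain the input should expect in the queue URL; assuming GovCloud SQS
  # still lives under amazonaws.com, this may not even be needed
  endpoint: amazonaws.com
  expand_event_list_from_field: Records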

Okay.
journalctl -u filebeat.service
-- Logs begin at Thu 2021-09-09 10:22:01 UTC, end at Thu 2021-09-09 16:34:01 UTC. --
-- No entries --
root@ip-:/etc/filebeat# journalctl -u filebeat.service --no-pager -n 1000
-- Logs begin at Thu 2021-09-09 10:22:01 UTC, end at Thu 2021-09-09 16:36:16 UTC. --
-- No entries --
root@ip-:/etc/filebeat#

cat filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: aws-s3
  queue_url: https://sqs.us-gov-east-redacted.
  role_arn: arn:aws-us-gov:iam::redacted
  expand_event_list_from_field: Records
  bucket_list_interval: 300s
  file_selectors:
    - regex: '/CloudTrail/'
    - regex: '/CloudTrail-Digest/'
    - regex: '/CloudTrail-Insight/'



  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

# filestream is an input for collecting log messages from files. It is going to replace log input in the future.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
     - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.
  output.logstash:
    hosts: ["localhost:5044"]
  output.elasticsearch.password: "$[ES_PWD}"
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
#The Logstash hosts
   hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


To get the logs, try running Filebeat directly with the following command.

/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat

Thank you, Stephen. This is quite a lot of output that I will need to redact. Is there a flag to add to lessen the output to just what you need?

Really just looking for the log lines dealing with the SQS connection... and the associated errors.
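
One rough way to trim the foreground run down to just those lines (the earlier command piped through grep; adjust the pattern to taste):

/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat 2>&1 | grep -iE 'aws-s3|sqs|error'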

Ohhh! You can also do a test config:

:/var/log/filebeat# filebeat test config
Config OK
root@ip-10-216-34-43:/var/log/filebeat#

That's good. So we need some of the logs from where Filebeat is trying to connect to the SQS input; there should probably be several lines...

Here are a couple more thoughts...

Can you take these lines out? They are leftovers...

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

You also have the filestream input enabled; is that on purpose?

Also, you put the aws-s3 input in the filebeat.yml, which is fine, but is the AWS module enabled as well? If so, you might have conflicting settings.
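
If you want to double-check which modules are enabled, the standard Filebeat CLI can list them:

filebeat modules list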

Thank you, Stephen.
I commented out the paths, changed filestream to false, and I do not have the AWS module enabled.

So, can you provide a few log lines? That is the only way we will be able to help... on startup there should be some errors when Filebeat tries to connect.

2021-09-09T21:26:27.058Z        DEBUG   [test]  registrar/migrate.go:304        isFile() -> false
2021-09-09T21:26:27.058Z        DEBUG   [test]  registrar/migrate.go:297        isDir(/usr/share/filebeat/data/registry/filebeat) -> true
2021-09-09T21:26:27.058Z        DEBUG   [test]  registrar/migrate.go:304        isFile(/usr/share/filebeat/data/registry/filebeat/meta.json) -> true
2021-09-09T21:26:27.058Z        DEBUG   [registrar]     registrar/migrate.go:84 Registry type '1' found
2021-09-09T21:26:27.058Z        INFO    memlog/store.go:119     Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=0
2021-09-09T21:26:27.060Z        INFO    memlog/store.go:124     Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=155
2021-09-09T21:26:27.060Z        WARN    beater/filebeat.go:381  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform request:append
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform request:delete
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform request:set
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform response:append
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform response:delete
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform response:set
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform pagination:append
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform pagination:delete
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/transform_registry.go:75     Register transform pagination:set
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/encoding.go:80       <nil>
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/encoding.go:81       <nil>
2021-09-09T21:26:27.060Z        DEBUG   [httpjson.transforms]   v2/encoding.go:86       <nil>
2021-09-09T21:26:27.061Z        DEBUG   [httpjson.transforms]   v2/encoding.go:87       <nil>
2021-09-09T21:26:27.061Z        INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 0
2021-09-09T21:26:27.061Z        INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 2
2021-09-09T21:26:27.061Z        DEBUG   [registrar]     registrar/registrar.go:140      Starting Registrar
2021-09-09T21:26:27.061Z        INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 13907458255252998054)
2021-09-09T21:26:27.061Z        DEBUG   [cfgfile]       cfgfile/reload.go:132   Checking module configs from: /etc/filebeat/modules.d/*.yml
2021-09-09T21:26:27.061Z        INFO    [input.aws-s3]  compat/compat.go:111    Input aws-s3 starting   {"id": "C101379928EDC7A6"}
2021-09-09T21:26:27.061Z        DEBUG   [cfgfile]       cfgfile/reload.go:146   Number of module configs found: 0
2021-09-09T21:26:27.061Z        INFO    [crawler]       beater/crawler.go:108   Loading and starting Inputs completed. Enabled inputs: 1
2021-09-09T21:26:27.061Z        ERROR   [input.aws-s3]  awss3/input.go:95       getRegionFromQueueURL failed: QueueURL is not in format: https://sqs.{REGION_ENDPOINT}.{ENDPOINT}/{ACCOUNT_NUMBER}/{QUEUE_NAME}      {"id": "C101379928EDC7A6", "queue_url": "https://sqs.us-gov-east-1.amazon.com/redacted"}
2021-09-09T21:26:27.061Z        ERROR   [input.aws-s3]  compat/compat.go:122    Input 'aws-s3' failed with: getRegionFromQueueURL failed: QueueURL is not in format: https://sqs.{REGION_ENDPOINT}.{ENDPOINT}/{ACCOUNT_NUMBER}/{QUEUE_NAME}  {"id": "C101379928EDC7A6"}
2021-09-09T21:26:27.061Z        INFO    cfgfile/reload.go:164   Config reloader started
2021-09-09T21:26:27.062Z        DEBUG   [cfgfile]       cfgfile/reload.go:194   Scan for new config files
2021-09-09T21:26:27.062Z        DEBUG   [cfgfile]       cfgfile/reload.go:213   Number of module configs found: 0
2021-09-09T21:26:27.062Z        DEBUG   [reload]        cfgfile/list.go:63      Starting reload procedure, current runners: 0
2021-09-09T21:26:27.062Z        DEBUG   [reload]        cfgfile/list.go:81      Start list: 0, Stop list: 0
2021-09-09T21:26:27.062Z        INFO    cfgfile/reload.go:224   Loading of config files completed.