Output filter, Elasticsearch & cloud instance

Hi,
Hope you are all well.

I've been building a new Logstash pipeline and testing it against a local Elasticsearch instance on my network, which runs fine without any security. The process has been pretty straightforward.

I'm now trying to shift the output from the local ES instance to our production cloud ES service, but I'm struggling; any advice would be appreciated.

The output section of my working local config is as follows:

elasticsearch {
    hosts => [ "10.81.1.248:9200" ]
    index => "proxylogs-%{+YYYY.MM.dd}"
    action => "index"
    manage_template => true
    template => "/usr/share/logstash/template/proxylogs-template.json"
    template_name => "proxylogs"
    template_overwrite => true
}

I've tried various configuration possibilities, such as:

elasticsearch {
    hosts => "https://xxxxxxxxxxxxxx.gcp.elastic-cloud.com:9243"
    ssl => true
    ssl_certificate_verification => false
    user => "xxxxxx"
    password => "xxxxxxx"
    index => "dnslogs-%{+YYYY.MM.dd}"
    action => "index"
    manage_template => true
    template => "/usr/share/logstash/template/dnslogs-template.json"
    template_name => "dnslogs"
    template_overwrite => true
}

The user and password are correct and should have the appropriate permissions.

I'm getting the following error:

[2022-07-22T10:12:54,763][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://user:xxxxxx@xxxxxx.gcp.elastic-cloud.com:9243/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://xxxxxxxxxx.gcp.elastic-cloud.com:9243/'"}

I'm using Logstash v7.17 and ES Cloud v7.16.1.

Thanks for the help

Hi @Mark_Rodman

A couple of things. First, where exactly did you get the endpoint from? You should get it from the Elastic Cloud console.

Second, there should be an .es or .kb component in the URL:

https://xxxxxxxxx.es.us-west1.gcp.cloud.es.io

Third, you don't need the :9243 anymore.

Finally, you should be able to curl the ES endpoint from the Logstash machine:

curl -u "username:password" https://xxxxxxxxx.es.us-west1.gcp.cloud.es.io

{
  "name" : "instance-0000000095",
  "cluster_name" : "asdfasdfasdfasdf",
  "cluster_uuid" : "safdasdfsadfasdf",
  "version" : {
    "number" : "8.2.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "9905bfb62a3f0b044948376b4f607f70a8a151b4",
    "build_date" : "2022-06-08T22:21:36.455508792Z",
    "build_snapshot" : false,
    "lucene_version" : "9.1.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

You can also use the cloud_id / cloud_auth settings:

output {
  elasticsearch {
    cloud_id => "<cloud id>"
    cloud_auth => "<cloud auth>"
  }
}
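
A fuller sketch combining those with your existing index/template settings (the Cloud ID string is copied from the deployment overview in the Elastic Cloud console; all values here are placeholders):

output {
  elasticsearch {
    cloud_id => "<deployment name>:<cloud id string from console>"
    cloud_auth => "logstash:<password>"
    index => "dnslogs-%{+YYYY.MM.dd}"
    manage_template => true
    template => "/usr/share/logstash/template/dnslogs-template.json"
    template_name => "dnslogs"
    template_overwrite => true
  }
}

cloud_id replaces hosts, and cloud_auth replaces user and password, so there is no endpoint or port to get wrong.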

Thanks.
Can you confirm whether the host value (the xxxxxxx part) should be the cluster ID or the human-readable name?
Cheers

I don't know what that means... my screenshot showed you exactly where to get it from.
If you mean the custom alias endpoint, then yes.

Apologies, you are correct.
I've successfully used curl; now to try the Logstash output.

{
  "name" : "instance-0000000012",
  "cluster_name" : "xxxxxxx",
  "cluster_uuid" : "2xxxxx",
  "version" : {
    "number" : "7.16.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "xxxxxx",
    "build_date" : "2021-12-11T00:29:38.865893768Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Unfortunately the config still errors, as follows:

[2022-07-22T20:51:24,715][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash:xxxxxx@XXXXXXXX.es.europe-west2.gcp.elastic-cloud.com:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://XXXXXX.es.europe-west2.gcp.elastic-cloud.com:9200/][Manticore::ConnectTimeout] Connect to XXXXXXXX.es.europe-west2.gcp.elastic-cloud.com:9200 [XXXXXX.es.europe-west2.gcp.elastic-cloud.com/34.89.12.12] failed: connect timed out"}

I'm using:

elasticsearch {
    hosts => [ "https://XXXXX.es.europe-west2.gcp.elastic-cloud.com" ]
    user => "logstash"
    password => "***"
    index => "dnslogs-%{+YYYY.MM.dd}"
    action => "index"
    manage_template => true
    template => "/usr/share/logstash/template/dnslogs-template.json"
    template_name => "dnslogs"
    template_overwrite => true
}

I notice the port has gone back to 9200. Is that correct for the cloud? Surely it should be 9243?

Thanks

I added :9243 to the address in the hosts array; it didn't help, and came back with a 401.

I then tried appending :9243 and then :443 to the address when testing with curl. Both worked with curl, yet both are refused within Logstash, producing:

[2022-07-22T21:06:37,292][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash:xxxxxx@xxxxxxxx.es.europe-west2.gcp.elastic-cloud.com:9243/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://xxxxxxxxxx.es.europe-west2.gcp.elastic-cloud.com:9243/'"}

@stephenb any ideas? I'm totally stuck.

And you tried the curl from the same server that Logstash is running on, not some other server?

The service runs on 9243 and 443, not 9200, so that will never work.

Did you try it with no port and no trailing slash?

https://logstash:xxxxxx@xxxxxxxx.es.europe-west2.gcp.elastic-cloud.com

Also, how are you starting Logstash: from the command line, or via systemctl?

Logstash is running on a Linux box on my home network, connecting to the cloud.
I ran:

root@mark-linux:/etc/logstash/conf.d# curl -u "logstash:mypassword" https://xxxxxxx.es.europe-west2.gcp.elastic-cloud.com
{
  "name" : "instance-0000000001",
  "cluster_name" : "xxxxxxxx",
  "cluster_uuid" : "xxxxxxxxxxx",
  "version" : {
    "number" : "7.16.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "xxxxxxxxxxxx",
    "build_date" : "2021-12-11T00:29:38.865893768Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

I start Logstash with:
service logstash start

I used the same method when testing against my local ES instance.

Logstash as a service runs under a different account (the logstash user, I believe)... perhaps you have a firewall rule in place that does not let the logstash user make external connections.

Try starting Logstash from the command line as the same user you ran curl with.

There is nothing special about connecting to cloud except it is remote...

Ohh shoot, try putting the :443 in... yes, Logstash will default to port 9200 when you leave the port off; you can see that in the message above. Perhaps you have a firewall blocking :9243.
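
i.e. something like this in the output (the hostname is a placeholder; without an explicit port, the Logstash elasticsearch output falls back to 9200 even for an https URL):

elasticsearch {
    hosts => [ "https://xxxxxxxx.es.europe-west2.gcp.elastic-cloud.com:443" ]
    user => "logstash"
    password => "<password>"
}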

@stephenb I got the same response.
I ran it as follows, which logged the correct config loading to the console, but ultimately produced the same 401 error.

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/

To be honest there isn't any egress control on the network; it's my home working environment.

The cloud environment has SSO enabled, could that be a factor?

Perhaps, but you successfully curled, so no, I would not think so.

A 401 error is bad auth, and when I cannot see the actual configuration (minus the password) and the actual output, it is hard for me to help... it is probably something simple.

Can you show the Elasticsearch output section and the actual logs from the above command? Otherwise I am just guessing... to help debug, it is good to see both for every iteration.

Do you have a typo in the password?

I also nmap'd it:

root@mark-linux:/etc/logstash# nmap xxxxxxxx.es.europe-west2.gcp.elastic-cloud.com -p9243
Starting Nmap 7.80 ( https://nmap.org ) at 2022-07-22 21:49 BST
Nmap scan report for xxxxxxxxx.es.europe-west2.gcp.elastic-cloud.com (34.89.12.12)
Host is up (0.044s latency).
rDNS record for xx.xx.xx.xx: xx.xx.xx.xxx.bc.googleusercontent.com

PORT     STATE SERVICE
9243/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.45 seconds

So it's not connectivity...
:pleading_face:

My output is as follows.
Only the first if block targets the ES cloud; the others remain local.

output {
     #stdout { codec => rubydebug }
     if [type] == "umbrella_dns" {
        if [event][kind] != "metric" {
            elasticsearch {
                hosts => [ "https://XXXXXXXXXX.es.europe-west2.gcp.elastic-cloud.com:9243" ]
                user => "logstash"
                password => "***********************"
                index => "dnslogs-%{+YYYY.MM.dd}"
                action => "index"
                manage_template => true
                template => "/usr/share/logstash/template/dnslogs-template.json"
                template_name => "dnslogs"
                template_overwrite => true
            }
        } else {
            elasticsearch {
                hosts => [ "10.81.1.248:9200" ]
                #user => "${ELASTICSEARCH_USER}"
                #password => "${ELASTICSEARCH_PASSWORD}"
                index => "metrics-dnslogs-%{+YYYY.MM.dd}"
                action => "index"
                manage_template => true
                template => "/usr/share/logstash/template/dnslogs-metrics-template.json"
                template_name => "dnslogs-metrics"
                template_overwrite => true
            }
        }
   }
     if [type] == "umbrella_proxy" {
        if [event][kind] != "metric" {
            elasticsearch {
                hosts => [ "10.81.1.248:9200" ]
                #user => "${ELASTICSEARCH_USER}"
                #password => "${ELASTICSEARCH_PASSWORD}"
                index => "proxylogs-%{+YYYY.MM.dd}"
                action => "index"
                manage_template => true
                template => "/usr/share/logstash/template/proxylogs-template.json"
                template_name => "proxylogs"
                template_overwrite => true
            }
        } else {
            elasticsearch {
                hosts => [ "10.81.1.248:9200" ]
                #user => "${ELASTICSEARCH_USER}"
                #password => "${ELASTICSEARCH_PASSWORD}"
                index => "metrics-proxylogs-%{+YYYY.MM.dd}"
                action => "index"
                manage_template => true
                template => "/usr/share/logstash/template/proxylogs-metrics-template.json"
                template_name => "proxylogs-metrics"
                template_overwrite => true
            }
        }
   }
     if [type] == "umbrella_ip" {
        if [event][kind] != "metric" {
            elasticsearch {
                hosts => [ "10.81.1.248:9200" ]
                #user => "${ELASTICSEARCH_USER}"
                #password => "${ELASTICSEARCH_PASSWORD}"
                index => "iplogs-%{+YYYY.MM.dd}"
                action => "index"
                manage_template => true
                template => "/usr/share/logstash/template/iplogs-template.json"
                template_name => "iplogs"
                template_overwrite => true
            }
        } else {
            elasticsearch {
                hosts => [ "10.81.1.248:9200" ]
                #user => "${ELASTICSEARCH_USER}"
                #password => "${ELASTICSEARCH_PASSWORD}"
                index => "metrics-iplogs-%{+YYYY.MM.dd}"
                action => "index"
                manage_template => true
                template => "/usr/share/logstash/template/iplogs-metrics-template.json"
                template_name => "iplogs-metrics"
                template_overwrite => true
            }
        }
   }


}

And the whole log from the command-line run?

[2022-07-22T22:03:19,441][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2022-07-22T22:03:19,446][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.5", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.15+10 on 11.0.15+10 +indy +jit [linux-x86_64]"}
[2022-07-22T22:03:19,447][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[2022-07-22T22:03:20,150][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-07-22T22:03:22,198][INFO ][org.reflections.Reflections] Reflections took 39 ms to scan 1 urls, producing 119 keys and 419 values
[2022-07-22T22:03:28,381][INFO ][logstash.filters.ruby.script] Test run complete {:script_path=>"/usr/share/logstash/scripts/identities.rb", :results=>{:passed=>0, :failed=>0, :errored=>0}}
[2022-07-22T22:03:28,387][INFO ][logstash.filters.ruby.script] Test run complete {:script_path=>"/usr/share/logstash/scripts/custom_timestamp.rb", :results=>{:passed=>0, :failed=>0, :errored=>0}}
[2022-07-22T22:03:28,455][INFO ][logstash.filters.ruby.script] Test run complete {:script_path=>"/usr/share/logstash/scripts/custom_timestamp.rb", :results=>{:passed=>0, :failed=>0, :errored=>0}}
[2022-07-22T22:03:28,517][INFO ][logstash.filters.ruby.script] Test run complete {:script_path=>"/usr/share/logstash/scripts/custom_timestamp.rb", :results=>{:passed=>0, :failed=>0, :errored=>0}}
[2022-07-22T22:03:28,650][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.81.1.248:9200"]}
[2022-07-22T22:03:28,834][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.81.1.248:9200/]}}
[2022-07-22T22:03:28,934][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://10.81.1.248:9200/"}
[2022-07-22T22:03:28,941][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.16.2) {:es_version=>7}
[2022-07-22T22:03:28,942][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-07-22T22:03:28,977][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:28,978][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.81.1.248:9200"]}
[2022-07-22T22:03:28,983][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.81.1.248:9200/]}}
[2022-07-22T22:03:28,990][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://10.81.1.248:9200/"}
[2022-07-22T22:03:28,994][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.16.2) {:es_version=>7}
[2022-07-22T22:03:28,994][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-07-22T22:03:28,998][INFO ][logstash.outputs.elasticsearch][main] Using mapping template from {:path=>"/usr/share/logstash/template/iplogs-metrics-template.json"}
[2022-07-22T22:03:29,010][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"iplogs-metrics"}
[2022-07-22T22:03:29,014][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,014][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,015][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.81.1.248:9200"]}
[2022-07-22T22:03:29,021][INFO ][logstash.outputs.elasticsearch][main] Using mapping template from {:path=>"/usr/share/logstash/template/proxylogs-template.json"}
[2022-07-22T22:03:29,023][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.81.1.248:9200/]}}
[2022-07-22T22:03:29,032][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"proxylogs"}
[2022-07-22T22:03:29,032][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://10.81.1.248:9200/"}
[2022-07-22T22:03:29,036][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.16.2) {:es_version=>7}
[2022-07-22T22:03:29,036][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-07-22T22:03:29,053][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,053][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,053][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.81.1.248:9200"]}
[2022-07-22T22:03:29,057][INFO ][logstash.outputs.elasticsearch][main] Using mapping template from {:path=>"/usr/share/logstash/template/proxylogs-metrics-template.json"}
[2022-07-22T22:03:29,058][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.81.1.248:9200/]}}
[2022-07-22T22:03:29,061][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"proxylogs-metrics"}
[2022-07-22T22:03:29,061][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://10.81.1.248:9200/"}
[2022-07-22T22:03:29,063][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.16.2) {:es_version=>7}
[2022-07-22T22:03:29,063][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-07-22T22:03:29,080][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,080][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,080][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://*************.es.europe-west2.gcp.elastic-cloud.com:9243"]}
[2022-07-22T22:03:29,085][INFO ][logstash.outputs.elasticsearch][main] Using mapping template from {:path=>"/usr/share/logstash/template/iplogs-template.json"}
[2022-07-22T22:03:29,089][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"iplogs"}
[2022-07-22T22:03:29,090][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash:xxxxxx@*************.es.europe-west2.gcp.elastic-cloud.com:9243/]}}
[2022-07-22T22:03:29,568][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash:xxxxxx@*************.es.europe-west2.gcp.elastic-cloud.com:9243/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://*************.es.europe-west2.gcp.elastic-cloud.com:9243/'"}
[2022-07-22T22:03:29,569][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,570][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.81.1.248:9200"]}
[2022-07-22T22:03:29,574][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.81.1.248:9200/]}}
[2022-07-22T22:03:29,578][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://10.81.1.248:9200/"}
[2022-07-22T22:03:29,580][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.16.2) {:es_version=>7}
[2022-07-22T22:03:29,580][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-07-22T22:03:29,599][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,599][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-22T22:03:29,603][INFO ][logstash.outputs.elasticsearch][main] Using mapping template from {:path=>"/usr/share/logstash/template/dnslogs-metrics-template.json"}
[2022-07-22T22:03:29,606][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"dnslogs-metrics"}
[2022-07-22T22:03:29,670][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,670][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-ASN.mmdb"}
[2022-07-22T22:03:29,686][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,686][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-ASN.mmdb"}
[2022-07-22T22:03:29,687][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,687][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-ASN.mmdb"}
[2022-07-22T22:03:29,688][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,688][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-ASN.mmdb"}
[2022-07-22T22:03:29,693][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,693][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-City.mmdb"}
[2022-07-22T22:03:29,694][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,694][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-City.mmdb"}
[2022-07-22T22:03:29,695][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,695][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-City.mmdb"}
[2022-07-22T22:03:29,695][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,695][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-ASN.mmdb"}
[2022-07-22T22:03:29,832][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,832][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-City.mmdb"}
[2022-07-22T22:03:29,834][INFO ][logstash.filters.geoip.databasemanager][main] GeoIP database path is configured manually so the plugin will not check for update. Keep in mind that if you are not using the database shipped with this plugin, please go to https://www.maxmind.com/en/geolite2/eula and understand the terms and conditions.
[2022-07-22T22:03:29,834][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logstash/maxmind/GeoLite2-City.mmdb"}
[2022-07-22T22:03:29,881][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/etc/logstash/conf.d/01_input_dnslogs.conf", "/etc/logstash/conf.d/02_input_proxylogs.conf", "/etc/logstash/conf.d/03_input_iplogs.conf", "/etc/logstash/conf.d/51_filter_dnslogs.conf", "/etc/logstash/conf.d/52_filter_proxylogs.conf", "/etc/logstash/conf.d/53_filter_iplogs.conf", "/etc/logstash/conf.d/99_output.conf"], :thread=>"#<Thread:0x4d9c9e45 run>"}
[2022-07-22T22:03:31,823][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.94}
[2022-07-22T22:03:31,838][INFO ][logstash.inputs.s3       ][main] Registering {:bucket=>"cisco-managed-*******", :region=>"******"}
[2022-07-22T22:03:31,952][INFO ][logstash.inputs.s3       ][main] Registering {:bucket=>"cisco-managed-*******", :region=>"******"}
[2022-07-22T22:03:31,961][INFO ][logstash.inputs.s3       ][main] Registering {:bucket=>"cisco-managed-********", :region=>"******"}
[2022-07-22T22:03:31,970][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-07-22T22:03:31,989][INFO ][logstash.inputs.s3       ][main][468c429eca8dd78eebdededa5ace043f0453472ea3fba4ebbe5f12a9c79157e1] Using default generated file for the sincedb {:filename=>"/var/lib/logstash/plugins/inputs/s3/sincedb_1be05263bc0fb504884d9860942b5f01"}
[2022-07-22T22:03:31,991][INFO ][logstash.inputs.s3       ][main][febfd5b152196b1756ad2b0a7db18570d3a1ef1ee0ba82b4ce87dd4ce6aa0f40] Using default generated file for the sincedb {:filename=>"/var/lib/logstash/plugins/inputs/s3/sincedb_cc72d6a45356caa18eac5393d6c9f68c"}
[2022-07-22T22:03:31,991][INFO ][logstash.inputs.s3       ][main][bd2a85a59fffcade0bd0fb5d4f09c8f575ade2a8b338f6ebe85ff380e3992dac] Using default generated file for the sincedb {:filename=>"/var/lib/logstash/plugins/inputs/s3/sincedb_91dafa8ad9874faca39520059022a500"}
[2022-07-22T22:03:32,019][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-07-22T22:03:33,013][INFO ][logstash.inputs.s3       ][main][bd2a85a59fffcade0bd0fb5d4f09c8f575ade2a8b338f6ebe85ff380e3992dac] No files found in bucket {:prefix=>"6159501_f5ac0eb9cfe7d0cb2b04bdf98b066b670a38b763/iplogs/"}
[2022-07-22T22:03:34,708][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash:xxxxxx@*************.es.europe-west2.gcp.elastic-cloud.com:9243/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://*************.es.europe-west2.gcp.elastic-cloud.com:9243/'"}
[2022-07-22T22:03:39,859][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash:xxxxxx@*************.es.europe-west2.gcp.elastic-cloud.com:9243/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://*************.es.europe-west2.gcp.elastic-cloud.com:9243/'"}
[2022-07-22T22:03:42,194][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2022-07-22T22:03:44,995][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash:xxxxxx@*************.es.europe-west2.gcp.elastic-cloud.com:9243/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://*************.es.europe-west2.gcp.elastic-cloud.com:9243/'"}
[2022-07-22T22:03:47,324][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2022-07-22T22:03:47,325][INFO ][org.logstash.execution.ShutdownWatcherExt] The queue is draining before shutdown.
[2022-07-22T22:03:50,143][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash:xxxxxx@*************.es.europe-west2.gcp.elastic-cloud.com:9243/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://*************.es.europe-west2.gcp.elastic-cloud.com:9243/'"}

Thanks. OK, here is what I've got...

401 is bad auth (as I am sure you know), not connectivity (as you demonstrated): something is wrong with your username / password / role combination. It is connecting, and auth is denied.

SSO... hmm... that should not get in the way, since you can curl, but I am not positive... I do not think it is that. I assume your logstash creds are in the "native" realm, so requests cascade to the native realm; that is how the curl is working.

I think I have an idea: your logstash user may not have the correct roles / privileges for Logstash to authenticate and properly connect, yet it has enough privileges to do the simple curl command.

Per the docs here, these are the roles the logstash user will need:

POST _security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm"], 
  "indices": [
    {
      "names": [ "logstash-*" ], <--- Add your other indices here 
      "privileges": ["write","create","create_index","manage","manage_ilm"]  
    }
  ]
}
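
And the role only takes effect once it is assigned to the user. It can be attached like so (run in Kibana Dev Tools; the password is a placeholder, and note that POSTing to an existing user with a password field will reset that password):

POST _security/user/logstash
{
  "password" : "<password>",
  "roles" : [ "logstash_writer" ]
}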

Do you have another user to try? And / or you will need to check the roles / privileges on that logstash user.

I suspect you are using the elastic superuser on your local host so that is why they work.

I would take a look at the roles / permissions for that logstash user.
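
A quick way to see what is currently assigned is the get-users API (run in Kibana Dev Tools; logstash here is your username):

GET _security/user/logstash

The response includes the roles array for that user, which you can compare against the role definition above.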

Thanks.
I did go a bit crazy adding every permission under the sun trying to fix it, but I've reduced the perms to match your recommendation.
Output as follows:

GET _security/role/logstash_writer
{
  "logstash_writer" : {
    "cluster" : [
      "manage_index_templates",
      "manage_ingest_pipelines",
      "manage_ilm",
      "monitor"
    ],
    "indices" : [
      {
        "names" : [
          "*",
          "logs-*",
          "dnslogs-*",
          "proxylogs-*",
          "iplogs-*",
          "logstash-*"
        ],
        "privileges" : [
          "create_doc",
          "create_index",
          "create",
          "index",
          "write",
          "delete",
          "manage",
          "manage_ilm"
        ],
        "field_security" : {
          "grant" : [
            "*"
          ],
          "except" : [ ]
        },
        "allow_restricted_indices" : false
      }
    ],
    "applications" : [
      {
        "application" : "kibana-.kibana",
        "privileges" : [
          "all"
        ],
        "resources" : [
          "*"
        ]
      }
    ],
    "run_as" : [ ],
    "metadata" : { },
    "transient_metadata" : {
      "enabled" : true
    }
  }
}

So I should be good....

Does it work?

You can always test with the elastic user first, then work on your writer role.
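
e.g. (placeholder endpoint and password):

curl -u "elastic:<password>" https://xxxxxxxx.es.europe-west2.gcp.elastic-cloud.com

If you don't have the elastic password to hand, it should be possible to reset it from the deployment's Security page in the Elastic Cloud console.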

Sadly not, still the same.
I don't have the elastic account password either. :unamused: