Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable

Hi, I am not able to send data to my AWS ES domain. I am getting the below error:

[logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://search-mb-production-app-*********.us-west-2.es.amazonaws.com:80/][Manticore::SocketTimeout] Read timed out {:url=>http://search-mb-production-app-******us-west-2.es.amazonaws.com:80/, :error_message=>"Elasticsearch Unreachable: [http://search-mb-production-app-******.us-west-2.es.amazonaws.com:80/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
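
Manticore::SocketTimeout indicates the request went out but no response came back within the client's read timeout, rather than the connection being refused outright. If the domain is reachable but slow, one thing to try is raising the elasticsearch output's timeout option (it defaults to 60 seconds); a sketch, with the hostname redacted as in the error:

output {
  elasticsearch {
    hosts => "http://search-mb-production-app-*******.us-west-2.es.amazonaws.com:80"
    index => "alb-accesslog-%{+YYYY.MM.dd}"
    timeout => 120   # seconds to wait on network operations
  }
}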


I believe you need to use the amazon_es output plugin to send data to AWS ES.

Yes, I have used the AWS ES output plugin. Below is the output configuration used:

output {
  file {
    path => "/var/log/logstash/alb-accesslogs.log"
  }

  elasticsearch {
    hosts => "http://search-mb-production-app-*******.us-west-2.es.amazonaws.com:80"
    index => "alb-accesslog-%{+YYYY.MM.dd}"
  }
}

That is the wrong plugin. You need to install and use the amazon_es plugin supplied by AWS.
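
It is a separate install, and as far as I can tell from its README it is not configured the same way as the elasticsearch output. A rough sketch (the endpoint is a placeholder, and the credential options can be left out if the instance has an IAM role attached):

bin/logstash-plugin install logstash-output-amazon_es

output {
  amazon_es {
    # Bare domain endpoint: no scheme, no port (defaults to HTTPS on 443)
    hosts => ["search-example-domain.us-west-2.es.amazonaws.com"]
    region => "us-west-2"
    # aws_access_key_id => 'ACCESS_KEY'
    # aws_secret_access_key => 'SECRET_KEY'
    index => "alb-accesslog-%{+YYYY.MM.dd}"
  }
}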

The same output plugin is used on another Logstash server and it's working fine. Even this one was working fine until this morning; for the past hour it has not been working and keeps throwing that error.

I am not very familiar with AWS ES, so I will probably not be able to help much in that case. You may want to take it up with their support.

I changed the output plugin:
output {
  file {
    path => "/var/log/logstash/alb-accesslogs.log"
  }

  amazon_es {
    hosts => "http://search-mb-production-app-*********.us-west-2.es.amazonaws.com:80"
    region => "us-west-2"
    index => "alb-accesslog-%{+YYYY.MM.dd}"
  }
}
but Logstash failed to register the plugin itself:

[2019-01-03T10:20:40,342][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"alb-accesslogs", :plugin=>"#<LogStash::OutputDelegator:0x55486400>", :error=>"Explicit value for 'port' was declared, but it is different in one of the URLs given! Please make sure your URLs are inline with explicit values. The URLs have the property set to '80', but it was also set to '443' explicitly", :thread=>"#<Thread:0x2ce47965 run>"}
[2019-01-03T10:20:41,662][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"askadoc-accesslogs", :plugin=>"#<LogStash::OutputDelegator:0x7448ae6c>", :error=>"undefined method `match' for nil:NilClass\nDid you mean? catch", :thread=>"#<Thread:0x6e4d102c run>"}
[2019-01-03T10:20:41,671][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"askadoc-accesslogs", :exception=>#<NoMethodError: undefined method `match' for nil:NilClass
Did you mean? catch>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.177/lib/aws-sdk-core/endpoint_provider.rb:72:in `block in partition_matching_region'", "org/jruby/RubyEnumerable.java:643:in `find'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.177/lib/aws-sdk-core/endpoint_provider.rb:71:in `partition_matching_region'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.177/lib/aws-sdk-core/endpoint_provider.rb:60:in `get_partition'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.177/lib/aws-sdk-core/endpoint_provider.rb:14:in `signing_region'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.177/lib/aws-sdk-core/endpoint_provider.rb:89:in `signing_region'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.177/lib/aws-sdk-core/signers/v4.rb:46:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-amazon_es-6.4.0-java/lib/logstash/outputs/amazon_es/http_client/manticore_adapter.rb:111:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-amazon_es-6.4.0-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:291:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-amazon_es-6.4.0-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:245:in `block in healthcheck!'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-amazon_es-6.4.0-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:241:in `healthcheck!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-amazon_es-6.4.0-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:341:in `update_urls'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-amazon_es-6.4.0-java/lib/logstash/outputs/amazon_es/http_client/pool.rb:71:in `start'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:302:in `build_pool'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:64:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:103:in `create_http_client'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:99:in `build'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch.rb:234:in `build_client'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:102:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:46:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:242:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:594:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:263:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:200:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:160:in `block in start'"], :thread=>"#<Thread:0x6e4d102c run>"}
[2019-01-03T10:20:41,693][ERROR][logstash.agent ] Failed to execute action {:id=>:"askadoc-accesslogs", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[2019-01-03T10:20:42,101][ERROR][logstash.p
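
For what it's worth, the first error message is fairly self-describing: the amazon_es plugin sets an explicit default port of 443, which conflicts with the :80 embedded in your hosts URL. Going by the plugin's README, hosts should be the bare domain endpoint with no scheme and no port, along these lines (endpoint redacted as in your posts):

output {
  amazon_es {
    # No "http://" prefix and no ":80" suffix here
    hosts => ["search-mb-production-app-*********.us-west-2.es.amazonaws.com"]
    region => "us-west-2"
    index => "alb-accesslog-%{+YYYY.MM.dd}"
  }
}

The second failure, undefined method `match' for nil:NilClass inside aws-sdk-core's endpoint_provider.rb, comes from your other pipeline (askadoc-accesslogs) and looks like the region option resolving to nil there, so check that pipeline's amazon_es block as well.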

I have never used AWS ES nor the amazon_es plugin, so I will not be able to help much there.

I generally use our Elasticsearch Service instead, which works well with the standard elasticsearch output plugin. It also supports hot/warm architectures and provides access to our commercial features.

Hi Christian, can you help with this please?
I am trying to start Logstash and am getting the logs below:

Jan 03 11:49:52 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Unit logstash.service entered failed state.
Jan 03 11:49:52 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: logstash.service failed.
Jan 03 11:49:52 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Started logstash.
Jan 03 11:49:52 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Starting logstash...
Jan 03 12:32:16 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Stopping logstash...
Jan 03 12:33:38 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: logstash.service: main process exited, code=exited, status=143/n/a
Jan 03 12:33:38 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Unit logstash.service entered failed state.
Jan 03 12:33:38 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: logstash.service failed.
Jan 03 12:33:38 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Started logstash.
Jan 03 12:33:38 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Starting logstash...

What do the Logstash logs say?

I am trying to start Logstash with sudo service logstash start and getting these logs:

● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-01-03 12:48:05 UTC; 10s ago
Main PID: 14852 (logstash)
CGroup: /system.slice/logstash.service
├─14852 /bin/bash /usr/share/logstash/bin/logstash --path.settings /etc/logstash
└─14853 /bin/bash /usr/share/logstash/bin/logstash --path.settings /etc/logstash

Jan 03 12:48:05 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: logstash.service: main process exited, code=exited, status=143/n/a
Jan 03 12:48:05 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Unit logstash.service entered failed state.
Jan 03 12:48:05 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: logstash.service failed.
Jan 03 12:48:05 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Started logstash.
Jan 03 12:48:05 ip-172-31-40-86.us-west-2.compute.internal systemd[1]: Starting logstash..

It shows as active, but logstash.service: main process exited, code=exited, status=143/n/a.

Did you resolve the issue with the plugin you mentioned earlier? Does it start when not started as a service?
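
For example, stop the service and run Logstash in the foreground so any startup error prints straight to the console (binary and settings paths taken from your systemd output above; adjust if yours differ):

sudo systemctl stop logstash
/usr/share/logstash/bin/logstash --path.settings /etc/logstash

Also note that exit status 143 on its own usually just means the JVM received SIGTERM (128 + 15), for example from systemd stopping or restarting the unit, so the actual failure reason is more likely to be in /var/log/logstash/logstash-plain.log.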

Yes, I tried installing the amazon_es plugin, but it failed to install.

That is probably what is causing the problems, then.

Yeah, that is why I am uninstalling Logstash and reinstalling it.

How did you install the plugin? Did you look at the documentation about how to use it? As far as I know it is not a drop-in replacement for the elasticsearch plugin, and is configured differently.

Yes, I referred to the same documentation.

What does your config look like? Does it match the example in the documentation?

Sorry for the late reply. This is my conf file. I am not using the Elasticsearch output plugin now, but I am still getting the same error.

input {
  udp {
    port => 5978
  }
}

filter {
  grok {
    # Literal square brackets must be escaped in grok patterns,
    # otherwise they are treated as a regex character class
    match => { "message" => '(?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{WORD:appName}/%{WORD:containerId}\[%{INT:randomId:int}\]: %{GREEDYDATA:logMessage}' }
  }
}

output {
  file {
    path => "/var/log/logstash/application/docker-%{appName}-%{containerId}.log"
  }

  file {
    path => "/var/log/logstash/services/docker-%{appName}.log"
  }
}
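
One thing worth noting about those outputs: the file paths use sprintf references (%{appName}, %{containerId}), so any event the grok filter fails to match would be written to a path containing the literal text %{appName}. A minimal sketch of guarding against that with the standard _grokparsefailure tag (the fallback path is just an example):

output {
  if "_grokparsefailure" not in [tags] {
    file {
      path => "/var/log/logstash/application/docker-%{appName}-%{containerId}.log"
    }
    file {
      path => "/var/log/logstash/services/docker-%{appName}.log"
    }
  } else {
    # Example fallback file for events the grok pattern did not match
    file {
      path => "/var/log/logstash/unparsed.log"
    }
  }
}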