output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logall12"
    user => "elastic"
    password => "my pass"
  }
  stdout {
    codec => rubydebug
  }
}
The outgoing message:
[2023-08-11T18:12:35,716][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[2023-08-11T18:12:35,717][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
Is ES up and running? Is port 9200 open on the firewall?
Are you using http or https?
What is the setting for network.host in elasticsearch.yml? Try with network.host: 0.0.0.0
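A quick way to tell which protocol the node is actually answering on is to probe port 9200 with curl over both schemes. This is only a rough check; the CA path below follows the Elasticsearch 8.x security auto-configuration layout and may differ on your install:

# Plain HTTP: if TLS was auto-enabled on the HTTP layer, the node closes the
# connection and curl reports an empty reply (the same "failed to respond" Logstash sees)
curl -v http://localhost:9200

# HTTPS, trusting the CA certificate generated during auto-configuration
curl --cacert /path/to/elasticsearch/config/certs/http_ca.crt -u elastic https://localhost:9200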
Yes, it is running, and so is Kibana:
{
  "name" : "myname",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "CmAuozqtSLi6buf-i-IXlQ",
  "version" : {
    "number" : "8.8.2",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "98e1271edf932a480e4262a471281f1ee295ce6b",
    "build_date" : "2023-06-26T05:16:16.196344851Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Message:
[2023-08-11T18:49:19,241][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[2023-08-11T18:49:19,243][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
Sorry for the formatting... but it didn't work, the error persists:
My elasticsearch.yml:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#cluster.name: my-application
# ------------------------------------ Node ------------------------------------
#node.name: node-1
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
#path.data: /path/to/data
# Path to log files:
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
#bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
network.host: 0.0.0.0
http.port: 9200
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["https://localhost:9200"]
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 19-07-2023 16:25:23
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: none
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["PROGRA02"]
http.host: 0.0.0.0
transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
The Logstash log:
[2023-08-12T18:12:07,692][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[2023-08-12T18:12:07,693][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
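Looking at the elasticsearch.yml above, xpack.security.http.ssl.enabled is true, so the node only accepts HTTPS on port 9200 and closes plain HTTP connections, which matches the "localhost:9200 failed to respond" in these logs while the output is still pointed at http://. Below is a minimal sketch of the output block over HTTPS; the CA path is an assumption based on the Elasticsearch 8.x auto-configuration layout, so adjust it (and the credentials) to your install:

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]   # https, because the HTTP layer has TLS enabled
    index => "logall12"
    user => "elastic"
    password => "my pass"
    # Trust the CA generated during security auto-configuration
    # (older versions of the elasticsearch output expose this as the cacert option)
    ssl_certificate_authorities => ["/path/to/elasticsearch/config/certs/http_ca.crt"]
  }
  stdout {
    codec => rubydebug
  }
}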
No worries, but as mentioned, the errors you received were due to incomplete configuration.
If MongoDB can be an alternative for your use case, it may be better to just use Mongo, since its configuration and management are simpler; but although Elasticsearch and MongoDB do the same thing for some use cases, that won't always hold.