Error running Logstash

This error persists and I don't know how to solve it anymore. Can someone help me? Here is my .conf file:

input {
  file {
    path => "C:/Elastic/logstash-8.9.0/config/logs.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:message}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logall12"
    user => "elastic"
    password => "my pass"
  }
  stdout {
    codec => rubydebug
  }
}
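As an aside, the grok pattern in the filter above roughly corresponds to the following regular expression (a simplified sketch; the real TIMESTAMP_ISO8601, LOGLEVEL, and GREEDYDATA grok definitions are more permissive than these stand-ins):

```python
import re

# Simplified stand-ins for the grok patterns used in the filter.
pattern = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?)\s+"
    r"(?P<loglevel>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"
    r"(?P<message>.*)"
)

# A sample line in the shape the filter expects.
line = "2023-08-11 18:12:35,716 INFO service started"
m = pattern.match(line)
print(m.groupdict())
```

Lines that do not start with a timestamp and a log level will fail to match, and grok would tag them with `_grokparsefailure` instead.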

Output message:

[2023-08-11T18:12:35,716][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[2023-08-11T18:12:35,717][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}

Is ES up and running? Is port 9200 opened on firewall?
Are you using http or https?
What is setting for network.host in elasticsearch.yml? Try with network.host: 0.0.0.0

Yes, it is running, and so is Kibana:
{
  "name" : "myname",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "CmAuozqtSLi6buf-i-IXlQ",
  "version" : {
    "number" : "8.8.2",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "98e1271edf932a480e4262a471281f1ee295ce6b",
    "build_date" : "2023-06-26T05:16:16.196344851Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Message:

[2023-08-11T18:49:19,241][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[2023-08-11T18:49:19,243][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}

Can you share your elasticsearch.yml file?

Thank you for your attention. Follow the file.

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# ---------------------------------- Network -----------------------------------
#
#network.host: 192.168.0.1
network.host: 0.0.0.0

http.port: 9200

# --------------------------------- Discovery ----------------------------------

discovery.seed_hosts: ["http://localhost:9200", "0.0.0.0" ]

# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 19-07-2023 16:25:23
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["PROGRA02"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

Please use the </> icon for formatting.

Change to HTTPS in the output:

hosts => ["https://localhost:9200"]
ssl => true
cacert => "/path/to/http_ca.crt"
ssl_verify_mode => "none" # or full

Sorry for the formatting... but it didn't work; the error persists.

My elasticsearch.yml:

# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#cluster.name: my-application
# ------------------------------------ Node ------------------------------------
#node.name: node-1
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
#path.data: /path/to/data
# Path to log files:
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
#bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
network.host: 0.0.0.0
http.port: 9200
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["https://localhost:9200"]
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 19-07-2023 16:25:23
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: none
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["PROGRA02"]
http.host: 0.0.0.0
transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

The Logstash:

[2023-08-12T18:12:07,692][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[2023-08-12T18:12:07,693][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}

What did you set in the output?

The Logstash output?

You have xpack.security.http.ssl.enabled set to true, so your Logstash output needs to use https, not http.

Your Logstash output is still using http; you need to change it to https and configure the cacert option as mentioned in a previous answer.


Thanks for the answer, but the errors persist.
My Logstash config:

input {
  file {
    path => "C:/Elastic/logstash-8.9.0/config/logs.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:message}" }
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "logall"
    user => "elastic"
    password => "my pass"
  }
  stdout {
    codec => rubydebug
  }
}

my test:

C:\Elastic\logstash-8.9.0\bin>logstash --config.test_and_exit -f C:\Elastic\logstash-8.9.0\config\logstash.conf
Configuration OK
[2023-08-14T14:12:49,643][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

C:\Elastic\logstash-8.9.0\bin>logstash -f C:\Elastic\logstash-8.9.0\config\logstash.conf
[2023-08-14T14:23:04,902][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::JavaxNetSsl::SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target>}

This is a different error, it is related to the certificate.

As mentioned in previous answers you need to configure the path for the certificate authority you used with the cacert option.

Your elasticsearch output does not have the cacert option configured.
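Putting that advice together, the missing piece would look roughly like this (a sketch, not a definitive fix; the cacert path is an assumption — point it at the http_ca.crt that Elasticsearch generates under its config/certs directory on a default install):

```
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "logall"
    user => "elastic"
    password => "my pass"
    ssl => true
    # Example path only -- use the location of your own auto-generated CA:
    cacert => "C:/Elastic/elasticsearch-8.8.2/config/certs/http_ca.crt"
  }
}
```

With the CA configured, the client can validate the server certificate and the PKIX path-building error should go away.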

Thank you, Leandro, for the help. We are giving up on the project and moving to MongoDB.

No problem, but as mentioned, the errors you got were due to incomplete configuration.

If MongoDB is a viable alternative for your use case, it may well be better to use Mongo, since its configuration and management are simpler. But although Elasticsearch and MongoDB can do the same job for some use cases, that won't always hold.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.