Extract data from Elasticsearch using a PEM cert

Hello,

I'm new to the ELK stack, and we are trying to extract data from Elasticsearch on an existing system via a Logstash pipeline.

Below is the Logstash input configuration being used to extract the data (my output is targeted at Kusto).

input {
  elasticsearch {
    hosts    => ["https://ipaddress/"]
    index    => ""
    user     => ""
    password => ""
    ssl      => true
    ca_file  => "path/cert.pem"
  }
}

My Elasticsearch is accessible through a public IP address, with a username, password, and SSL certificate used for authentication. This infrastructure is in a private cloud.

I'm trying to run this Logstash pipeline from my sandbox to extract the data and am seeing the error below.

Can someone shed some light on this error?

E:\ELK\logstash\bin>logstash -f e:\elk\logstash\config\logstash.conf
"Using bundled JDK: E:\ELK\logstash\jdk\bin\java.exe"
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to E:/ELK/logstash/logs which is now configured via log4j2.properties
[2022-04-20T11:49:17,400][INFO ][logstash.runner          ] Log4j configuration path used is: E:\ELK\logstash\config\log4j2.properties
[2022-04-20T11:49:17,400][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.1.2", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.14.1+1 on 11.0.14.1+1 +indy +jit [mswin32-x86_64]"}
[2022-04-20T11:49:17,415][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-04-20T11:49:17,479][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-04-20T11:49:18,819][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-04-20T11:49:19,270][INFO ][org.reflections.Reflections] Reflections took 50 ms to scan 1 urls, producing 120 keys and 419 values
[2022-04-20T11:49:19,600][INFO ][logstash.codecs.jsonlines] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2022-04-20T11:49:19,631][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-04-20T11:49:19,930][INFO ][com.microsoft.azure.kusto.ingest.QueuedIngestClient][main] Creating a new IngestClient
[2022-04-20T11:49:19,961][INFO ][com.microsoft.azure.kusto.ingest.ResourceManager][main] Refreshing Ingestion Auth Token
[2022-04-20T11:49:19,992][INFO ][logstash.outputs.kusto   ][main] Going to recover old files in path
[2022-04-20T11:49:20,008][INFO ][logstash.outputs.kusto   ][main] Found 0 old file(s), sending them now...
[2022-04-20T11:49:20,087][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["e:/elk/logstash/config/logstash.conf"], :thread=>"#<Thread:0x2b42665d run>"}
[2022-04-20T11:49:20,824][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.74}
[2022-04-20T11:49:20,934][INFO ][com.microsoft.azure.kusto.ingest.ResourceManager][main] Refreshing Ingestion Resources
[2022-04-20T11:49:22,049][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Certificate for <"ipaddress removed here"> doesn't match any of the subject alternative names: ["DNS Name removed here"]>, :backtrace=>["E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:36:in `block in initialize'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:79:in `call'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:274:in `call_once'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:158:in `code'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.1/lib/elasticsearch/transport/transport/http/manticore.rb:112:in `block in perform_request'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.1/lib/elasticsearch/transport/transport/base.rb:288:in `perform_request'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.1/lib/elasticsearch/transport/transport/http/manticore.rb:91:in `perform_request'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.1/lib/elasticsearch/transport/client.rb:197:in `perform_request'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-7.17.1/lib/elasticsearch.rb:93:in `elasticsearch_validation_request'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-7.17.1/lib/elasticsearch.rb:51:in `verify_elasticsearch'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-7.17.1/lib/elasticsearch.rb:40:in `method_missing'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-7.17.1/lib/elasticsearch/api/actions/ping.rb:38:in `ping'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.12.2/lib/logstash/inputs/elasticsearch.rb:479:in `test_connection!'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.12.2/lib/logstash/inputs/elasticsearch.rb:243:in `register'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-mixin-ecs_compatibility_support-1.3.0-java/lib/logstash/plugin_mixins/ecs_compatibility_support/target_check.rb:48:in `register'", "E:/ELK/logstash/logstash-core/lib/logstash/java_pipeline.rb:232:in `block in register_plugins'", "org/jruby/RubyArray.java:1821:in `each'", "E:/ELK/logstash/logstash-core/lib/logstash/java_pipeline.rb:231:in `register_plugins'", "E:/ELK/logstash/logstash-core/lib/logstash/java_pipeline.rb:390:in `start_inputs'", "E:/ELK/logstash/logstash-core/lib/logstash/java_pipeline.rb:315:in `start_workers'", "E:/ELK/logstash/logstash-core/lib/logstash/java_pipeline.rb:189:in `run'", "E:/ELK/logstash/logstash-core/lib/logstash/java_pipeline.rb:141:in `block in start'"], "pipeline.sources"=>["e:/elk/logstash/config/logstash.conf"], :thread=>"#<Thread:0x2b42665d run>"}
[2022-04-20T11:49:22,049][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2022-04-20T11:49:22,096][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2022-04-20T11:49:22,191][INFO ][logstash.runner          ] Logstash shut down.
[2022-04-20T11:49:22,206][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby.jar:?]
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby.jar:?]
        at E_3a_.ELK.logstash.lib.bootstrap.environment.<main>(E:\ELK\logstash\lib\bootstrap\environment.rb:94) ~[?:?]

You have used an IP address in the elasticsearch input. The IP or name used to connect to Elasticsearch has to match the CN or one of the SANs on the cert. You cannot use a mismatched certificate.

I suggest you change your configuration to use one of the names in the cert.
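
For example, a minimal sketch of the input using a name instead of an IP (es.example.internal is a hypothetical placeholder for whatever name actually appears in your cert's CN/SAN):

input {
  elasticsearch {
    hosts    => ["https://es.example.internal:9200"]
    user     => "..."
    password => "..."
    ssl      => true
    ca_file  => "path/cert.pem"
  }
}

The TLS hostname check is done against whatever you put in hosts, so that is the only value that needs to change.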

Thanks very much for the quick response. The certificate (PEM) I was using for ES auth is currently used in my higher environment, where the ES integration is working. I was trying to look up the SANs for the PEM using the command below:

openssl x509 -in cert.pem -noout -text

However, this isn't showing me any SAN details. Do you have any recommendation on how to check the SANs for this PEM?
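
For reference, when a cert does contain SANs, the -text dump includes a section like this (values here are hypothetical):

X509v3 extensions:
    X509v3 Subject Alternative Name:
        DNS:es.example.internal, IP Address:203.0.113.10

On OpenSSL 1.1.1+ you should also be able to print just that extension with openssl x509 -in cert.pem -noout -ext subjectAltName.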

That suggests that there are no SANs in that certificate. In that case you are going to have to get the hostname in the CN to resolve to the IP address you want to use. (Or generate a matching cert.)
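
A quick way to test that without touching DNS (a sketch, assuming a Windows sandbox and a hypothetical name es.example.internal taken from the cert) is a hosts-file entry mapping the name to the IP, then using that name in the input:

# C:\Windows\System32\drivers\etc\hosts
203.0.113.10    es.example.internal

With that in place, hosts => ["https://es.example.internal:9200"] in the elasticsearch input should pass the hostname check.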


Is there a way to disable SSL auth when connecting to ES? I removed the ssl => true and ca_file properties, but Logstash still expects a cert, based on the error below:

[2022-04-20T14:33:53,072][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<Manticore::ClientProtocolException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target>, :backtrace=>["E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:36:in `block in initialize'", "E:/ELK/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:79:in `call'", 

You would need to reconfigure the Elasticsearch server.
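
If you really wanted to turn TLS off on the HTTP layer (generally not recommended), that would be a change in elasticsearch.yml on the server, roughly like this (a sketch; the exact settings depend on your Elasticsearch version and how security was originally configured):

# elasticsearch.yml on the Elasticsearch nodes
xpack.security.http.ssl.enabled: false

The input would then use hosts => ["http://..."] with no ssl/ca_file options. Fixing the certificate/hostname mismatch is the better option.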

My bad, I was using the wrong PEM file. I do actually see SAN names in the output, and they match the SAN name in the original exception thrown by Logstash.

Even though my PEM file has the right SAN name, it's still the same issue. :frowning:

Also, is there any way to pass a .PFX file?
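
As far as I understand, ca_file expects a PEM file, so I'm guessing a PFX/PKCS#12 bundle would first need converting with openssl, something like this (cert.pfx being a placeholder filename):

openssl pkcs12 -in cert.pfx -nokeys -out cert.pem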

Does the "ipaddress" here exactly match the SAN?

No, the IP address/DNS of Elasticsearch is different. They don't even match in my prod environment. My Logstash VM is outputting data to ES using the same cert and it works fine. I'm trying to replicate the scenario in my sandbox and it's not working.
