Logstash throws 'Host name does not match the certificate subject provided by the peer' error

Hi All,

I recently installed a three-node cluster in a Vagrant box to test out the security features of Elasticsearch.

I followed the guides in the below order:

  1. Set up basic security for the Elastic Stack | Elasticsearch Guide [7.12] | Elastic
  2. Set up minimal security for Elasticsearch | Elasticsearch Guide [7.12] | Elastic
  3. Set up basic security for the Elastic Stack plus secured HTTPS traffic | Elasticsearch Guide [7.12] | Elastic

After successfully setting up security in the cluster, including Metricbeat to ship the Elasticsearch nodes' monitoring data, I moved on to testing Logstash by pushing a simple Apache log with the pipeline below:

    input {
      file {
        path => "/vagrant/apache_test.log"
        start_position => "beginning"
        sincedb_path => "/dev/null"
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMMONAPACHELOG}" }
      }
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
      geoip {
        source => "clientip"
      }
    }
    output {
      elasticsearch {
        hosts => ["https://node1.elasticsearch.com:9200", "https://node2.elasticsearch.com:9200", "https://node3.elasticsearch.com:9200"]
        user => "logstash_system"
        password => "XXXXXX"
        cacert => "/opt/elk/logstash-7.12.0/config/elasticsearch-ca.pem"
      }
      stdout { codec => rubydebug }
    }

But I ended up with the below error:

[2021-04-19T19:33:30,554][ERROR][logstash.javapipeline ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Host name 'node1.elasticsearch.com' does not match the certificate subject provided by the peer (CN=node1, DC=elasticsearch, DC=com)>

When I redirect the output to a file like below, it works:

    output {
      file {
        path => "/vagrant/output.log"
      }
    }

I suspect it has something to do with the way the certificate was created per the guides above. I followed them by the book, and the cluster itself is fine: I can manage it without any issues through Kibana, and Metricbeat is able to push data to the cluster after specifying the certificate information along with the credentials in the module's YAML file.
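
To check exactly which names a node's certificate presents, this openssl one-liner helps (a sketch, assuming openssl is installed on the Logstash host and the node is reachable; the host and port are the ones from my setup):

    # Dump the subject and subject alternative names of the certificate
    # that node1 presents on its HTTPS port (-ext needs openssl 1.1.1+;
    # on older versions use -text and look for "X509v3 Subject Alternative Name")
    echo | openssl s_client -connect node1.elasticsearch.com:9200 2>/dev/null \
      | openssl x509 -noout -subject -ext subjectAltName

Hostname verification compares the name in the URL against the certificate's SANs (falling back to the CN only when no SAN is present), so a subject of CN=node1, DC=elasticsearch, DC=com does not match node1.elasticsearch.com.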

Any help would be greatly appreciated :slight_smile:

Thanks in advance!

I was able to solve this issue by making the below change to the output:

    output {
      elasticsearch {
        hosts => ["https://node1.elasticsearch.com:9200", "https://node2.elasticsearch.com:9200", "https://node3.elasticsearch.com:9200"]
        user => "logstash_writer"
        password => "changeme"
        ssl_certificate_verification => false
        cacert => "/opt/elk/logstash-7.12.0/config/elasticsearch-ca.pem"
      }
      stdout { codec => rubydebug }
    }

I included ssl_certificate_verification => false to get past the error; without it, the pipeline still fails. Note that this effectively turns off TLS peer verification, so it is a workaround rather than a fix.

Is there any way to solve this issue without disabling certificate verification?

I found the root cause:

When creating the node certificates, we are supposed to include the IP addresses and DNS names, like below:

    ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 10.0.0.5,10.0.0.6,10.0.0.7 --dns node1.es.com,node2.es.com,logstash.es.com

Be sure to include the DNS name/IP of the Logstash node as well while creating the node certificates. You can verify the result with the check below before restarting the nodes.
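
A quick sanity check on the regenerated certificate (a sketch; it assumes certutil's default elastic-certificates.p12 output file and a blank password, so adjust both if you chose otherwise):

    # Extract the node certificate from the PKCS#12 archive and print
    # its subject and subject alternative names (-ext needs openssl 1.1.1+)
    openssl pkcs12 -in elastic-certificates.p12 -nokeys -passin pass: \
      | openssl x509 -noout -subject -ext subjectAltName

Every hostname and IP that appears in the Logstash (or Beats) hosts list should show up in that SAN output.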

This solved it: even without ssl_certificate_verification => false, Logstash pushes the data now.

We can use the same elasticsearch-ca.pem certificate created by the ./bin/elasticsearch-certutil http process for Kibana, Logstash & Beats.
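
Before wiring that pem into Logstash, a quick curl check confirms it works end to end (this uses the paths, host, and user from this thread; adjust to your environment):

    # If this succeeds without -k/--insecure, the CA chain and hostname
    # verification are both fine, and Logstash will accept the certificate too
    curl --cacert /opt/elk/logstash-7.12.0/config/elasticsearch-ca.pem \
      -u logstash_writer https://node1.elasticsearch.com:9200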

Hope this helps others like me who were struggling to get past this error :slight_smile:
