Thank you very much indeed.
I followed your suggestion, and things are now much better.
Here is what I did:
On Elasticsearch node 1:
- bin/elasticsearch-certutil ca
- bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns elasticnode1.com
- bin/elasticsearch-certutil cert --pem --ca /path/to/elastic-stack-ca.p12 --dns
- openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys -chain -out elastic-stack-ca.pem
- Copied the certs/PEMs/CRTs to the Kibana and Logstash node (they are co-located on the same server).
- Modified kibana.yml; Kibana started fine.
- Modified logstash.yml and pipeline.conf.
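For reference, a sketch of the same certutil steps but with the certificate's Subject Alternative Names covering both nodes at once (the hostnames are the ones from this post; output file names and paths are placeholders, and the exact flags should be checked against your certutil version):

```shell
# Generate the CA as before (hypothetical output name).
bin/elasticsearch-certutil ca --out elastic-stack-ca.p12

# Generate one certificate whose SANs list BOTH node hostnames,
# so hostname verification can succeed against either node.
bin/elasticsearch-certutil cert \
  --ca elastic-stack-ca.p12 \
  --dns elasticnode1.com,elasticnode2.com \
  --out elastic-certificates.p12
```

A certificate generated this way can be copied to both nodes and referenced from the same keystore/truststore paths already shown in the elasticsearch.yml below.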
Here I have an issue. I have 2 Elasticsearch nodes, node1 and node2, and I generated all the certs and CRTs on node1. Now, in logstash.yml, if I list both nodes for xpack.monitoring.elasticsearch.url, Logstash complains about node2, saying it cannot connect to it. If I put only node1 in xpack.monitoring.elasticsearch.url, it works fine...
I tried setting xpack.monitoring.elasticsearch.ssl.verification_mode to none, but got the same result.
In pipeline.conf, for the value of 'hosts', I can only use node1; node2 didn't work there either.
Here are the errors I got:
[2018-10-04T19:24:52,958][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://logstash_ingest:xxxxxx@elasticnode1.com:9200/"}
[2018-10-04T19:24:53,015][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-10-04T19:25:03,047][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://logstash_ingest:xxxxxx@elasticnode2.com:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://logstash_ingest:xxxxxx@elasticnode2.com:9200/][Manticore::ConnectTimeout] Read timed out"}
...
[2018-10-04T20:21:50,389][WARN ][logstash.outputs.elasticsearch] Error while performing resurrection {:error_message=>"Host name 'elasticnode2.com' does not match the certificate subject provided by the peer (CN=instance)", :class=>"Manticore::UnknownException", :backtrace=>
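The last warning ("does not match the certificate subject provided by the peer (CN=instance)") indicates the certificate node2 presents contains no SAN entry for elasticnode2.com. One way to confirm which names a certificate actually covers (the certificate path here is a placeholder, not from this post):

```shell
# Print the subject and Subject Alternative Names of a certificate.
# If elasticnode2.com is missing from the SAN list, hostname
# verification against that node will fail with exactly this error.
openssl x509 -in /path/to/instance.crt -noout -subject -text \
  | grep -A1 "Subject Alternative Name"
```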
Here is the logstash.yml:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: ["https://elasticnode1.com:9200", "https://elasticnode2.com:9200"]
#xpack.monitoring.elasticsearch.url: ["https://elasticnode1.com:9200"]
xpack.monitoring.elasticsearch.ssl.ca: "/etc/logstash/keys/elastic-stack-ca.pem"
xpack.monitoring.elasticsearch.ssl.verification_mode: none
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: 60s
#xpack.monitoring.collection.pipeline.details.enabled: true
Here is the pipeline.conf:
output {
  elasticsearch {
    user => "logstash_system"
    password => "changeme"
    ssl => true
    ssl_certificate_verification => true
    cacert => "/etc/logstash/keys/elastic-stack-ca.pem"
    action => "index"
    hosts => ["elasticnode1.com","elasticnode2.com"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
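Incidentally, the resurrect log messages show Logstash expanding these hosts to https://...:9200 on its own; writing the scheme and port explicitly in hosts is a cheap way to rule that expansion out (a sketch using the hostnames from this post, not a confirmed fix):

```
hosts => ["https://elasticnode1.com:9200", "https://elasticnode2.com:9200"]
```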
Here is the elasticsearch.yml (both nodes are the same):
discovery.zen.ping.unicast.hosts: ["elasticnode1.com", "elasticnode2.com"]
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.interval: 60s
xpack.monitoring.collection.cluster.stats.timeout: 60s
xpack.monitoring.history.duration: 90d
xpack.watcher.history.cleaner_service.enabled: true
xpack.http.proxy.host: 'ourproxyhostname.com'
xpack.http.proxy.port: 3128
xpack.watcher.enabled: true
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/keys/elastic-certificates.p12
xpack:
  security:
    authc:
      realms:
        active_directory:
          type: active_directory
          order: 0
          domain_name: xxx.yyy.com
          files.role_mapping: /etc/elasticsearch/role_mapping.yml
          bind_dn: CN=admin,CN=Users,DC=xxx,DC=yyy,DC=com
          bind_password: password
The problem is that if elasticnode1 goes down, we will lose the connection between Logstash and the Elasticsearch cluster (all the Beats come in via Logstash). Could you please take a look and help?
Again, thank you very much for your help.
Li