I have been trying to configure the ELK stack to use our wildcard certificate, but so far without success.
Logstash and Kibana are unable to talk to Elasticsearch, yet I can curl it with no issues.
Result of running curl from inside the Logstash container:
curl https://logstash_internal:XXXXXXXXX@watch.test.com:9200
{
  "name" : "elasticsearch",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "NLHeQ64XXaGxf6Yo6yvgzQ",
  "version" : {
    "number" : "8.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2d58d0f136141f03239816a4e360a8d17b6d8f29",
    "build_date" : "2023-02-13T09:35:20.314882762Z",
    "build_snapshot" : false,
    "lucene_version" : "9.4.2",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
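For reference, these are the extra checks that can be run from the Docker host to see how the hostname resolves inside the Logstash container and whether the mounted CA validates the endpoint (a sketch; the CA path comes from the docker-compose file further below, and XXXXXXXXX stands for the real password):

# Which IP does watch.test.com resolve to inside the Logstash container?
docker compose exec logstash getent hosts watch.test.com

# Repeat the curl, but validate the chain against the CA mounted into the container.
docker compose exec logstash curl --cacert /usr/share/logstash/config/ca.crt -u logstash_internal:XXXXXXXXX https://watch.test.com:9200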
At this point I don't know what else to try to make it work.
If I access https://watch.test.com:9200 in a browser and enter the credentials, or even use the URL from the curl command above, I get the same result, even from my home PC. The browser says the SSL certificate is valid.
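For completeness, the certificate served on port 9200 can also be inspected from any external machine with openssl (a sketch, nothing in it is specific to this stack):

# Print subject, issuer and validity dates of the certificate served on 9200.
openssl s_client -connect watch.test.com:9200 -servername watch.test.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates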
Here are the errors reported by Logstash:
[2023-03-30T18:15:01,766][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash_internal:xxxxxx@watch.test.com:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://watch.test.com:9200/][Manticore::SocketException] Connect to watch.test.com:9200 [watch.test.com/54.254.XXX.XXX] failed: Connection refused"}
[2023-03-30T18:15:01,788][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[2023-03-30T18:15:01,795][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2023-03-30T18:15:01,828][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3817802b@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 run>"}
[2023-03-30T18:15:03,321][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.49}
[2023-03-30T18:15:03,658][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-03-30T18:15:03,679][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-03-30T18:15:03,691][INFO ][logstash.inputs.tcp ][main][b9cea7c297207df5881de58c925b55cd8d1d788bf29e11d3ac8e34a4ae41f88b] Starting tcp input listener {:address=>"0.0.0.0:50000", :ssl_enable=>false}
[2023-03-30T18:15:03,698][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-03-30T18:15:03,706][INFO ][org.logstash.beats.Server][main][03c85e1d2b86bde6aa278cdfa45f032538413a667971de97c249eb8e1e1ea574] Starting server on port: 5044
[2023-03-30T18:15:06,799][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to watch.test.com:9200 [watch.test.com/54.254.XXX.XXX] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to watch.test.com:9200 [watch.test.com/54.254.XXX.XXX] failed: Connection refused>}
[2023-03-30T18:15:06,801][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash_internal:xxxxxx@watch.test.com:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://watch.test.com:9200/][Manticore::SocketException] Connect to watch.test.com:9200 [watch.test.com/54.254.XXX.XXX] failed: Connection refused"}
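Since curl succeeds but the output plugin reports "Connection refused" against the public IP, comparing the two network paths from inside the Logstash container may narrow it down (a sketch; elasticsearch is the compose service name on the elk network defined further below, and -k only skips hostname verification for the in-network test):

# 1) In-network path: reach Elasticsearch via its compose service name.
docker compose exec logstash curl -sk -o /dev/null -w '%{http_code}\n' -u logstash_internal:XXXXXXXXX https://elasticsearch:9200

# 2) External path: reach it via the public hostname, exactly as the output plugin does.
docker compose exec logstash curl -s -o /dev/null -w '%{http_code}\n' --cacert /usr/share/logstash/config/ca.crt -u logstash_internal:XXXXXXXXX https://watch.test.com:9200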
And yes, all the firewall ports are open (a reachability check is sketched right after this list):
- tcp:50000
- udp:50000
- tcp:9200
- tcp:9600
- tcp:9300
- tcp:5044
- tcp:5601
- tcp:8220
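As a sanity check, the published TCP ports can be probed from an external machine (a sketch, assuming a netcat build that supports -z):

# Probe each published TCP port from outside the Docker host.
for port in 9200 9300 9600 5044 5601 8220 50000; do nc -vz -w 3 watch.test.com "$port"; done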
Below is the docker-compose.yml used:
version: "2.3"

services:

  # The 'tls' service runs a one-off script which initializes TLS certificates and
  # private keys for all components of the stack inside the local tls/ directory.
  #
  # This task only needs to be performed once, *before* the first stack startup.
  #
  # By default, it is excluded from the services started by 'docker compose up'
  # due to the non-default profile it belongs to. To run it, either provide the
  # '--profile=setup' CLI flag to Compose commands, or "up" the service by name
  # such as 'docker compose up tls'.
  tls:
    profiles:
      - setup
    build:
      context: tls/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    user: root  # ensures we can write to the local tls/ directory.
    init: true
    volumes:
      - ./tls/entrypoint.sh:/entrypoint.sh:ro,Z
      - ./tls/instances.yml:/usr/share/elasticsearch/tls/instances.yml:ro,Z
      - ./tls/certs:/usr/share/elasticsearch/tls/certs:z

  # The 'setup' service runs a one-off script which initializes users inside
  # Elasticsearch — such as 'logstash_internal' and 'kibana_system' — with the
  # values of the passwords defined in the '.env' file.
  #
  # This task is only performed during the *initial* startup of the stack. On all
  # subsequent runs, the service simply returns immediately, without performing
  # any modification to existing users.
  setup:
    build:
      context: setup/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    init: true
    volumes:
      - ./setup/entrypoint.sh:/entrypoint.sh:ro,Z
      - ./setup/lib.sh:/lib.sh:ro,Z
      - ./setup/roles:/roles:ro,Z
      - setup:/state:Z
      # (!) CA certificate. Generate using the 'tls' service.
      - ./tls/certs/ca/ca.crt:/ca.crt:ro,z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
      METRICBEAT_INTERNAL_PASSWORD: ${METRICBEAT_INTERNAL_PASSWORD:-}
      FILEBEAT_INTERNAL_PASSWORD: ${FILEBEAT_INTERNAL_PASSWORD:-}
      HEARTBEAT_INTERNAL_PASSWORD: ${HEARTBEAT_INTERNAL_PASSWORD:-}
      MONITORING_INTERNAL_PASSWORD: ${MONITORING_INTERNAL_PASSWORD:-}
      BEATS_SYSTEM_PASSWORD: ${BEATS_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch

  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,Z
      - elasticsearch:/usr/share/elasticsearch/data:Z
      # (!) TLS certificates. Generate using the 'tls' service.
      - ./tls/certs/ca/ca.crt:/usr/share/elasticsearch/config/ca.crt:ro,z
      - ./tls/certs/cert/wildcard.crt:/usr/share/elasticsearch/config/elasticsearch.crt:ro,z
      - ./tls/certs/cert/wildcard.key:/usr/share/elasticsearch/config/elasticsearch.key:ro,z
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      node.name: elasticsearch
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      # Bootstrap password.
      # Used to initialize the keystore during the initial startup of
      # Elasticsearch. Ignored on subsequent runs.
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk
    restart: unless-stopped
    mem_limit: 1g

  logstash:
    build:
      context: logstash/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,Z
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,Z
      # (!) CA certificate. Generate using the 'tls' service.
      - ./tls/certs/ca/ca.crt:/usr/share/logstash/config/ca.crt:ro,z
    ports:
      - 5044:5044
      - 50000:50000/tcp
      - 50000:50000/udp
      - 9600:9600
    environment:
      LS_JAVA_OPTS: -Xms256m -Xmx256m
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped
    mem_limit: 450m

  kibana:
    build:
      context: kibana/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro,Z
      # (!) TLS certificates. Generate using the 'tls' service.
      - ./tls/certs/ca/ca.crt:/usr/share/kibana/config/ca.crt:ro,z
      - ./tls/certs/cert/wildcard.crt:/usr/share/kibana/config/kibana.crt:ro,Z
      - ./tls/certs/cert/wildcard.key:/usr/share/kibana/config/kibana.key:ro,Z
      #- ./kibana/config/kibana.crt:/usr/share/kibana/config/kibana.crt:ro,Z
      #- ./kibana/config/kibana.key:/usr/share/kibana/config/kibana.key:ro,Z
    ports:
      - 5601:5601
    environment:
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped
    mem_limit: 450m

networks:
  elk:
    driver: bridge

volumes:
  setup:
  elasticsearch:
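Given the 9200:9200 mapping above, the host-side binding can be double-checked on the Docker host like this (a sketch):

# Is the Elasticsearch container up, and is 9200 actually published on the host?
docker compose ps elasticsearch
docker port "$(docker compose ps -q elasticsearch)" 9200
ss -ltn | grep ':9200'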
Below is logstash/config/logstash.yml:
---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/main/docker/data/logstash/config/logstash-full.yml
#
http.host: 0.0.0.0
monitoring.elasticsearch.hosts: https://watch.test.com:9200
node.name: logstash
Below is logstash/pipeline/logstash.conf:
input {
  beats {
    port => 5044
  }

  tcp {
    port => 50000
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["https://watch.test.com:9200"]
    user => "logstash_internal"
    password => "${LOGSTASH_INTERNAL_PASSWORD}"
    ssl => true
    cacert => "config/ca.crt"
  }
}
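To confirm that events actually flow through this pipeline (and are then retried against the "dead" Elasticsearch host), a test event can be pushed into the tcp input from the Docker host, since port 50000 is published (a sketch; the redirection requires bash for /dev/tcp):

# Send a one-line test event into the tcp input, then watch the Logstash logs.
echo '{"message":"pipeline connectivity test"}' > /dev/tcp/localhost/50000
docker compose logs -f --tail=50 logstash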
Below is elasticsearch/config/elasticsearch.yml:
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/main/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: docker-cluster
network.host: 0.0.0.0
## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true
##
## TLS configuration
## See instructions from README to enable.
##
## Communications between nodes in a cluster
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-transport
#
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.certificate_authorities: [ ca.crt ]
xpack.security.transport.ssl.certificate: elasticsearch.crt
xpack.security.transport.ssl.key: elasticsearch.key
## HTTP client communications
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-http
#
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.certificate_authorities: [ ca.crt ]
xpack.security.http.ssl.certificate: elasticsearch.crt
xpack.security.http.ssl.key: elasticsearch.key
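Because the same wildcard pair is mounted as elasticsearch.crt/elasticsearch.key, it may also be worth confirming that the certificate and key actually belong together (a sketch run from the project directory, assuming an RSA key; an EC key would need openssl ec instead of openssl rsa):

# The two digests must be identical if the certificate matches the key.
openssl x509 -noout -modulus -in tls/certs/cert/wildcard.crt | openssl md5
openssl rsa -noout -modulus -in tls/certs/cert/wildcard.key | openssl md5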
Below is kibana/config/kibana.yml:
---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/main/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.ts
#
server.name: kibana
server.host: 0.0.0.0
server.publicBaseUrl: https://watch.test.com:5601
elasticsearch.hosts: [ https://watch.test.com:9200 ]
monitoring.ui.container.elasticsearch.enabled: true
monitoring.ui.container.logstash.enabled: true
## X-Pack security credentials
#
elasticsearch.username: kibana_system
elasticsearch.password: ${KIBANA_SYSTEM_PASSWORD}
##
## TLS configuration
## See instructions from README to enable.
##
## Communications between Kibana and Elasticsearch
## see https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-kib-es
#
elasticsearch.ssl.certificateAuthorities: [ config/ca.crt ]
## Communications between web browsers and Kibana
## see https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-browser-kib
#
server.ssl.enabled: true
server.ssl.certificate: config/kibana.crt
server.ssl.key: config/kibana.key
## Fleet
## https://www.elastic.co/guide/en/kibana/current/fleet-settings-kb.html
#
xpack.fleet.agents.fleet_server.hosts: [ https://watch.test.com:8220 ]
xpack.fleet.outputs:
  - id: fleet-default-output
    name: default
    type: elasticsearch
    hosts: [ https://watch.test.com:9200 ]
    # Set to output of 'docker-compose up tls'. Example:
    is_default: true
    is_default_monitoring: true

xpack.fleet.packages:
  - name: fleet_server
    version: latest
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: apm
    version: latest

xpack.fleet.agentPolicies:
  - name: Fleet Server Policy
    id: fleet-server-policy
    description: Static agent policy for Fleet Server
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: fleet_server-1
        package:
          name: fleet_server
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
  - name: Agent Policy APM Server
    id: agent-policy-apm-server
    description: Static agent policy for the APM Server integration
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
      - name: apm-1
        package:
          name: apm
        # See the APM package manifest for a list of possible inputs.
        # https://github.com/elastic/apm-server/blob/v8.5.0/apmpackage/apm/manifest.yml#L41-L168
        inputs:
          - type: apm
            vars:
              - name: host
                value: 0.0.0.0:8200
              - name: url
                value: https://apm-server:8200
              - name: tls_enabled
                value: true
              - name: tls_certificate
                value: /usr/share/elastic-agent/apm-server.crt
              - name: tls_key
                value: /usr/share/elastic-agent/apm-server.key
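The connection Kibana attempts (elasticsearch.hosts together with elasticsearch.ssl.certificateAuthorities) can be reproduced from the Docker host with curl (a sketch; XXXXXXXXX stands for the real kibana_system password):

# Mimic Kibana's connection to Elasticsearch: same URL, same CA, kibana_system user.
curl --cacert tls/certs/ca/ca.crt -u kibana_system:XXXXXXXXX https://watch.test.com:9200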
Update 31/03/2023
I have tried adding elasticsearch.ssl.verificationMode: none to the Kibana configuration, to check whether the certificate was the problem, but I still get the same issue.
Kibana logs:
[2023-03-31T07:48:47.752+00:00][DEBUG][metrics.ops] memory: 220.1MB uptime: 0:00:46 load: [4.02,1.91,1.20] delay histogram: { 50: 0.000; 95: 0.000; 99: 0.000 }
[2023-03-31T07:48:47.772+00:00][DEBUG][elasticsearch.query.data] [ConnectionError]: connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:47.775+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:47.776+00:00][DEBUG][elasticsearch.query.data] [ConnectionError]: connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:47.786+00:00][DEBUG][plugins.taskManager] status core.status.derivedStatus now set to critical
[2023-03-31T07:48:47.799+00:00][DEBUG][status] Recalculated core overall status
[2023-03-31T07:48:48.957+00:00][DEBUG][elasticsearch.query.data] [ConnectionError]: connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:50.137+00:00][DEBUG][elasticsearch.query.data] [ConnectionError]: connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:50.248+00:00][DEBUG][elasticsearch.query.data] [ConnectionError]: connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:51.313+00:00][DEBUG][elasticsearch.query.data] [ConnectionError]: connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:52.451+00:00][DEBUG][elasticsearch.query.data] [ConnectionError]: connect ECONNREFUSED 54.254.XXX.XXX:9200
[2023-03-31T07:48:52.843+00:00][DEBUG][metrics.ops] memory: 203.1MB uptime: 0:00:51 load: [4.42,2.03,1.24] mean delay: 19.576 delay histogram: { 50: 13.017; 95: 37.847; 99: 64.127 }
Logstash logs:
[2023-03-31T07:49:29,380][DEBUG][org.apache.http.impl.execchain.MainClientExec][main] Executing request GET / HTTP/1.1
[2023-03-31T07:49:29,383][DEBUG][org.apache.http.impl.execchain.MainClientExec][main] Target auth state: UNCHALLENGED
[2023-03-31T07:49:29,383][DEBUG][org.apache.http.impl.execchain.MainClientExec][main] Proxy auth state: UNCHALLENGED
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> GET / HTTP/1.1
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> Connection: Keep-Alive
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> Content-Type: application/json
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> Content-Length: 0
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> Host: watch.test.com:9200
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> User-Agent: Logstash/8.6.2 (OS=Linux-4.14.305-227.531.amzn2.x86_64-amd64; JVM=Eclipse Adoptium-17.0.6) logstash-output-elasticsearch/11.12.4
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> Accept-Encoding: gzip,deflate
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.headers ][main] http-outgoing-3 >> Authorization: Basic bG9nc3Rhc2hfaW50ZXJuXXXXXXXT1jbXh3Yy12RmFmeDlyOFMwbDM=
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "GET / HTTP/1.1[\r][\n]"
[2023-03-31T07:49:29,384][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]"
[2023-03-31T07:49:29,388][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "Content-Type: application/json[\r][\n]"
[2023-03-31T07:49:29,388][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "Content-Length: 0[\r][\n]"
[2023-03-31T07:49:29,388][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "Host: watch.test.com:9200[\r][\n]"
[2023-03-31T07:49:29,388][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "User-Agent: Logstash/8.6.2 (OS=Linux-4.14.305-227.531.amzn2.x86_64-amd64; JVM=Eclipse Adoptium-17.0.6) logstash-output-elasticsearch/11.12.4[\r][\n]"
[2023-03-31T07:49:29,389][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "Accept-Encoding: gzip,deflate[\r][\n]"
[2023-03-31T07:49:29,389][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "Authorization: Basic bG9nc3Rhc2hfaW50ZXJXXXXXXXXT1jbXh3Yy12RmFmeDlyOFMwbDM=[\r][\n]"
[2023-03-31T07:49:29,389][DEBUG][org.apache.http.wire ][main] http-outgoing-3 >> "[\r][\n]"
[2023-03-31T07:49:29,400][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << "HTTP/1.1 200 OK[\r][\n]"
[2023-03-31T07:49:29,401][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << "X-elastic-product: Elasticsearch[\r][\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << "content-type: application/json[\r][\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << "content-length: 540[\r][\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << "[\r][\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << "{[\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "name" : "elasticsearch",[\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "cluster_name" : "docker-cluster",[\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "cluster_uuid" : "NLHeQ64eRaXXXXYo6yvgzQ",[\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "version" : {[\n]"
[2023-03-31T07:49:29,402][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "number" : "8.6.2",[\n]"
[2023-03-31T07:49:29,403][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "build_flavor" : "default",[\n]"
[2023-03-31T07:49:29,403][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "build_type" : "docker",[\n]"
[2023-03-31T07:49:29,403][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "build_hash" : "2d58d0f136141f03239816a4e360a8d17b6d8f29",[\n]"
[2023-03-31T07:49:29,403][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "build_date" : "2023-02-13T09:35:20.314882762Z",[\n]"
[2023-03-31T07:49:29,403][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "build_snapshot" : false,[\n]"
[2023-03-31T07:49:29,403][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "lucene_version" : "9.4.2",[\n]"
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "minimum_wire_compatibility_version" : "7.17.0",[\n]"
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "minimum_index_compatibility_version" : "7.0.0"[\n]"
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " },[\n]"
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << " "tagline" : "You Know, for Search"[\n]"
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.wire ][main] http-outgoing-3 << "}[\n]"
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.headers ][main] http-outgoing-3 << HTTP/1.1 200 OK
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.headers ][main] http-outgoing-3 << X-elastic-product: Elasticsearch
[2023-03-31T07:49:29,404][DEBUG][org.apache.http.headers ][main] http-outgoing-3 << content-type: application/json
[2023-03-31T07:49:29,405][DEBUG][org.apache.http.headers ][main] http-outgoing-3 << content-length: 540
[2023-03-31T07:49:29,406][DEBUG][org.apache.http.impl.execchain.MainClientExec][main] Connection can be kept alive indefinitely
[2023-03-31T07:49:29,407][DEBUG][org.apache.http.impl.conn.PoolingHttpClientConnectionManager][main] Connection [id: 3][route: {s}->https://watch.test.com:9200] can be kept alive indefinitely
[2023-03-31T07:49:29,407][DEBUG][org.apache.http.impl.conn.DefaultManagedHttpClientConnection][main] http-outgoing-3: set socket timeout to 0
[2023-03-31T07:49:29,407][DEBUG][org.apache.http.impl.conn.PoolingHttpClientConnectionManager][main] Connection released: [id: 3][route: {s}->https://watch.test.com:9200][total available: 1; route allocated: 1 of 100; total allocated: 1 of 1000]
[2023-03-31T07:49:30,689][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"Copy"}
[2023-03-31T07:49:30,697][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"MarkSweepCompact"}