We have a working Logstash pipeline configuration file that we are porting to Docker. We are seeing the following warnings in the logs. We do not have any security configured on our Elasticsearch cluster.
What is very puzzling is that Logstash is trying to reach an Elasticsearch instance on my PC via localhost, even though the pipeline configuration specifies hosts => ["monatee-loggy-master-tpc.dev.bnymellon.net:80"]. Why would Logstash try to connect to Elasticsearch locally?
Docker Command
docker run --rm \
  --mount type=bind,source=/c/Users/Public/logstash/config_pipeline,destination=/usr/share/logstash/pipeline \
  --mount type=bind,source=/c/Users/Public/logstash/config,destination=/usr/share/logstash/config \
  --mount type=bind,source=/c/Users/Public/logstash/config_kafka,destination=/usr/share/logstash/config_kafka \
  --mount type=bind,source=/c/Users/Public/logstash/logs,destination=/usr/share/logstash/logs \
  docker.elastic.co/logstash/logstash:6.1.2
Listing of Directories and Files
drwxr-xr-x 1 xeccgtt 1052689 0 Jan 23 13:21 /c/Users/Public/
drwxr-xr-x 1 xeccgtt 1052689 0 Jan 23 13:46 /c/Users/Public/logstash/
drwxr-xr-x 1 xeccgtt 1052689 0 Jan 23 13:39 /c/Users/Public/logstash/config/
drwxr-xr-x 1 xeccgtt 1052689 0 Jan 23 13:07 /c/Users/Public/logstash/config_kafka/
-rw-r--r-- 1 xeccgtt 1052689 133 Nov 13 08:28 /c/Users/Public/logstash/config_kafka/kafka_client_jaas_logstash.conf
drwxr-xr-x 1 xeccgtt 1052689 0 Jan 23 11:57 /c/Users/Public/logstash/config_pipeline/
-rw-r--r-- 1 xeccgtt 1052689 1743 Jan 23 16:30 /c/Users/Public/logstash/config_pipeline/pipeline.conf
drwxr-xr-x 1 xeccgtt 1052689 0 Jan 23 15:25 /c/Users/Public/logstash/logs/
Logs:
[2018-01-23T21:14:35,208][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-01-23T21:14:35,216][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-01-23T21:14:35,362][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
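Note that the second health check comes from `logstash.licensechecker.licensereader`, not from the pipeline's elasticsearch output. In Logstash 6.x the X-Pack monitoring/license checker has its own Elasticsearch endpoint, configured in logstash.yml (which here is mounted from /c/Users/Public/logstash/config) and defaulting to http://localhost:9200 regardless of what the pipeline output specifies. A sketch of the relevant logstash.yml settings, assuming the 6.x X-Pack setting names:

```yaml
# /usr/share/logstash/config/logstash.yml (sketch, not our actual file)
# The monitoring client has its own ES endpoint, separate from the
# pipeline's elasticsearch output. Either point it at the real cluster:
xpack.monitoring.elasticsearch.url: "http://monatee-loggy-master-tpc.dev.bnymellon.net:80"
# ...or disable X-Pack monitoring entirely:
# xpack.monitoring.enabled: false
```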
Pipeline Configuration
input {
  kafka {
    bootstrap_servers => "rsomtapae182.bnymellon.net:9092,rsomtapae183.bnymellon.net:9092,rsomtapae184.bnymellon.net:9092"
    topics => [ "monatee_loggy_tpc" ]
    jaas_path => "/usr/share/logstash/config_kafka/kafka_client_jaas_logstash.conf"
    security_protocol => "SASL_PLAINTEXT"
    sasl_kerberos_service_name => "kafka"
    sasl_mechanism => "PLAIN"
    group_id => "monatee_loggy_tpc"
    decorate_events => true
    codec => json
    client_id => "loggy_albert_pc"  # quoted; a bare word is not a valid config string
    add_field => { "filebeat_timestamp" => "%{@timestamp}" }
  }
}
The filter section of this file is optional; in our actual file it is commented out.
filter {
  grok {
    # The [ ] around the pid must be escaped to match literal brackets.
    match => { "message" => "%{SYSLOG5424PRI}*%{SYSLOGTIMESTAMP:syslog_time} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
  date {
    # "MMM  d" (two spaces) matches single-digit days in syslog timestamps.
    match => [ "syslog_time",
               "MMM  d HH:mm:ss",
               "MMM dd HH:mm:ss",
               "ISO8601" ]
  }
  ruby {
    code => "event.set('logstash_2_received_time', Time.now.utc.strftime('%FT%T.%L'))"
  }
  mutate {
    add_field => { "logstash_2_server" => "albert_pc" }
  }
}
output {
  # stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["monatee-loggy-master-tpc.dev.bnymellon.net:80"]
    index => "monatee_loggy_syslog_tpc-%{+YYYY.MM.dd}"
    document_type => "syslog"
  }
}
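As an aside on the grok line above: grok patterns compile down to regular expressions, so the square brackets around the pid capture must be escaped to match literal brackets in the log line. A rough illustration in plain Python regex (not Logstash itself; the pattern and sample line are made up for demonstration):

```python
import re

# Approximate regex equivalent of the syslog grok pattern: timestamp,
# hostname, program, optional literal-bracketed pid, then the message.
pattern = re.compile(
    r"(?P<time>[A-Z][a-z]{2}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<program>[\w./-]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)"
)

line = "Jan 23 21:14:35 rsomtapae182 sshd[4242]: Accepted publickey for xeccgtt"
m = pattern.match(line)
print(m.group("program"), m.group("pid"))  # sshd 4242
```

Without the escapes, `[...]` is a character class and the pid would never be captured as a bracketed field.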