The Logstash 5.6.1 Docker image does not send logs to Elasticsearch on a different server

The Logstash Docker image is not sending logs to an Elasticsearch cluster that is running on different servers.

We are running this Docker image in Docker Toolbox on a Windows 7 PC. The Elasticsearch cluster is running on RedHat Linux servers. The Docker image needs to connect to the Elasticsearch servers via TCP/IP.

The Docker image is able to receive logs from Filebeat and write them to standard out, but it is unable to send the logs to the Elasticsearch cluster.

Using the same configuration files as the Docker image, a Logstash 5.6.0 distribution running on my PC outside of Docker can send logs to the Elasticsearch cluster. This implies that either:
1) there is an issue with the image, or
2) the Docker image needs a change to the configuration files.
I believe case 2 is the most likely.

    docker run -p 5045:5045 -p 9600:9600 --rm \
      --mount type=bind,source=/c/Users/Public/logstash/config_pipeline,destination=/usr/share/logstash/pipeline \
      --mount type=bind,source=/c/Users/Public/logstash/config,destination=/usr/share/logstash/config \
      --mount type=bind,source=/c/Users/Public/logstash/config_kafka,destination=/usr/share/logstash/config_kafka \
      --mount type=bind,source=/c/Users/Public/logstash/logs,destination=/usr/share/logstash/logs \
      >/c/Users/Public/logstash/logs/stdout_err.txt 2>&1 &

Windows command to run Logstash:

    set JAVA_HOME=C:\Program Files\Java\jre1.8.0_152

    cd C:\Users\Public\logstash\logstash-5.6.0

    bin\logstash --node.name L101CTS06E315 --http.port 9905 --path.data C:/Users/Public/logstash/data -f C:\Users\Public\logstash\config_pipeline --path.logs C:/Users/Public/logstash/logs --path.settings C:/Users/Public/logstash/config >C:/Users/Public/logstash/logs/logstash_stdout.txt 2>&1  &

logstash.yml configuration:

    path.data: /usr/share/logstash/data
    path.config: /usr/share/logstash/pipeline
    log.level: info
    path.logs: /usr/share/logstash/logs
    xpack.monitoring.elasticsearch.password: changeme
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.enabled: false
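
If X-Pack security is enabled on the cluster, note that the xpack.monitoring.* settings above only authenticate the monitoring connection; the elasticsearch output in the pipeline needs its own credentials via its user and password options. A sketch with placeholder values ("logstash_writer" is a hypothetical user name, and the empty host string mirrors the redacted hosts elsewhere in this post):

```
output {
  elasticsearch {
    hosts    => [""]
    user     => "logstash_writer"
    password => "changeme"
  }
}
```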

Pipeline configuration:

    # The # character at the beginning of a line indicates a comment. Use
    # comments to describe your configuration.
    input {
      beats {
        port => "5045"
        #port => "5055"
        type => syslog
      }

      # tcp {
      #    port => "5045"
      #    type => syslog
      # }
      #tcp {
      #    port => "514"
      #    type => syslog
      #}
    }

    # The filter part of this file is commented out to indicate that it is
    # optional.
    filter {
      grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
      }
      date {
        match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      ruby {
        code => "event.set('logstash_1_received_time', Time.now.strftime('%FT%T.%L'))"
      }
      mutate {
        add_field => [ "logstash_1_server", "albert_pc" ]
      }
    }

    output {
      # stdout { codec => rubydebug }
      elasticsearch {
        hosts => [""]
        # hosts => [ "", "", "" ]
        # index => "monatee_loggy_tpc_1-%{+YYYY.MM.dd}"
      }

      #kafka {
      #  bootstrap_servers          => ",,"
      #  topic_id                   => "monatee_loggy_tpc"
      #  jaas_path                  => "/opt/pki/logstash_config_kafka/kafka_client_jaas_logstash.conf"
      #  security_protocol          => "SASL_PLAINTEXT"
      #  sasl_kerberos_service_name => "kafka"
      #  sasl_mechanism             => "PLAIN"
      #
      #  message_key => plain {
      #     format => "%{beats_message_key}"
      #     id     => "loggy_kafka_output_message_key"
      #  }
      #
      #  codec => plain {
      #    format => "%{logstash_1_received_time} %{logstash_1_server} %{message} %{ip} %{netmask} %{subnet} %{partition} %{beats_message_key}"
      #    id     => "loggy_kafka_output_codec"
      #  }
      #}
    }

Have you looked in the Logstash log for clues? Comment out your elasticsearch output and un-comment your stdout output. Are you getting any events in the log?

I had un-commented the standard-out logging before submitting this topic. I did not see anything noteworthy in the Logstash logs.

I will put the standard out back and take another look at the logs.

I suspect that the issue has something to do with the X-Pack authorization settings.

The Elasticsearch cluster is version 5.5.

Standard out is printing the logs, but nothing is being sent to the Elasticsearch cluster.
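
Since stdout is printing events but nothing reaches the cluster, it may be worth ruling out basic TCP reachability from wherever Logstash runs (the Docker Toolbox VM adds its own network layer between Windows and the container). A minimal Python sketch, assuming you substitute a real Elasticsearch host and port for the hypothetical one shown:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical host name -- replace with a real Elasticsearch node:
# print(can_connect("es-node-1.example.com", 9200))
```

Running the same check from the Windows host and from inside the Docker Toolbox VM would show whether the problem is Docker networking rather than the Logstash configuration.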

Here is a copy of the Logstash logs. It looks typical of a healthy application.

[2018-02-07T13:43:23,281][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-02-07T13:43:23,298][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-02-07T13:43:23,352][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.6.1-java/modules/arcsight/configuration"}
[2018-02-07T13:43:23,375][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2018-02-07T13:43:23,377][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2018-02-07T13:43:23,436][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"21c2b704-a179-4fe1-9726-8ec6ad0bb85f", :path=>"/usr/share/logstash/data/uuid"}
[2018-02-07T13:43:25,163][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[]}}
[2018-02-07T13:43:25,170][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>, :path=>"/"}
[2018-02-07T13:43:25,441][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>""}
[2018-02-07T13:43:25,457][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-02-07T13:43:25,543][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-02-07T13:43:25,589][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//"]}
[2018-02-07T13:43:25,709][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2018-02-07T13:43:26,797][INFO ][ ] Beats inputs: Starting input listener {:address=>""}
[2018-02-07T13:43:27,116][INFO ][logstash.pipeline ] Pipeline main started
[2018-02-07T13:43:27,142][INFO ][] Starting server on port: 5045
[2018-02-07T13:43:27,241][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Output of one log message that went to standard out

    {
        "source" => "/var/log/messages",
        "type" => "log",
        "log_type" => "syslog",
        "syslog_timestamp" => "Feb 7 03:15:01",
        "@version" => "1",
        "beat" => {
            "name" => "r00j9un0c",
            "hostname" => "r00j9un0c",
            "version" => "5.5.2"
        },
        "host" => "r00j9un0c",
        "logstash_1_server" => "albert_pc",
        "beats_message_key" => 106160,
        "app" => "monitoring",
        "logstash_1_received_time" => "2018-02-07T13:44:51.339",
        "offset" => 71,
        "input_type" => "log",
        "datacenter" => "tpc",
        "message" => "Feb 7 03:15:01 r00j9un0c systemd: Started Session 35524 of user root.",
        "env" => "dev",
        "syslog_message" => "systemd: Started Session 35524 of user root.",
        "tags" => [
            [0] "beats_input_codec_plain_applied"
        ],
        "@timestamp" => 2018-02-07T03:15:01.000Z,
        "syslog_hostname" => "r00j9un0c",
        "service" => "loggy",
        "family" => "pki"
    }

How do you know nothing ends up in ES?

I am using Kibana to view the data. When I run within Docker, I do not see data in Kibana.

When I run Logstash outside of Docker, I see data in Kibana.

I am using the same pipeline configuration in both cases.
