Can't get Filebeat log data into Logstash container. Version 7.3.2

I have a single CentOS 7 server (let's call it DOCKERHOST, with an IP address of 1.2.3.4 - I am not using the actual IP addresses). It hosts multiple Docker containers that include almost all of the Elastic Stack.

docker.elastic.co/logstash/logstash:7.3.2 "/usr/local/bin/dock…" 0.0.0.0:5044->5044/tcp, 0.0.0.0:9600->9600/tcp logstash
docker.elastic.co/apm/apm-server:7.3.2 "/usr/local/bin/dock…" 0.0.0.0:8200->8200/tcp apm-server
docker.elastic.co/beats/packetbeat:7.3.2 "/usr/local/bin/dock…" packetbeat
docker.elastic.co/beats/heartbeat:7.3.2 "/usr/local/bin/dock…" heartbeat
docker.elastic.co/beats/metricbeat:7.3.2 "/usr/local/bin/dock…" metricbeat
docker.elastic.co/kibana/kibana:7.3.2 "/usr/local/bin/dumb…" kibana
docker.elastic.co/elasticsearch/elasticsearch:7.3.2 "/usr/local/bin/dock…" 0.0.0.0:9200->9200/tcp, 9300/tcp elasticsearch

Everything appears to be working fine, with the exception of Logstash. There is a Logstash container on the DOCKERHOST machine. I also have a Filebeat RPM package installed on a separate CentOS 7 machine: a web server named "web-jrzv" with an IP address of 5.6.7.8, which also runs Metricbeat. Metricbeat is fine, but it sends its data directly to Elasticsearch. My goal is to have Filebeat scrape logs and ship the data to Logstash on DOCKERHOST; Logstash should then store the data in Elasticsearch.
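
A basic check I can run first, from web-jrzv, is whether the Logstash port on DOCKERHOST is reachable at all (this assumes nc, from the nmap-ncat package on CentOS 7, is installed):

# Test raw TCP connectivity from the web server to the Logstash beats port.
# Success only proves the port is open, not that the Beats protocol handshake works.
nc -zv 1.2.3.4 5044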

I go into Kibana and check for syslog data from web-jrzv. I don't see anything listed:

[Screenshot: Kibana showing no syslog data from web-jrzv - Screen Shot 2019-10-02 at 9.26.47 AM.png]

I am not sure what else to do. It must be a configuration issue, and I suspect Logstash; I just don't know where.

Here is the filebeat.yml on the web server:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /var/log/messages
    - /synthesys/current/logs/server.log
    - /synthesys/csurv-ntier-web-4.8.0/logs/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  exclude_files: ['.gz$']

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

name: "synthesis_web_server"

#============================== Kibana =====================================

setup.kibana:
  # Kibana Host
  host: "1.2.3.4:5601"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["1.2.3.4:9200"]

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["1.2.3.4:5044"]

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
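
For reference, Filebeat only supports one active output at a time, so to ship to Logstash instead of Elasticsearch the Outputs section would need to look roughly like this (a minimal sketch based on the stock config, with the Elasticsearch output commented out):

#-------------------------- Elasticsearch output ------------------------------
# Disabled: Filebeat refuses to start with two outputs enabled at once.
#output.elasticsearch:
  #hosts: ["1.2.3.4:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash host and beats port on DOCKERHOST.
  hosts: ["1.2.3.4:5044"]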

When I tail the /var/log/messages file on the web server:

Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.269Z#011DEBUG#011[registrar]#011registrar/registrar.go:346#011Registrar states cleaned up. Before: 12, After: 12, Pending: 0
Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.269Z#011DEBUG#011[registrar]#011registrar/registrar.go:411#011Write registry file: /var/lib/filebeat/registry/filebeat/data.json (12)
Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.272Z#011DEBUG#011[registrar]#011registrar/registrar.go:404#011Registry file updated. 12 states written.
Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.272Z#011DEBUG#011[registrar]#011registrar/registrar.go:356#011Processing 1 events
Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.272Z#011DEBUG#011[registrar]#011registrar/registrar.go:326#011Registrar state updates processed. Count: 1
Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.272Z#011DEBUG#011[registrar]#011registrar/registrar.go:346#011Registrar states cleaned up. Before: 12, After: 12, Pending: 0
Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.272Z#011DEBUG#011[registrar]#011registrar/registrar.go:411#011Write registry file: /var/lib/filebeat/registry/filebeat/data.json (12)
Oct 2 15:33:36 web-jrzv filebeat: 2019-10-02T15:33:36.276Z#011DEBUG#011[registrar]#011registrar/registrar.go:404#011Registry file updated. 12 states written.

So I -THINK- it is trying to talk to Logstash. I am seeing Metricbeat data from this same server, but Metricbeat sends its output directly to Elasticsearch.
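
The registrar lines above only show registry writes, not the output connection, so to see which output Filebeat is actually using and whether it can connect, I can run its built-in output test on web-jrzv (a standard 7.x subcommand):

# Dials whatever output is enabled in /etc/filebeat/filebeat.yml and reports
# DNS resolution, TCP connect, and handshake results.
filebeat test output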

On the Logstash container, here is the logstash.docker.yml

pipeline.batch.size: 125
pipeline.batch.delay: 50
log.format: json
node.name: logstash

Here is the logstash.conf

[root@elasticapm pipeline]# cat logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "1.2.3.4:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

My understanding is that this means Logstash accepts input from the web server's Filebeat on port 5044 and sends the output to Elasticsearch on port 9200.
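
If events were flowing, that index pattern means an event from Filebeat 7.3.2 on October 2 would land in an index named filebeat-7.3.2-2019.10.02, so I can check for it with the standard cat API:

# List any Filebeat indices; no rows means nothing has come through Logstash yet.
curl 'http://1.2.3.4:9200/_cat/indices/filebeat-*?v'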

When I start the Logstash Docker container, here is what I run:

docker run -d -p 9600:9600 -p 5044:5044 --rm --name=logstash --link elasticsearch:elasticsearch --network elastic-network -v $PWD/logstash.docker.yml:/usr/share/logstash/config/logstash.yml -v $PWD/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:7.3.2

This should allow it to accept input on port 5044, and it mounts my yml and conf files into the container, overriding the defaults.
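
To confirm the mapping actually took effect on DOCKERHOST, standard Docker and iproute2 commands can show what is listening:

# Show the host ports Docker published for the container...
docker port logstash
# ...and confirm the host is listening on 5044.
ss -tlnp | grep 5044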

docker logs logstash:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
{"level":"INFO","loggerName":"logstash.setting.writabledirectory","timeMillis":1570031097370,"thread":"main","logEvent":{"message":"Creating directory","setting":"path.queue","path":"/usr/share/logstash/data/queue"}}
{"level":"INFO","loggerName":"logstash.setting.writabledirectory","timeMillis":1570031097460,"thread":"main","logEvent":{"message":"Creating directory","setting":"path.dead_letter_queue","path":"/usr/share/logstash/data/dead_letter_queue"}}
{"level":"INFO","loggerName":"logstash.runner","timeMillis":1570031097806,"thread":"LogStash::Runner","logEvent":{"message":"Starting Logstash","logstash.version":"7.3.2"}}
{"level":"INFO","loggerName":"logstash.agent","timeMillis":1570031097825,"thread":"LogStash::Runner","logEvent":{"message":"No persistent UUID file found. Generating new UUID","uuid":"7062e2e7-ed1c-4865-b482-97f8faecd6f4","path":"/usr/share/logstash/data/uuid"}}
{"level":"INFO","loggerName":"org.reflections.Reflections","timeMillis":1570031098759,"thread":"Converge PipelineAction::Create","logEvent":{"message":"Reflections took 31 ms to scan 1 urls, producing 19 keys and 39 values "}}
{"level":"INFO","loggerName":"logstash.outputs.elasticsearch","timeMillis":1570031099634,"thread":"[main]-pipeline-manager","logEvent":{"message":"Elasticsearch pool URLs updated","changes":{"added":[{"metaClass":{"metaClass":{"metaClass":{"changes":"{:removed=>, :added=>[http://1.2.3.4:9200/]}"}}}}]}}}
{"level":"WARN","loggerName":"logstash.outputs.elasticsearch","timeMillis":1570031099900,"thread":"[main]-pipeline-manager","logEvent":{"message":"Restored connection to ES instance","url":"http://10.160.65.33:9200/"}}
{"level":"INFO","loggerName":"logstash.outputs.elasticsearch","timeMillis":1570031099935,"thread":"[main]-pipeline-manager","logEvent":{"message":"ES Output version determined","es_version":7}}
{"level":"WARN","loggerName":"logstash.outputs.elasticsearch","timeMillis":1570031099937,"thread":"[main]-pipeline-manager","logEvent":{"message":"Detected a 6.x and above cluster: the type event field won't be used to determine the document _type","es_version":7}}
{"level":"INFO","loggerName":"logstash.outputs.elasticsearch","timeMillis":1570031099952,"thread":"[main]-pipeline-manager","logEvent":{"message":"New Elasticsearch output","class":"LogStash::Outputs::ElasticSearch","hosts":["//1.2.3.4:9200"]}}
{"level":"INFO","loggerName":"logstash.outputs.elasticsearch","timeMillis":1570031100029,"thread":"Ruby-0-Thread-5: :1","logEvent":{"message":"Using default mapping template"}}
{"level":"WARN","loggerName":"org.logstash.instrument.metrics.gauge.LazyDelegatingGauge","timeMillis":1570031100053,"thread":"[main]-pipeline-manager","logEvent":{"message":"A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team."}}
{"level":"INFO","loggerName":"logstash.javapipeline","timeMillis":1570031100056,"thread":"[main]-pipeline-manager","logEvent":{"message":"Starting pipeline","pipeline_id":"main","pipeline.workers":8,"pipeline.batch.size":125,"pipeline.batch.delay":50,"pipeline.max_inflight":1000,"thread":"#<Thread:0x161fa23e run>"}}
{"level":"INFO","loggerName":"logstash.outputs.elasticsearch","timeMillis":1570031100113,"thread":"Ruby-0-Thread-5: :1","logEvent":{"message":"Attempting to install template","manage_template":{"index_patterns":"logstash-","version":60001,"settings":{"index.refresh_interval":"5s","number_of_shards":1},"mappings":{"dynamic_templates":[{"message_field":{"path_match":"message","match_mapping_type":"string","mapping":{"type":"text","norms":false}}},{"string_fields":{"match":"","match_mapping_type":"string","mapping":{"type":"text","norms":false,"fields":{"keyword":{"type":"keyword","ignore_above":256}}}}}],"properties":{"@timestamp":{"type":"date"},"@version":{"type":"keyword"},"geoip":{"dynamic":true,"properties":{"ip":{"type":"ip"},"location":{"type":"geo_point"},"latitude":{"type":"half_float"},"longitude":{"type":"half_float"}}}}}}}}
{"level":"INFO","loggerName":"logstash.inputs.beats","timeMillis":1570031100547,"thread":"[main]-pipeline-manager","logEvent":{"message":"Beats inputs: Starting input listener","address":"0.0.0.0:5044"}}
{"level":"INFO","loggerName":"logstash.javapipeline","timeMillis":1570031100557,"thread":"[main]-pipeline-manager","logEvent":{"message":"Pipeline started","pipeline.id":"main"}}
{"level":"INFO","loggerName":"org.logstash.beats.Server","timeMillis":1570031100628,"thread":"[main]<beats","logEvent":{"message":"Starting server on port: 5044"}}
{"level":"INFO","loggerName":"logstash.agent","timeMillis":1570031100628,"thread":"Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6","logEvent":{"message":"Pipelines running","count":1,"running_pipelines":[{"metaClass":{"metaClass":{"metaClass":{"running_pipelines":"[:main]","non_running_pipelines":}}}}]}}
{"level":"INFO","loggerName":"logstash.agent","timeMillis":1570031100827,"thread":"Api Webserver","logEvent":{"message":"Successfully started Logstash API endpoint","port":9600}}

If I go to http://1.2.3.4:5044 or http://1.2.3.4:9600 I get an empty response in the browser.
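
An empty response on 5044 is expected: that port speaks the binary Beats protocol rather than HTTP. Port 9600 is the Logstash monitoring API and should return JSON; its pipeline stats endpoint can show whether the beats input has received any events:

# Per-pipeline stats; if Filebeat were connecting, the events in/out counters
# for the main pipeline would climb over time.
curl -s 'http://1.2.3.4:9600/_node/stats/pipelines?pretty'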

On the web server, I have run:

filebeat modules enable system

I get back:

"Module system is already enabled"

If I bypass Logstash and send Filebeat on the web server directly to Elasticsearch, I start seeing the logs being parsed in Kibana, so it is definitely a Logstash issue.
