Configuring Filebeat to send logs to a remote Logstash (ELK)

Hello,

This is my first time using Filebeat; I have picked up some basic knowledge of the ELK stack from online tutorials.

Basically, I would like to ship log data from a web-tier instance to an ELK server instance on Amazon Web Services (AWS) EC2.

First, I set up my Tomcat server on a web-tier instance (AWS EC2); Tomcat has generated several .txt log files, which are stored in /opt/tomcat/logs/.

I installed Filebeat following the instructions at https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html; my filebeat.yml is:


filebeat.prospectors:
- type: log
  paths:
    - /opt/tomcat/logs/*.txt
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    
#== Filebeat modules ==

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false
  
#== Elasticsearch template setting ==

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
  
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts; elk_server_ip is the public IP address of the ELK server instance
  hosts: ["elk_server_ip:5044"]
  index: filebeat_logs
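
For reference, Filebeat 6.x can sanity-check this file and the connection to Logstash before the service is started; a minimal sketch, assuming the default deb/rpm config path:

# Validate filebeat.yml syntax
sudo filebeat test config -c /etc/filebeat/filebeat.yml

# Check that the host in output.logstash is reachable on port 5044
sudo filebeat test output -c /etc/filebeat/filebeat.yml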

Next, I installed Kibana, Logstash, and Elasticsearch on another AWS EC2 instance, and I opened the following ports in its security group:
| Port | Protocol | Source | Enabled |
|------|----------|-----------|---|
| 5044 | tcp | 0.0.0.0/0 | ✔ |
| 5601 | tcp | 0.0.0.0/0 | ✔ |
| 9200 | tcp | 0.0.0.0/0 | ✔ |
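
For reference, the same inbound rules can be created from the AWS CLI; a minimal sketch, where sg-0123456789abcdef0 is a placeholder security-group ID:

# Open the Beats port; repeat with --port 5601 (Kibana) and 9200 (Elasticsearch)
# Note: 0.0.0.0/0 opens the port to any IP; a narrower source CIDR is safer.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5044 --cidr 0.0.0.0/0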

And then I set up a config file for Logstash:

input {
  beats {
    port => 5044
  }
}
filter {
  # Extract user_id, latitude, and longitude from the request URI
  grok {
    match => {
      "message" => "/EventAdv/search\?user_id=%{NUMBER:user_id}&lat=%{NUMBER:latitude}&lon=%{NUMBER:longitude}"
    }
  }
  # Drop any event that did not match the pattern above
  if "_grokparsefailure" in [tags] {
    drop { }
  }
  # Build [geoip][location] as a [longitude, latitude] array,
  # the order Elasticsearch expects for a geo_point array
  mutate {
    add_field => ["[geoip][location]", "%{[longitude]}"]
    add_field => ["[geoip][location]", "%{[latitude]}"]
  }
  mutate {
    convert => ["[geoip][location]", "float"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "localhost"
  }
}
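
For what it's worth, a config like this can be syntax-checked before starting the pipeline; a minimal sketch, assuming the same install path:

sudo /usr/share/logstash/bin/logstash -f eventadv.conf --config.test_and_exit

(If X-Pack basic is enabled, the grok pattern can also be tried against a sample log line in Kibana's Grok Debugger under Dev Tools.)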

After I started Logstash (sudo /usr/share/logstash/bin/logstash -f eventadv.conf), the startup seemed to get stuck, and I got the following logs:

Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-02-24 22:46:14.724 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-02-24 22:46:14.732 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-02-24 22:46:15.201 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-02-24 22:46:15.406 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.2"}
[INFO ] 2018-02-24 22:46:15.537 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-02-24 22:46:16.430 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-02-24 22:46:16.641 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[INFO ] 2018-02-24 22:46:16.644 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[WARN ] 2018-02-24 22:46:16.729 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2018-02-24 22:46:16.886 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>nil}
[WARN ] 2018-02-24 22:46:16.886 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-02-24 22:46:16.887 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-02-24 22:46:16.890 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-02-24 22:46:16.903 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[INFO ] 2018-02-24 22:46:17.319 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2018-02-24 22:46:17.402 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2264daf8@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
[INFO ] 2018-02-24 22:46:17.423 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2018-02-24 22:46:17.442 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}

Could you tell me what I missed or what I set up wrong?

Thank you so much!

Henry

All I see in the logs is Logstash starting up; the Beats config looks good. Have you checked the Filebeat logs for errors?
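
With a deb/rpm install, the Filebeat log should be under /var/log/filebeat/; running Filebeat in the foreground with publisher debug output is another way to watch events leave the box:

# Follow the Filebeat log file (path assumes a deb/rpm install)
sudo tail -f /var/log/filebeat/filebeat

# Or run in the foreground, logging to stderr with publisher debug output
sudo filebeat -e -d "publish" -c /etc/filebeat/filebeat.yml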

If you think the Filebeat → Logstash connection is the problem, then test only these two components. E.g., have Logstash run only this config:

input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
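
If events arrive, they will be printed to stdout by the rubydebug codec. It is also worth checking whether anything reached Elasticsearch at all; assuming it listens on the default port:

# List all indices; a logstash-* index with a non-zero docs.count means data arrived
curl 'localhost:9200/_cat/indices?v'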
