Filebeat to send logs to Logstash and ES

Hello everyone,

So I have Filebeat configured on an Apache server (an AWS EC2 instance), and another EC2 instance running Logstash and Elasticsearch. I can ship log files from the Apache instance to Logstash, but I can only display them on the console. How can I forward those logs to Elasticsearch so I can actually search through them? I can't figure it out. I also have Kibana set up, and I need to view all the logs from the Apache server as well as a bunch of other servers.

Any help or guidance is GREATLY APPRECIATED!

Cheers!

This shouldn't be hard: in your Logstash configuration, just add an Elasticsearch output. The index name comes from the field you set under Filebeat's fields section (here, a field named index).

output {
  elasticsearch {
    hosts => [
      "elasticsearch01.example.com:9200",
      "elasticsearch02.example.com:9200" ]
    index => "%{[fields][index]}"
  }
}

OK, after I do that, how can I know for sure that Logstash and Elasticsearch received the files?

Wait, just to double check: what do you mean by the index/fields in Filebeat? In my Filebeat output config it just goes to the Logstash server on port 5044; I didn't specify any fields.

fields

Optional fields that you can specify to add additional information to the output.

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#filebeat-input-log-fields

fields:
  index: myindexname
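One detail worth knowing (based on Filebeat's default behavior): custom fields are nested under a top-level fields key unless you set fields_under_root: true, so in Logstash you reference them as %{[fields][index]} rather than %{[index]}:

```yaml
fields:
  index: myindexname
# With the default fields_under_root: false, each event carries
# { "fields": { "index": "myindexname" } }, so Logstash sees %{[fields][index]}.
# Set fields_under_root: true if you want a top-level %{[index]} field instead.
```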

OK, but where is myindexname configured? Is it done in Filebeat first, or what exactly? Honestly I'm very confused and I can't find any decent tutorials or guides on this. Thank you for your help!

Bit late to respond, but... anyway...

myindexname is just a name you come up with. If you collect system logs, you can call your index syslog. If you collect all kinds of logs, you can call it logs or logstash.

Tiny tutorial

You need to set up Filebeat and Logstash. Their default configuration files are /etc/filebeat/filebeat.yml and /etc/logstash/logstash.yml.

For starters, you can delete everything from these files, because 95% of their content is comments anyway. If you need the default contents later (e.g. the explanations), they're all out there on GitHub.

Now in Filebeat's yml (still /etc/filebeat/filebeat.yml) you need to specify an input and an output, e.g. Logstash if you fancy that:

filebeat.inputs:
- type: log
  paths:
    - /var/log/messages
  fields:
    index: syslog

output.logstash:
  hosts: ["mylogstash.example.com:5044"]

That's all for a very very basic FB setup.
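Before restarting, Filebeat can validate its own configuration and check connectivity to the output (these are the standard filebeat test subcommands; the config path assumes a package install):

```shell
# Validate the syntax of the Filebeat configuration file
filebeat test config -c /etc/filebeat/filebeat.yml

# Check that Filebeat can actually reach the configured Logstash output
filebeat test output -c /etc/filebeat/filebeat.yml
```

Both commands exit non-zero on failure, which makes them handy in provisioning scripts too.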

In Logstash's yml (at /etc/logstash/logstash.yml) specify the data and log directories:

path.data: /var/lib/logstash
path.logs: /var/log/logstash

Create a config file for your inputs/outputs (on package installs these live under /etc/logstash/conf.d/), e.g.

input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => [
      "elasticsearch01.example.com:9200",
      "elasticsearch02.example.com:9200" ]
    index => "%{[fields][index]}"
  }
}
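You can also ask Logstash to parse a pipeline file and exit before you commit to a restart (using the --config.test_and_exit flag; the binary location assumes a package install, and the config filename is just an example):

```shell
# Parse the pipeline config and exit; reports success or the exact syntax error
/usr/share/logstash/bin/logstash --config.test_and_exit \
  -f /etc/logstash/conf.d/beats.conf
```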

Again, this is quite a basic approach, but it should work as soon as you restart Filebeat and Logstash. Your server's /var/log/messages will end up in a syslog index.
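As for knowing whether Elasticsearch actually received anything (this was asked earlier in the thread): you can query it directly with curl. The hostname and port here assume the Elasticsearch defaults:

```shell
# List all indices with document counts; a "syslog" index should
# appear once events start flowing
curl 'http://localhost:9200/_cat/indices?v'

# Peek at a few documents from the syslog index
curl 'http://localhost:9200/syslog/_search?size=3&pretty'
```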

Be aware that a single syslog index will not be sufficient once you start collecting logs from many servers. There are techniques to handle this (e.g. daily/weekly/monthly indices, aliasing/rollover, etc.); the Elastic documentation covers them.
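For instance, the Elasticsearch output's index option can embed the event timestamp, giving you one index per day. A sketch, reusing the Filebeat field from the example above:

```
output {
  elasticsearch {
    hosts => ["elasticsearch01.example.com:9200"]
    # One index per day, e.g. syslog-2019.05.14
    index => "%{[fields][index]}-%{+YYYY.MM.dd}"
  }
}
```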
