I'm new to ELK and I want to connect my ELK stack with Filebeat in Docker. I have already created it in Docker and it is generating records for me, because I can see them in Kibana.
Now I want to connect multiple servers running Apache, which will be the clients, to this Filebeat that, as I said, is in Docker and works correctly.
I guess I have to install Filebeat on every client I want to have and connect it to the server, but how do I configure the filebeat.yml file?
Can I connect them even though one of the Filebeats is in Docker and the other is not?
As for the server, do I just have to modify the filebeat.yml file? Or the docker-compose file?
I have looked at several guides, but they have not made it clear to me.
Thanks
Filebeat needs access to the Apache log files in order to collect them, so you somehow have to make Apache's logs available to Filebeat.
How is Apache running? As a container or natively on the host?
If it runs natively on the host, you can just mount its logging directory inside the Filebeat container and point Filebeat at that directory.
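For example, a minimal sketch of that mount, assuming Apache writes its logs to /var/log/apache2 on the host (the paths and image tag are examples only, adjust them to your setup):

docker-compose.yml (excerpt)

services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.17.0
    volumes:
      # Make the host's Apache logs visible inside the container, read-only
      - /var/log/apache2:/var/log/apache2:ro

filebeat.yml (excerpt)

filebeat.inputs:
  - type: filestream
    enabled: true
    paths:
      # The path as seen from inside the Filebeat container
      - /var/log/apache2/*.log

Filebeat inside the container can then read the mounted files as if they were local.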
I was wrong about Filebeat, I only have it on my client.
Yes, as I explained, I have ELK running in a container on a server. I want to connect it to multiple clients so that they collect some Apache logs.
On the client I have installed Filebeat, and I have a connection to the server and to Logstash, because it answers telnet on port 5044.
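(That is, from the client something like telnet 192.168.14.66 5044 connects successfully.)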
But Kibana does not show the logs that I point to in the client's filebeat.yml file.
I don't know if I have to change something in the server's logstash.conf file, so I'll paste both files as I have them, in case that helps.
To answer your question: Apache is on the client.
If you need more conf files, I can share whatever you need.
Thank you very much, regards
filebeat.yml
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /path/to/log-1.log
    #input_type: log
    #document_type: syslog
    - /server/www/html/miweb/logs/*.log

# filestream is an experimental input. It is going to replace log input in the future.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /server/www/html/miweb/logs/*.log
    #- c:\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.14.66:5044"]
  #bulk_max_size: 1024
  #index: filebeat
  #tls:
  # certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
logstash.conf
input {
  beats {
    port => 5044
  }

  tcp {
    port => 5000
  }
}

## Add your filters / logstash plugins configuration here

output {
  if filebeat == "192.168.14.15" {
    elasticsearch {
      hosts => "localhost:9200"
      user => "elastic"
      password => "changeme"
      ecs_compatibility => disabled
      manage_template => false
      index => "logsclient-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
Can you run Filebeat in debug mode and check whether logs are collected from the Apache logs directory? By the way, did you enable the apache module? From the config you sent (filebeat.yml) it's not clear whether you actually enable any modules.
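For reference, with the standalone binary that would be something like this, run from inside the Filebeat directory:

# Enable the stock Apache module (activates modules.d/apache.yml)
./filebeat modules enable apache

# Run in the foreground with debug output for all selectors
./filebeat -e -d "*"

And since your logs are in a non-default directory, you would point the module at them in modules.d/apache.yml, roughly like this (the directory comes from your config above; the access.log name is an assumption):

- module: apache
  access:
    enabled: true
    var.paths: ["/server/www/html/miweb/logs/access.log*"]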
If Filebeat is properly configured and able to collect logs from Apache, then we need to check whether it can send the events to Logstash. For this you will need to look at Filebeat's and Logstash's logs to see whether any errors occur (mostly network errors).
Let's start with these steps and keep iterating on this.
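Also, one thing I noticed in your logstash.conf: the condition if filebeat == "192.168.14.15" will not work, because Logstash conditionals reference event fields in square brackets; a bare word like filebeat does not resolve to a field. A host-based condition usually looks more like the sketch below ([host][name] and the "client1" value are assumptions, check a real event in Kibana for the exact field names your events carry):

output {
  # Field references use square-bracket notation
  if [host][name] == "client1" {
    elasticsearch {
      hosts => "localhost:9200"
      user => "elastic"
      password => "changeme"
      index => "logsclient-%{+YYYY.MM.dd}"
    }
  }
}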
I'm a little bit confused about how you run Filebeat. Is it a Linux service, a single binary, or running in a container? Could you please provide full information about your setup/environment?
OK, so you followed the Linux archive version of the installation guide. I'm afraid this does not install it as a Linux service. The RPM and DEB packages do install it as a Linux service, since the package is actually an installer that creates the service on your system. With your approach you just unpack Filebeat's artifacts folder.
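For example, on a Debian/Ubuntu client the DEB route is roughly this (the version number is only an example, pick the one matching your stack):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.0-amd64.deb
sudo dpkg -i filebeat-7.17.0-amd64.deb
# The package registers a systemd service that you can then manage:
sudo systemctl enable --now filebeat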
So, how do you start Filebeat? Does ./filebeat -e work if you execute it inside the unzipped directory?
OK, so I did the installation wrong, right?
I installed it in my home directory, unzipped it, entered the folder, and edited the filebeat.yml file that was inside.
So I had already unzipped it before.
If I run ./filebeat inside the folder I get this:
-bash: ./filebeat cannot execute binary file: Exec format error