I want to output data from Filebeat to a file on an NFS mount on a remote server. Please suggest how we can do that.
I have found the option to write data to a file using the following configuration, but how can we forward the data to a file on a remote server?
# File as output
# Options:
# path: where to save the files
# filename: name of the files
# rotate_every_kb: maximum size of the files in path
# number_of_files: maximum number of files in path
file:
  path: "/tmp/filebeat"
  filename: filebeat
  rotate_every_kb: 1000
  number_of_files: 7
Once Filebeat forwards logs to this file, I understand that Logstash can read data from this file by configuring the file path as an input. Also, can Logstash output to a file after applying the filters, so that Elasticsearch can then read it from a particular file?
How can I configure Elasticsearch to read input from a file? I hope my query is clear.
Filebeat --> file on NFS volume --> Logstash --> file --> Elasticsearch. I'm trying this because of firewall issues that don't allow output directly to Logstash and Elasticsearch.
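For the Logstash stage of that chain, a minimal pipeline sketch might look like the following. This assumes the Filebeat file output writes one JSON event per line; the paths and the Elasticsearch host are placeholders for your environment, not values from this thread.

input {
  file {
    # Hypothetical path: the files Filebeat wrote on the shared/NFS volume
    path => "/mnt/nfs/filebeat/filebeat*"
    start_position => "beginning"
    # Filebeat's file output writes one JSON document per line
    codec => "json"
  }
}

filter {
  # add grok/mutate/etc. filters here as needed
}

output {
  # Elasticsearch does not read files on its own, so Logstash pushes the events into it
  elasticsearch {
    hosts => ["localhost:9200"]  # placeholder host
  }
}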
If you mount an NFS volume, any program can access the files in the volume just as if they were on a local disk (well, file permissions still come into play). In other words, yes, Filebeat can write files there, and there's no port number involved.
Thanks for the information. To be more specific on my question, we are not able to mount Filebeat to the server where the logs are present. The NFS would be mounted on a different server, and Filebeat should be able to write to that server where the NFS mount is available. Would that be possible?
One more question: would it be possible for Logstash to read the input from the file where Filebeat has written the data in the Beats format? If so, what should the input format be?
Sorry, your support is very beneficial to us. As I got your initial response very fast, I thought you would be responding soon. Please take your own time to respond. I expected someone in the group to respond to the query, and it is very helpful for us.
NFS is something supported/provided by the OS and fully transparent to its users/processes. You cannot 'mount Filebeat'. You have to mount the remote disk via NFS into the local directory tree. Filebeat will be configured as if writing to a local disk, but using the remote mount point (your config is already correct, just update the path to your mount point).
With NFS being so common, just google for 'linux mount nfs'. Better, add your distribution name to the search to find some administrator resources/handbooks. Some results: redhat/centos, ubuntu wiki, archlinux.
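As a rough illustration of the above (the server name, export, and mount point below are made up, not taken from this thread), mounting the remote export and pointing Filebeat's path at it could look like:

# Mount the remote NFS export into the local directory tree
# (hypothetical server name and paths)
sudo mkdir -p /mnt/remote_logs
sudo mount -t nfs nfs-server.example.com:/export/logs /mnt/remote_logs

# Then, in filebeat.yml, keep the same file output but update the path:
#   file:
#     path: "/mnt/remote_logs/filebeat"
#     filename: filebeat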
There is a typo error: what I meant is that we are not able to mount NFS on the server where Filebeat is installed.
Thanks a lot for the information. So what I understand is that Filebeat can write to an NFS volume mounted from a remote server by configuring the "path" to the mount point. I'll go through the links to understand more about NFS.
Maybe your firewall is blocking NFS? Wrong credentials? Which NFS version are you using in your environment? Pre-v4 does not encrypt network traffic.
Shared disks are kinda tricky. While it's possible, you have to check in some test environment what happens if the NFS server becomes unavailable (e.g. start dropping packets via a firewall rule and restart much later). What happens if NFS becomes unavailable and the machine running Filebeat is restarted (where does buffering happen)? Any data loss in any of these scenarios? When writing to NFS, Filebeat fully relies on NFS not dropping any data, as Filebeat cannot detect any network problems. The advantage of filebeat->logstash is support for encryption via TLS plus detection of network failures, with support for re-sending lost log lines.
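For comparison, if the firewall is ever opened, the filebeat->logstash path mentioned above is configured roughly like this. The host, port, and CA path are placeholders, and the exact section name depends on the Filebeat version (1.x uses tls, 5.x and later use ssl):

output:
  logstash:
    hosts: ["logstash.example.com:5044"]   # placeholder host and port
    tls:
      # CA used to verify the Logstash server certificate (placeholder path)
      certificate_authorities: ["/etc/pki/tls/certs/logstash-ca.crt"]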
Thanks for the information. Filebeat is installed on a chassis from which the logs have to be forwarded, and an NFS mount is not possible on the chassis. I'm planning to come up with a design for forwarding logs from the chassis to Elasticsearch, as there is a firewall blocking direct transfer from Logstash to Elasticsearch. So the plan was to write to NFS and have Elasticsearch read it from there.
Again, Logstash doesn't care if an input file resides on an NFS-mounted volume. That said, networked file systems are notorious for having different edge case behavior and Logstash is usually not used for reading files from NFS. I'd try it out thoroughly before committing to anything.
File handling (reading/writing) is pretty similar in Filebeat/Logstash and any other product you will find on the market. NFS is no 'communication channel' per se, but a remote 'disk'. If communication is done via files, one can use NFS (as the service is provided transparently by the OS to the services running on it).
Why have another Logstash instance in between writing to disk?