Filebeat HA servers

I have a requirement where the log file I am trying to pull can be on 1 of 4 servers at any given time. My application runs in an HA cluster on 4 servers. Can I install Filebeat on all 4 servers and have it keep track of state, so that after a failover it continues sending updates from where the Filebeat instance on a different server left off?

Hi @madhan0618, welcome to the Elastic community forums!

Let me see if I have understood your setup correctly. You have 4 servers, each of which has a log file, but at any given time only one of those 4 log files is being written to. When there's a failover, another server's log file starts being written to instead of the previous one.

I think you can simply install Filebeat on all 4 servers and have each read its respective log file. Let's say there's a failover from server A to server B. The Filebeat on server A will finish processing the log file on that server, while the Filebeat on server B resumes processing the log file on server B from its last read offset.
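A minimal filebeat.yml sketch of this approach, deployed identically on all 4 servers. The log path is the one given later in this thread; the Elasticsearch host is a placeholder for your own output:

```yaml
filebeat.inputs:
  - type: log
    paths:
      # Same path on every server; only one server has the file at a time.
      - /app/mq/qmgr/logs/AMQERR01.log

output.elasticsearch:
  # Placeholder; point this at your actual cluster.
  hosts: ["elasticsearch.example.com:9200"]
```

Each Filebeat keeps its own local registry of read offsets, which is why this only works if each server's copy of the file retains its history between failovers.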

I should have been clearer. The log file is located on a SAN mount path that is mounted on only 1 of the 4 servers in the cluster at any given time.

For example, currently:

server1 - /app/mq/qmgr/logs/AMQERR01.log [this file system is mounted on server 1]
server2
server3
server4

When MQ fails over, this file system, and with it the log file, will be on a different server:

server1
server2
server3 - /app/mq/qmgr/logs/AMQERR01.log [this file system is now mounted on server 3]
server4

It could be on any of the 4 servers.

If I have Filebeat on all 4 servers, will it error out when the log file is no longer available on the current server? And will the Filebeat on the failed-over server start pushing changes from where the previous server's Filebeat left off?

Thanks for the clarification, it helps.

Given that the file can only be present on one of the servers at any given time: when it "disappears" from a server, Filebeat on that server will remove its entry from the internal registry. Then, if the file "appears" on that server again, Filebeat will create a new entry for the file in the registry and start reading it from the beginning, which is not what we want. So the solution I suggested earlier will not work for your use case.
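As a hedged aside: the registry cleanup described above is governed by the log input's `clean_removed` setting (which defaults to true), so one avenue worth experimenting with is keeping the state entry after the file disappears. This is a sketch, not a tested fix for the SAN-failover scenario, and each server's registry remains local, so a server can only ever resume from its own last offset:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /app/mq/qmgr/logs/AMQERR01.log
    # Keep the registry entry even after the file disappears from this
    # server, so the stored offset survives until the mount comes back.
    clean_removed: false
    # Close the harvester promptly when the file is removed (default: true).
    close_removed: true
```

Note that if the file has rolled over while mounted elsewhere, a preserved offset may no longer correspond to the right position in the file, so this needs careful testing against your failover behavior.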

Just thinking out of the box a bit, since this file is available via a mount anyway, could you have a separate server that this file is always mounted on (as read-only) and run Filebeat just on this server all the time?

Shaunak

No, that is not possible; the mounting is managed by Veritas. During failover, the log file moves to a different server.

Let's say that on server 1, Filebeat read the log file up to offset 10, then the file moves to a different server and later comes back to server 1. Will Filebeat only send the new entries generated after the switch-over, or will it try to read from offset 10? For all we know, the log file could have completely rolled over in the meantime. If it will only send the latest entries, that will work.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.