I have a suite of custom beats that are all configured to output to Logstash and have internal monitoring collection enabled. The monitoring data being collected is currently of two types (as supported out of the box by libbeat): system-level metrics and beat/pipeline metrics.
We also need to send additional custom beat metrics covering business-level log file processing health status. There is no off-the-shelf support for this, but perhaps I can do it through libbeat's Golang monitoring API?
For example, we have application logs being harvested that are customer-specific. The log file name includes a customerId.serverId, and the custom beat successfully ships those events to Logstash. There are hundreds of such logs being harvested by the custom beat (it does much more than Filebeat, but that's another story). The monitoring cluster successfully receives the system- and beat-level metrics that are provided out of the box.
We need to send additional custom health events, ideally to the monitoring cluster. Each of the customerId.serverId harvester workers has health status fields that ultimately need to be sent to Elasticsearch.
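To make the libbeat monitoring API idea concrete, here is roughly what I had in mind: registering a sub-registry per harvester worker under libbeat's default monitoring registry. This is only a sketch; the metric names, the `mybeat.harvesters` namespace, and the struct are my own, and the import path may differ between beats versions. What I'm unsure about is whether metrics registered this way are actually forwarded by the monitoring reporter to the monitoring cluster, or only exposed in the periodic metrics log and the HTTP stats endpoint.

```go
package harvester

import (
	// Import path for beats 7.x; older versions drop the /v7.
	"github.com/elastic/beats/v7/libbeat/monitoring"
)

// harvesterHealth holds per-worker health metrics (illustrative names).
type harvesterHealth struct {
	lastEventTime *monitoring.Int    // unix millis of the last event shipped
	eventsDropped *monitoring.Int    // events dropped by this worker
	status        *monitoring.String // e.g. "healthy", "lagging", "stalled"
}

// newHarvesterHealth registers one sub-registry per customerId.serverId
// worker, e.g. "mybeat.harvesters.cust42.srv7".
func newHarvesterHealth(customerID, serverID string) *harvesterHealth {
	reg := monitoring.Default.NewRegistry("mybeat.harvesters." + customerID + "." + serverID)
	return &harvesterHealth{
		lastEventTime: monitoring.NewInt(reg, "last_event_ms"),
		eventsDropped: monitoring.NewInt(reg, "events_dropped"),
		status:        monitoring.NewString(reg, "status"),
	}
}

// markHealthy shows how the worker loop would update the metrics.
func (h *harvesterHealth) markHealthy(lastEventMillis int64) {
	h.lastEventTime.Set(lastEventMillis)
	h.status.Set("healthy")
}
```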
A brute-force approach would be to log those health metrics to a file and have a separate Filebeat send them, ideally to an index in the monitoring cluster.
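For completeness, the brute-force version would look roughly like this: the beat appends one NDJSON health record per worker to a file, and a sidecar Filebeat with JSON decoding ships it to a dedicated index. The record fields and file layout below are just my assumptions, not an established format.

```go
package harvester

import (
	"encoding/json"
	"os"
	"time"
)

// healthRecord is one NDJSON line describing a worker's health
// (field names are illustrative).
type healthRecord struct {
	Timestamp  time.Time `json:"@timestamp"`
	CustomerID string    `json:"customer_id"`
	ServerID   string    `json:"server_id"`
	Status     string    `json:"status"`
	LagSeconds int64     `json:"lag_seconds"`
}

// appendHealthRecord appends one JSON line to the health file that a
// sidecar Filebeat would harvest.
func appendHealthRecord(path string, rec healthRecord) error {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	rec.Timestamp = time.Now().UTC()
	line, err := json.Marshal(rec)
	if err != nil {
		return err
	}
	_, err = f.Write(append(line, '\n'))
	return err
}
```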
Having to configure, deploy, and pay the IO/CPU cost of a sidecar Filebeat feels to me like a gap in libbeat.
I imagine the best option would be for my beat to extend the events delivered to the monitoring cluster. I can develop this enhancement myself and contribute it back to the community.
I'd love to hear some thoughts. Thanks