Hi, I have a two-fold question. I'll try to keep it as simple as possible. I have searched a lot and found examples done both ways, but I still haven't found any decisive pros/cons.
- Multi-server system, with multiple physical and virtual setups.
- Centralize metrics and logs from several applications and system services into an ELK stack.
- Log app events and metrics from within our own application. This is mostly PHP-based; we are using the Monolog logging library.
1 - Should we use a centralized Redis instance (with high availability/failover) as a log collector? (PHP > Redis > Logstash > ES) Would other apps, such as system services, also feed into Redis?
1b - Or should I ditch Redis in favor of a local logstash-forwarder on each host to buffer the logs and forward them to a central host?
Basically, option 1 is: the app writes to Redis, Logstash reads from Redis and writes to ES. Option 1b is: the app writes to a local log file, logstash-forwarder ships it to a centralized Logstash, and that Logstash writes to ES. What is the most common architecture? We have Redis experience in-house and use it a lot, but I don't want to add moving parts unless it's a good idea. I was told the Redis layer is a way of protecting against data loss, but if I have logstash-forwarder on each instance, I understand logstash-forwarder will also hold the logs until the centralized/aggregator Logstash is back up?
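For reference, the Redis-buffered option (1) would look something like this on the central Logstash. This is just a sketch of what I have in mind, not our actual config; the hostname, the use of a Redis list, and the key name "logstash" are all assumptions:

```
# Central Logstash: pull JSON events that the PHP app pushed onto a Redis list.
input {
  redis {
    host      => "redis.internal"   # assumed hostname
    data_type => "list"             # consume with BLPOP from a list
    key       => "logstash"         # assumed key name, must match the producer
    codec     => "json"             # Monolog would write JSON-encoded events
  }
}

output {
  # Exact option names here vary between Logstash versions.
  elasticsearch { }
}
```

In option 1b the input block would instead be a lumberjack input receiving from the per-host logstash-forwarder agents.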
2 - What is the correct way of setting the type for the entry from within the PHP app? I tried modifying the Logstash input to something like `type => [fields][ctxt_event]` or `type => [ctxt_event]`; the former fails on startup, and the latter gives me a fixed type with the literal string 'ctxt_event'. When I omit it entirely, all events get type='logs', which I'm not setting anywhere (at least not advertently). I guess this may also be partly a Monolog implementation issue, but maybe not?
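Since the input-level `type` option apparently only accepts a literal string, I suspect what I actually need is a filter that copies the Monolog context field into the type, something like the untested sketch below. (`ctxt_event` is the field name I see Monolog's LogstashFormatter producing for a context key called "event"; the `replace` syntax may differ between Logstash versions.)

```
filter {
  # Copy the Monolog context field into the event's type.
  # %{ctxt_event} is a sprintf-style reference to that field's value.
  mutate {
    replace => { "type" => "%{ctxt_event}" }
  }
}
```

Is that the idiomatic approach, or should the type be set on the producer side instead?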
Thanks for any help. If all you have is articles or references from people who have gone down this road, that would be a great response too. I've just not really found a lot of info, and a lot of what I do find is quite old.