Best practice for logging in a microservice-based architecture


My current environment has multiple applications on multiple servers that log to local files and then use Filebeat to send the logs to Logstash for processing. This works fine, and I like that the logs will show up eventually even if ES is down for some reason; in the worst case, the logs are still available on the servers.

This is all about to change, because every new service from now on will run as a microservice in K8s.

What is the best practice for logging to ES from microservices? Most microservices won't have any persistent storage, so local log files aren't an option. I also don't want the services to suffer from ES or Logstash being unavailable. (Our ES cluster has almost 100% uptime; I just don't want the services to suffer if ES is down for whatever reason: Log4j/CVE patching, corrupt indices, power loss and so on.)

The current plan is to send most logs over HTTP to Logstash with some form of Serilog implementation, and if a log is important, to write it to a Kafka topic and have Logstash read it from there instead (I'm aware that this just moves the uptime issue to Kafka).
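To make the plan concrete, here is a minimal Python sketch of the routing idea (our actual services are .NET with Serilog; the Logstash URL, the Kafka comment, and the `important` flag are all made up for illustration):

```python
import json
import urllib.request


def build_event(service: str, level: str, message: str, **fields) -> dict:
    """Build a structured log event as a flat, JSON-friendly dict."""
    return {"service": service, "level": level, "message": message, **fields}


def send_to_logstash(event: dict, url: str = "http://logstash:8080") -> None:
    """POST one event to a Logstash `http` input (hypothetical URL)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)


def log(event: dict, important: bool = False) -> None:
    """Route important events to Kafka for durability, the rest over HTTP."""
    if important:
        # Hypothetical: hand the event to a Kafka producer, e.g. with
        # kafka-python: producer.send("logs", json.dumps(event).encode())
        pass
    else:
        send_to_logstash(event)


# Build an example event (no network calls here):
event = build_event("orders", "ERROR", "payment failed", order_id=42)
print(json.dumps(event))
```

The point of the split is that the HTTP path is fire-and-forget, while anything the business can't afford to lose gets the durable Kafka path.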

Obviously I'm lost in this and don't know what I'm talking about. What are your thoughts? Are there any best practices for this in place already? I'd like to use Logstash so that I can control a little better which index the logs end up in.

Have you seen this article? Monitoring Kubernetes the Elastic way using Filebeat and Metricbeat | Elastic Blog
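The short version of that approach is to let the pods log to stdout and run Filebeat as a DaemonSet with Kubernetes autodiscover, so no service needs its own storage or shipping logic. Roughly something like this (the hostname is a placeholder for your cluster):

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true   # pick up per-pod settings from pod annotations

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]
```

Filebeat keeps its own registry and retries, so the decoupling from ES availability you had with local files is largely preserved at the node level.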

Do you need Logstash in the architecture? If you need to transform the data, you can alternatively do it in ingest pipelines.
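For example, an ingest pipeline can do Logstash-style parsing and enrichment on the Elasticsearch side; something along these lines in Dev Tools (pipeline name and fields are illustrative):

```
PUT _ingest/pipeline/k8s-logs
{
  "description": "Parse and enrich container logs (illustrative)",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{LOGLEVEL:log.level} %{GREEDYDATA:log.message}"]
      }
    },
    {
      "set": {
        "field": "event.ingested",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```

Filebeat can then reference the pipeline in its Elasticsearch output, which removes one moving part from the architecture.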
