Hello,
I’m working on a .NET application that uses Serilog to ship logs to Elasticsearch. I want to apply a different Index Lifecycle Management (ILM) policy to each log level: for example, separate ILM policies for debug, info, and error logs.
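For context, the policies themselves are ordinary ILM policies created up front. They look something like this (the names match my Serilog config below; the phase settings here are illustrative placeholders, not my real values):

```
PUT _ilm/policy/logs-debug-ilm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "3d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

logs-info-ilm-policy and logs-error-ilm-policy have the same shape, just with longer retention for the more important levels.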
Currently, my setup uses multiple sub-loggers in Serilog, each targeting a different data stream with its own ILM policy, like this:
```csharp
using Elastic.Ingest.Elasticsearch.DataStreams; // DataStreamName
using Serilog;
using Serilog.Events;

Log.Logger = new LoggerConfiguration()
    // The root logger must let Debug events through, or the debug sub-logger never sees them.
    .MinimumLevel.Debug()
    .WriteTo.Logger(lc => lc
        .WriteTo.Elasticsearch(opts =>
        {
            opts.DataStream = new DataStreamName("logs", "debug", "ex");
            opts.IlmPolicy = "logs-debug-ilm-policy";
            opts.MinimumLevel = LogEventLevel.Debug;
        })
        // Only Debug events reach this sink.
        .Filter.ByIncludingOnly(e => e.Level == LogEventLevel.Debug))
    .WriteTo.Logger(lc => lc
        .WriteTo.Elasticsearch(opts =>
        {
            opts.DataStream = new DataStreamName("logs", "error", "ex");
            opts.IlmPolicy = "logs-error-ilm-policy";
            opts.MinimumLevel = LogEventLevel.Error;
        })
        // Error and Fatal events go to the error stream.
        .Filter.ByIncludingOnly(e => e.Level >= LogEventLevel.Error))
    .CreateLogger();
```
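I trimmed the snippet above to two levels; the info stream in my real config follows the exact same pattern, with names mirroring the others:

```csharp
// Third sub-logger, same shape: only Information events, with their own data stream and policy.
.WriteTo.Logger(lc => lc
    .WriteTo.Elasticsearch(opts =>
    {
        opts.DataStream = new DataStreamName("logs", "info", "ex");
        opts.IlmPolicy = "logs-info-ilm-policy";
        opts.MinimumLevel = LogEventLevel.Information;
    })
    .Filter.ByIncludingOnly(e => e.Level == LogEventLevel.Information))
```

One thing I'm already aware of: an event that matches none of the filters (e.g. Warning in this setup) is simply never shipped anywhere, so every level I care about needs its own sub-logger.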
My questions:
- Is this multiple-sub-logger approach efficient and scalable for a .NET app writing to Elasticsearch?
- Is there a recommended or better way to handle ILM policies per log level when using Serilog with Elasticsearch?
- Are there any known pitfalls or performance impacts from using multiple data streams and ILM policies this way?
Thanks for any advice or best practices you can share!