Best Practice for Using Multiple ILM Policies with Serilog and Elasticsearch Based on Log Levels in a .NET Application?

Hello,
I’m working on a .NET application where I’m using Serilog to send logs to Elasticsearch. I want to apply different Index Lifecycle Management (ILM) policies based on the log level — for example, separate ILM policies for debug, info, and error logs.

Currently, my setup uses multiple sub-loggers in Serilog, each targeting a different data stream with its own ILM policy, like this:

```csharp
.WriteTo.Logger(lc => lc
    .WriteTo.Elasticsearch(opts =>
    {
        opts.DataStream = new DataStreamName("logs", "debug", "ex");
        opts.IlmPolicy = "logs-debug-ilm-policy";
        opts.MinimumLevel = LogEventLevel.Debug;
    })
    .Filter.ByIncludingOnly(e => e.Level == LogEventLevel.Debug)
)
.WriteTo.Logger(lc => lc
    .WriteTo.Elasticsearch(opts =>
    {
        opts.DataStream = new DataStreamName("logs", "error", "ex");
        opts.IlmPolicy = "logs-error-ilm-policy";
        opts.MinimumLevel = LogEventLevel.Error;
    })
    .Filter.ByIncludingOnly(e => e.Level >= LogEventLevel.Error)
)
```

My questions:

  • Is this multiple sub-logger approach considered efficient and scalable for Elasticsearch in a .NET app?
  • Is there a recommended or better way to handle ILM policies per log level when using Serilog with Elasticsearch?
  • Are there any known pitfalls or performance impacts from using multiple data streams and ILM policies this way?

Thanks for any advice or best practices you can share!

Hi,
I’d really appreciate it if anyone could share some insights or experiences on this topic!

Thanks in advance!

Hi @a_mandel 👋

I can't comment on the .NET/Serilog side of things, but from an Elasticsearch perspective, separating the log levels into different data streams so that each can be configured independently sounds reasonable.
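For illustration, the per-stream ILM policies would typically differ only in retention. Here's a minimal sketch of one such policy, assuming the policy name from your snippet and hypothetical rollover/retention values (tune these to your actual volume and retention requirements):

```json
PUT _ilm/policy/logs-debug-ilm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "3d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

An analogous `logs-error-ilm-policy` could simply use a longer `min_age` in the delete phase to keep error logs around longer than debug logs.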

One idea came to mind in case splitting the log streams on the Serilog side turns out to be undesirable, or in case you want to encapsulate the complexity on the Elasticsearch side: the reroute ingest processor can conditionally route your documents to different data streams too. The condition could check the log level field, for example.
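As a sketch of that approach, a pipeline like the following could be attached to the incoming data stream (this assumes the ECS `log.level` field and reuses the dataset names from your snippet; note that once a `reroute` processor fires, the rest of the pipeline is skipped for that document):

```json
PUT _ingest/pipeline/logs-split-by-level
{
  "processors": [
    {
      "reroute": {
        "if": "ctx.log?.level == 'error'",
        "dataset": "error"
      }
    },
    {
      "reroute": {
        "if": "ctx.log?.level == 'debug'",
        "dataset": "debug"
      }
    }
  ]
}
```

With this in place, Serilog could write everything to a single data stream, and Elasticsearch would fan documents out to `logs-error-ex` and `logs-debug-ex` (each with its own ILM policy) based on the level field.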

Hi @weltenwort,

I'll give it a try!
Thank you very much!