Configure Serilog in ASP.NET Core – few practical tips

In one of my recent projects we were using Serilog as the logging solution. The default configuration did not work for us, so we needed to tweak it a bit. We wanted a quick and clean way to log requests to the API and the responses from it.

Make Serilog available in Startup.cs

It is not enough to add Serilog just as middleware in Startup.cs, because then you cannot use the logger inside Startup.cs itself. It is much better to configure it directly in Program.cs.

Good resources that explain this further are Nicholas Blumhardt’s blog post and the Serilog GitHub repo.

In short, you should add the logger directly in the Program.cs file:

using Serilog;

public class Program
{
    public static int Main(string[] args)
    {
        Log.Logger = new LoggerConfiguration()
            .CreateLogger();

        try
        {
            Log.Information("Starting web host");
            CreateHostBuilder(args).Build().Run();
            return 0;
        }
        catch (Exception ex)
        {
            Log.Fatal(ex, "Host terminated unexpectedly");
            return 1;
        }
        finally
        {
            Log.CloseAndFlush();
        }
    }
}

And then add UseSerilog() to the generic host builder in CreateHostBuilder():

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseSerilog() // <-- Add this line
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

Now you can use the logger anywhere through dependency injection, including the Startup.cs class. This allowed us to use the logger in configuration methods, so we can log whether everything was configured properly on application launch.
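As a quick illustration of what this enables (the log message below is made up for this sketch): because Log.Logger is initialized in Program.Main before the host is built, the static Log class is already safe to use inside Startup's configuration methods:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Serilog;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // The static Log class was initialized in Program.Main,
        // so it is available here before the DI container is even built.
        Log.Information("Services configured");
    }
}
```

An ILogger&lt;T&gt; can also be injected as a parameter of the Configure() method if you prefer the Microsoft logging abstractions over the static Log class.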

Output Serilog to console

Serilog offers a bunch of “sinks” that you can add through NuGet. Our desired tool for checking logs was DataDog. However, after discussing it with our system architect and our DevOps engineer, we decided not to use the premade DataDog sink that Serilog already has, but to output the logs to the console instead.

This allows the DevOps engineer to take the console output and connect it to any tool they want. In the future, if the logging tool needs to be changed, it can be swapped without any changes in the code (e.g. switching to a different sink). Making systems flexible and modular is important, so we decided to do it this way.

Configuring Serilog to output to the console is a one-liner in Program.cs:

Log.Logger = new LoggerConfiguration()
        .WriteTo.Console()
        .CreateLogger();

Customize Log message format

Each logger has a default template for the message that is to be logged. We needed to tweak this a bit, so we passed the outputTemplate parameter to .WriteTo.Console(), as in this example:

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console(outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj}{NewLine}{Exception}")

The parameters for the outputTemplate can be found in the Serilog’s Formatting output documentation.

One of the reasons we did this is the Level parameter. We agreed that the full words Serilog uses by default to display the log entry level are too long, and that three-letter abbreviations (INF, ERR…) are more common in the industry. Hence the u3 format for the Level parameter, which produces entries like [10:15:32 INF] Starting web host.

Configuring minimum log level

As we all know, log messages have levels of severity, and you need control over what gets logged. Ideally this control is easily configurable, so you can have different log levels for your development, staging and production environments.

Configuring the minimum log level requires just one method on the logger. Here GetLogLevelFromAppSettings() is a dummy method you need to implement yourself to fetch the level; my suggestion is to read it from environment variables or appsettings.json:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Is(GetLogLevelFromAppSettings())
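A minimal sketch of how such a helper could look (the method name comes from the article; the SERILOG_MIN_LEVEL environment variable is this sketch's own assumption), reading the level from an environment variable and falling back to Information:

```csharp
using System;
using Serilog.Events;

public static class LoggingSettings
{
    // Reads e.g. SERILOG_MIN_LEVEL=Warning from the environment;
    // falls back to Information when the variable is missing or invalid.
    public static LogEventLevel GetLogLevelFromAppSettings()
    {
        var raw = Environment.GetEnvironmentVariable("SERILOG_MIN_LEVEL");
        return Enum.TryParse<LogEventLevel>(raw, ignoreCase: true, out var level)
            ? level
            : LogEventLevel.Information;
    }
}
```

Reading from appsettings.json via IConfiguration would work the same way; only the lookup changes.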

Remove “extra” log messages that come in by default

Once you add a logger, ASP.NET Core emits some default log messages that help you debug your application. These are really good and provide a lot of useful information, so they are a great starting point. If you decide not to customize your logging, be sure to have at least these in place.

However, on a larger project you want full control of the log messages, and these default messages can become clutter that just gets in the way without bringing value. This is especially an issue if you are on a “pay by amount of log entries” tier of a platform like DataDog.

We wanted to remove the messages that ASP.NET Core outputs by default. Luckily, the logger configuration allows this easily:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Override("Microsoft", Serilog.Events.LogEventLevel.Warning)

The first argument of MinimumLevel.Override() is the source context, usually the namespace-qualified type name of the class associated with the logger.

The second parameter is the minimum level at which messages from that context will be logged. So, in the example given, Warning messages (and messages more severe) will be logged, while Information messages won’t. But this only applies to messages from the Microsoft context!

It is possible to chain as many MinimumLevel.Override() calls as you need, so a configuration like this is possible:

    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    .MinimumLevel.Override("System", LogEventLevel.Error)

Final configuration of the logger

So, in the end, what we needed to configure was:

  • Availability in Startup.cs
  • Console output
  • Custom output format (with 3 letter abbreviations for log level)
  • Configurable minimum log level to be logged
  • Overriding default Microsoft’s log messages

Our final configuration looked like this:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Is(AppSettings.GetLogEventLevel(Configuration))
    .MinimumLevel.Override("Microsoft", Serilog.Events.LogEventLevel.Warning)
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj}{NewLine}{Exception}")
    .CreateLogger();

Middleware for logging requests and responses

When developing a web API it is very practical to log every HTTP request and every response code. In my experience, request parameters or body content are very useful for debugging, while logging the full response rarely makes sense: if you have the full request, you should be able to easily recreate the response that request got and troubleshoot any issues.

While it is possible to add a logging line to every controller method you have, that just feels wrong. The correct approach is to develop a middleware that does this.

A middleware, in short, runs some logic on the incoming HTTP request, passes it on to the following middlewares, and, once it gets the response back, optionally runs some more logic on the response and passes it further along.

Middlewares are explained in the Microsoft documentation, which also provides a schema of how they work:

(Microsoft documentation: middleware pipeline schema)

To develop our own middleware, we created a class RequestResponseLoggingMiddleware. This class needs a constructor with a few dependencies injected into it:

private readonly RequestDelegate _next;
private readonly IHttpContextAccessor _accessor;
private readonly ILogger<RequestResponseLoggingMiddleware> _logger;

public RequestResponseLoggingMiddleware(RequestDelegate next,
    IHttpContextAccessor accessor,
    ILogger<RequestResponseLoggingMiddleware> logger)
{
    _next = next;
    _accessor = accessor;
    _logger = logger;
}

The RequestDelegate allows us to pass the request on to the next middleware in line. The ILogger is, obviously, used for logging. I am passing in IHttpContextAccessor so I can get information about the logged-in user.

The second very important thing that needs to be done is writing the Invoke method that automatically (by definition of a middleware) takes in the HttpContext as an input parameter:

public async Task Invoke(HttpContext context)
{
    // Some logic on the request before it being passed on to the next middleware
    var requestResponseLog = new RequestResponseLog
    {
        RequestTime = DateTimeOffset.UtcNow,
        Request = await FormatRequest(context)
    };

    // Passing the request on to the next middleware
    await _next(context);

    // Some logic after all other middlewares have finished and we have the response
    requestResponseLog.ResponseTime = DateTimeOffset.UtcNow;
    requestResponseLog.Response = await FormatResponse(context);

    _logger.LogInformation(requestResponseLog.Request);
}

With RequestResponseLog just being a class that holds the information used for the log entry:

/// <summary>
/// Class used to keep request and response information for logging purposes.
/// </summary>
private class RequestResponseLog
{
    public string Request { get; set; }
    public DateTimeOffset RequestTime { get; set; }
    public string Response { get; set; }
    public DateTimeOffset ResponseTime { get; set; }
}

And, of course, you need to define your own FormatRequest and FormatResponse methods to format your request and response log entries.

A basic example of FormatRequest is given here; FormatResponse can be written in a similar way:

protected Task<string> FormatRequest(HttpContext context)
{
    HttpRequest request = context.Request;
    // Pull the logged-in user's email from the claims, if present
    var emailClaim = _accessor.HttpContext.User.FindFirst(c => c.Type == ClaimTypes.Email);
    string userEmail = emailClaim != null ? emailClaim.Value : "No user";
    var logMessage = $"IP: {request.HttpContext.Connection.RemoteIpAddress?.MapToIPv4()}, " +
        $"Schema: {request.Scheme}, Method: {request.Method}, " +
        $"User: {userEmail}, " +
        $"Host: {request.Host}, " +
        $"Path: {request.Path}";
    return Task.FromResult(logMessage);
}
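For completeness, a FormatResponse along the same lines could look like the sketch below. This is only a sketch under the assumption that the status code and content type are enough; capturing the response body would additionally require buffering it with a replacement stream.

```csharp
protected Task<string> FormatResponse(HttpContext context)
{
    HttpResponse response = context.Response;
    // Only the status code and content type are logged here; the response
    // body stream has already been written and is not buffered for reading.
    var logMessage = $"StatusCode: {response.StatusCode}, " +
        $"ContentType: {response.ContentType}";
    return Task.FromResult(logMessage);
}
```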

Once you have all this set up, you need to go to your Startup.cs file and set up your newly written middleware in the Configure method like this:

app.UseMiddleware<RequestResponseLoggingMiddleware>();

This will now automatically log each and every request and response to your API.
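Where in the pipeline you register it matters: middleware runs in registration order, so placing it early ensures every request passes through it before other components handle it. A sketch of the Configure method (the surrounding calls are the usual ASP.NET Core defaults, not requirements of this middleware; adjust to your project):

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Register the logging middleware early so every request is captured
    app.UseMiddleware<RequestResponseLoggingMiddleware>();

    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}
```

Since the middleware's constructor asks for IHttpContextAccessor, remember to register it in ConfigureServices with services.AddHttpContextAccessor().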

Conclusion

Proper logging is something the DevOps team and the development team need to work on together, at least when defining what will be logged and where. This step is important because having all the logs in place from the get-go enables several things:

  • understanding possible errors your clients might be having in production
  • understanding which parts of the application you built are being used and which are being neglected for some reason
  • understanding possible bottlenecks
  • understanding whether the system is performing (both whether it performs at all and whether it performs optimally)

So, one of the things my team now does at the start of every project is to set up logging properly, so we can get as much information as possible early on.

4 thoughts on “Configure Serilog in ASP.NET Core – few practical tips”

  1. Hi Mario,

    Thanks for article. One question though 🙂

    Have you considered using structured logging for your request logger? This way you can send “parameters” to the logger, and the logging provider (e.g. Seq) can index these fields and allow you to search through them.
    Also, see this post for more info on how to set up serilog request logging much easier.
    https://andrewlock.net/using-serilog-aspnetcore-in-asp-net-core-3-logging-the-selected-endpoint-name-with-serilog/

    1. Hey, Alan,

      Thanks for the comment!
      The article from Andrew looks great, will definitely analyze it deeper and (perhaps) add an addition to this particular article. He is tackling some additional stuff in there that we did not need for this project, but it’s definitely great to have the resources to do more.

      Regarding structured logging – DataDog (which we used) has a parser that was easily configurable to parse our strings, so we did not do it. It could be an approach. However, if it is possible to easily write a parser that works with a humanly readable string, then I think that a humanly readable string is the way to go, because if you ever need to open the files directly, you can find your way more easily. And the machine can parse it anyway.
