I have an application that writes logs to Elasticsearch using Serilog. I configured the APM server using docker-compose. Once I start the application, perform some operations (navigate through pages in the browser), and then close the application, those logs are recorded to Elasticsearch. I came across an article that talks about correlating logs with APM; since I am not using Python in this application I only followed a few of the steps, and I noticed that there are transactions inside of APM.
How can I tie these logs together? Is there a unique variable/ID/key that ties together all the logs recorded in one single transaction (when I started the application, performed operations, then closed the application)?
When I looked into each of the transactions, I noticed that they have a transaction_id and a trace_id, but they change with each operation that I perform. I want to know whether this is possible and, if it is, how I can gather all the logs that pertain to a single transaction. For instance, if I query by a single ID, all of those logs would be returned.
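To illustrate the kind of lookup I mean, here is a rough sketch using the NEST client I already register in Startup.cs. The index pattern and the fields.ElasticApmTraceId field name are just my guesses at where the Serilog Elasticsearch sink stores the enricher's properties; I have not verified them:

using System.Collections.Generic;
using System.Threading.Tasks;
using Nest;

public static class TraceLogLookup
{
    // Sketch only: fetch every log document that shares one APM trace id.
    // The index pattern and "fields.ElasticApmTraceId" are assumptions about
    // where the Serilog Elasticsearch sink puts the enricher's properties.
    public static async Task<IReadOnlyCollection<object>> GetLogsForTraceAsync(
        IElasticClient client, string traceId)
    {
        var response = await client.SearchAsync<object>(s => s
            .Index("customersimulatorapp-logs-*")
            .Size(1000)
            .Query(q => q
                .Term("fields.ElasticApmTraceId", traceId)));

        return response.Documents;
    }
}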
docker-compose.yml
version: '2.2'
services:
  apm-server:
    image: docker.elastic.co/apm/apm-server:7.13.0
    depends_on:
      elasticsearch:
        condition: service_healthy
      kibana:
        condition: service_healthy
    cap_add: ["CHOWN", "DAC_OVERRIDE", "SETGID", "SETUID"]
    cap_drop: ["ALL"]
    ports:
      - 8200:8200
    networks:
      - elastic
    command: >
      apm-server -e
        -E apm-server.rum.enabled=true
        -E setup.kibana.host=kibana:5601
        -E setup.template.settings.index.number_of_replicas=0
        -E apm-server.kibana.enabled=true
        -E apm-server.kibana.host=kibana:5601
        -E output.elasticsearch.hosts=["elasticsearch:9200"]
    healthcheck:
      interval: 10s
      retries: 12
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:8200/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.0
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=docker-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - discovery.type=single-node
      - ES_JAVA_OPTS=-XX:UseAVX=2 -Xms1g -Xmx1g
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.0
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      - elastic
    healthcheck:
      interval: 10s
      retries: 20
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status
volumes:
  esdata:
    driver: local
networks:
  elastic:
    driver: bridge
I ran into the documentation for Elastic.Apm.SerilogEnricher (Serilog | APM .NET Agent Reference [1.10] | Elastic), so I went ahead and added it to my Startup.cs file and my Program.cs file.
Startup.cs:
namespace CustomerSimulatorApp
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;

            var logger = new LoggerConfiguration()
                .Enrich.WithElasticApmCorrelationInfo()
                .WriteTo.Console(outputTemplate: "[{ElasticApmTraceId} {ElasticApmTransactionId}] {Message:lj} {NewLine}{Exception}")
                .CreateLogger();
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllersWithViews();

            // create a new node instance
            var node = new Uri("http://localhost:9200");

            // settings instance for the node
            var settings = new ConnectionSettings(node);
            settings.DefaultFieldNameInferrer(p => p);

            services.AddSingleton<IElasticClient>(new ElasticClient(settings));
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseAllElasticApm(Configuration);

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseStaticFiles();
            app.UseRouting();
            app.UseAuthorization();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllerRoute(
                    name: "default",
                    pattern: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}
Program.cs:
namespace CustomerSimulatorApp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .UseSerilog((context, configuration) =>
                {
                    configuration.Enrich.FromLogContext()
                        .Enrich.WithElasticApmCorrelationInfo()
                        .Enrich.WithMachineName()
                        .WriteTo.Console()
                        .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:Uri"]))
                        {
                            IndexFormat = $"{context.Configuration["ApplicationName"]}-logs-{context.HostingEnvironment.EnvironmentName?.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}",
                            AutoRegisterTemplate = true
                        })
                        .Enrich.WithProperty("Environment", context.HostingEnvironment.EnvironmentName)
                        .ReadFrom.Configuration(context.Configuration);
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                })
                .UseAllElasticApm();
    }
}
This is what I see in my console...
However, when I run the program, I get different transaction IDs and trace IDs. Is there something I am doing incorrectly in the way I set up Serilog to correlate my logs, based on the documentation I read?