One index to rule them all... or separate ones on a microservice architecture

Hi guys!

We are migrating our monolith to a new microservice architecture and want to start using the ELK stack right from the beginning.

We are using Spring Boot to create the microservices, logging with Logback and this encoder: net.logstash.logback.encoder.LogstashEncoder. It produces log entries like:

{"@timestamp":"2019-11-09T00:58:24.317+00:00","@version":1,"message":"Starting MiscApplication v1.0.0-SNAPSHOT on misc-6766fd867c-g4plh with PID 1 (/misc.jar started by root in /)","logger_name":"com.enterprise.MiscApplication","thread_name":"main","level":"INFO","level_value":20000}
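For context, the encoder can be wired up in the Logback configuration roughly like this (a sketch; the appender name and the choice of a console appender are assumptions, not our exact setup):

```xml
<configuration>
  <!-- Emit every log event as a single JSON line, like the example above -->
  <appender name="JSON_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON_CONSOLE"/>
  </root>
</configuration>
```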

Which is perfect for Logstash processing.

In our API Gateway there are some log entries whose message field contains a whole access log line, and we want to parse it to extract the client IP, the HTTP method, the HTTP status code, etc.
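That parsing could be done with a grok filter in the Logstash pipeline; this is just a sketch that assumes the embedded access log line follows the common Apache format (the resulting field names, like clientip, verb and response, come from the stock COMMONAPACHELOG pattern):

```
filter {
  # Try to parse the message as a common-format access log line;
  # entries that don't match are only tagged, not dropped
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
    tag_on_failure => ["_not_access_log"]
  }
}
```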

Other services also produce some log entries with application data that we want to parse from the message field: user identifier, application return code, money involved in the request, etc.
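Those fields could be pulled out with a dissect filter; this sketch assumes a purely hypothetical message layout (`user=... code=... amount=...`) just for illustration, since the real format will differ:

```
filter {
  # Hypothetical layout: "user=jdoe code=OK amount=42.50"
  dissect {
    mapping => { "message" => "user=%{user_id} code=%{app_return_code} amount=%{amount}" }
  }
}
```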

Here is the question: can we use just one index for the three different kinds of entries?

We are not so sure about sparse fields and their consequences: a general entry will have the client IP and user fields empty, an access log entry will have the user identifier field empty, and an application log entry will have the HTTP status field empty, and so on. Is this a good approach, or should we use three different indices?

Thanks in advance!

Hi @nicolas.orbes,

If you have similar mappings it's better to use one index <--- the usual recommendation on this forum, but beyond that it depends on your case.
Usually I go with one index plus an alias (the alias is important): later, if the index gets too fat, you can split or merge without downtime because everything queries the alias :grinning:.
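For example (a sketch; the index and alias names here are made up): create the index and point an alias at it, so Logstash and Kibana only ever reference the alias:

```
PUT /app-logs-000001

POST /_aliases
{
  "actions": [
    { "add": { "index": "app-logs-000001", "alias": "app-logs" } }
  ]
}
```

Later you can atomically swap the alias to a new index in a single _aliases call, so readers never see downtime.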

Thanks @gabriel_tessier for your answer! We are going to create one index, follow your alias recommendation, and see how it goes...