Merging Rails log lines

Hi,

I am looking at setting up some log aggregation using Filebeat and Elasticsearch. We currently have around 200 servers from which logs will be aggregated, with aspirations for growth - the attraction of Elasticsearch is that it will be able to scale.

Some of the logs to be aggregated are from Rails applications. Ideally, all log lines for a particular HTTP request would end up in a single record in Elasticsearch.

On a busy server, log lines from different HTTP requests will be interleaved, so Filebeat's multiline merging will not work. Fortunately, Rails can prefix log lines with a request identifier, like this:

[a00c54e0-874f-4358-a7b9-accef0c407c6] Started GET "/dashboard" for 127.0.0.1 at 2014-11-24 15:48:30 +0000

So, in theory, it should be pretty straightforward to aggregate the associated lines. But how exactly, with Filebeat and Elasticsearch?

One thought is to send logs through Logstash and use the aggregate filter (https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html) to merge the related lines. However, if/when we need to scale to multiple machines running Logstash, we will have problems, as all the related log lines will not pass through the same aggregation point. That is, some log lines for a particular request will go through one Logstash instance and others through another, so the lines for that request will be aggregated into two or more separate records in Elasticsearch.
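
For reference, a rough sketch of what such an aggregate filter might look like, assuming the request identifier has already been parsed into a request_id field (the field name and the 30 second window are only illustrative, and the aggregate filter requires running the pipeline with a single worker, -w 1):

filter {
  aggregate {
    # group all lines that share the same Rails request identifier
    task_id => "%{request_id}"
    # collect each raw line into the in-flight map and drop the per-line event
    code => "map['lines'] ||= []; map['lines'] << event.get('message'); event.cancel()"
    # after 30 seconds of inactivity, emit one merged event for the request
    push_map_as_event_on_timeout => true
    timeout => 30
    timeout_task_id_field => "request_id"
  }
}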

I could aggregate the logs in Elasticsearch using the update API; however, this would not be performant.

Ideally, aggregation would be done on the origin server. I was looking to see whether Filebeat could pipe log lines to an external process to perform this specialised processing, but there does not appear to be such an option.

Does anyone here have any ideas on how to solve this problem?

Regards,

Richard

To me, the aggregate filter is the most natural solution to this problem. You basically want to implement some (time-window based) join operation on the ID. The catch is that you also need some parsing in order to extract the ID.
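
A parsing sketch for the line format shown above might look like this (grok's built-in UUID pattern matches the Rails request identifier; the field names are just illustrative):

filter {
  grok {
    # pull the Rails request identifier off the front of each line
    match => { "message" => "\[%{UUID:request_id}\] %{GREEDYDATA:log_message}" }
  }
}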

Then again, do you really need to correlate/join the data before indexing? Sometimes it's good enough to just parse the contents in order to build dashboards; one can still use the ID for filtering in ES/Kibana.

For implementing a distributed join, you need a stable partitioning based on the ID. Using the Elastic Stack, this might require two layers of Logstash: the first layer parsing the messages (and extracting the ID) and the second layer doing the correlation (using the aggregate filter). The first layer will need to choose its output based on a hash of the ID (unfortunately, no LS output supports stable partitioning out of the box -> this requires quite some config in LS):

output {
  if [@metadata][hash] == 0 {
    # route to second-layer Logstash instance 0
  } else if [@metadata][hash] == 1 {
    # route to second-layer Logstash instance 1
  }
  ...
}
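
One way to produce that [@metadata][hash] value in the first layer, assuming the ID has been parsed into a request_id field and there are two second-layer instances, would be a fingerprint filter plus a small ruby filter:

filter {
  # MURMUR3 yields a stable integer hash of the request ID
  fingerprint {
    source => "request_id"
    target => "[@metadata][fingerprint]"
    method => "MURMUR3"
  }
  # reduce it modulo the number of second-layer Logstash instances (2 here)
  ruby {
    code => "event.set('[@metadata][hash]', event.get('[@metadata][fingerprint]') % 2)"
  }
}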

Hi Steffen,

You might be right about the need to correlate/join before indexing. This could well be a premature optimisation. I was hoping someone would say, "have you seen such-and-such option, which does this?" - something I had missed in the documentation.

Thanks for the suggestion to layer Logstash.

Regards,

Richard
