Best practice for log filters for multiple log types

Hi,
I am working on setting up the ELK stack to aggregate my IIS and IVR logs. The two services run on different servers. I have already set up a Logstash pipeline for the IIS logs and now need to set one up for the IVR logs. Should I use the same pipeline and add a filter for the IVR logs, or create a new pipeline with a new index?

I use Filebeat to ship the logs and use the fields option in the prospectors to identify the IIS and IVR logs:

fields: {log_type: iis}
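For reference, the relevant part of my filebeat.yml looks roughly like this (the paths are just placeholders for the real log locations):

filebeat.prospectors:
  - input_type: log
    paths:
      - C:\inetpub\logs\LogFiles\*\*.log
    # custom fields arrive in Logstash under [fields][log_type]
    # unless fields_under_root is enabled
    fields: {log_type: iis}
  - input_type: log
    paths:
      - D:\IVR\logs\*.log
    fields: {log_type: ivr}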

Thanks.

How you do your pipeline is up to you. This is how I have mine set up:

In my /logstash/conf.d directory, I have 01-input.conf as the file that handles all of my inputs.
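As an example, a minimal 01-input.conf for Beats traffic can be as simple as this (5044 is the usual Beats port; adjust it if yours differs):

input {
  # listen for events shipped by Filebeat
  beats {
    port => 5044
  }
}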

Then, I have multiple filter .conf files numbered 10-20, such as 11-filter-iislogs.conf and 12-filter-winevents.conf. Each of those files handles only that one type of log.
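Each filter file wraps its logic in a conditional so it only touches its own log type. A stripped-down sketch of 11-filter-iislogs.conf, with a placeholder grok pattern rather than my real IIS one, would look like:

filter {
  if [type] == "iislogs" {
    grok {
      # placeholder pattern - swap in the grok that matches your IIS log format
      match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{GREEDYDATA:iis_message}" }
    }
  }
}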

Lastly, I have 99-output.conf that handles all the output options like:

output {
    # route each log type to its own daily index
    if [type] == "iislogs" {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "iislogs-%{+YYYY-MM-dd}"
      }
    }
    if [type] == "winevents" {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "winevents-%{+YYYY-MM-dd}"
      }
    }
}

and so on. For me, this makes it easier to modify a particular section rather than having one huge file to deal with.

I personally would recommend creating a separate index for each type of data you are ingesting, because by default there is a limit on the number of fields you can have per index (the index.mapping.total_fields.limit setting, which is 1000 by default in recent versions of Elasticsearch). You can raise that default, but I find the less I have to change manually, the better.
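If you ever do need to raise it, it is an index setting that you can bake into an index template. A rough sketch (the template name, pattern, and limit are just examples, and the exact template API depends on your Elasticsearch version):

PUT _template/iislogs
{
  "template": "iislogs-*",
  "settings": {
    "index.mapping.total_fields.limit": 2000
  }
}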

Having separate indices also improves Kibana/Elasticsearch performance when you create visualizations, because queries only have to look in the specified index rather than through everything you have.

I hope this helps.
