Running Logstash on all nodes in a production cluster

In my app server cluster I have 60+ nodes, where the application logs are written on all nodes under the directory structure below:

/apps/project-name/services/environment/service-name/

and it contains the following log files:

  1. XXXX-srv-express-debug.log
  2. XXXX-srv-express-error.log
  3. default-error.log
  4. default-warn.log

We use Logstash because we can filter the folders under the log location, mutate fields, and parse the logs accordingly.
My question is: is it a good idea to run the Logstash service on all 60+ servers, or can Filebeat or some other tool help me achieve the above? Current architecture flow:

logstash -> kafka -> logstash -> elasticsearch -> kibana

Does Filebeat have the ability to filter logs? Currently I observe that Logstash consumes a lot of resources on the servers, and I have also gone through multiple blogs which say the same: that Logstash is resource-hungry.

Any suggestions will be appreciated!

Regards,
Harsha

It depends :wink:

Some architectures like yours can become either:
filebeat -> kafka -> logstash -> elasticsearch -> kibana
or
filebeat -> logstash -> kafka -> logstash -> elasticsearch -> kibana
etc. (Not going to list every possibility, as they're not necessarily within the scope of your question.)

The important thing is: yes, if you can, it is often a good idea (and it is recommended) to run Filebeat on the app servers instead of Logstash, for exactly the reason you mentioned, which is performance. That's not because Logstash is bad and Filebeat is good; it's because they are different tools.

The question of whether you can do it in your use case, and whether the benefits would outweigh the cost, is the "it depends" part that you'll need to look into, research, and test.

Filebeat has a ton of features to fulfill the same use cases as Logstash in terms of harvesting files on your app servers and doing some pre-processing and filtering. You should review the Filebeat documentation, especially the sections about input definitions and processors.
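To give you an idea, here is a minimal, untested sketch of what that could look like for the directory structure you described. The glob, the extracted field names, and the Kafka broker/topic are my assumptions, and the log.file.path field assumes a recent Filebeat (7.x):

filebeat.inputs:
  - type: log
    paths:
      # one glob over project/environment/service, matching your layout
      - /apps/*/services/*/*/*.log
    # decode the JSON log lines at the source
    json.keys_under_root: true

processors:
  # pull project/environment/service out of the file path,
  # instead of splitting the path in a Logstash filter
  - dissect:
      tokenizer: "/apps/%{project}/services/%{environment}/%{service}/%{logfile}"
      field: "log.file.path"
      target_prefix: ""

output.kafka:
  # placeholder broker list and topic
  hosts: ["kafka1:9092"]
  topic: "app-logs"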

Then you'll see whether you can use Filebeat to replace Logstash on the app servers in a way that benefits your use case. Most of the time the answer is a resounding yes, because even if Filebeat cannot do everything you're currently doing with Logstash, it can most often do enough, and you can move the tasks it cannot do to a Logstash instance further down the pipeline and still end up with exactly what you had before.
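In your flow, that downstream Logstash would consume from Kafka and keep only the steps Filebeat couldn't handle, roughly like this (broker, topic, and Elasticsearch host are placeholders):

input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics => ["app-logs"]
    codec => "json"
  }
}

filter {
  # only the processing that could not be done in Filebeat stays here
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
}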

I hope that helps.

Thanks Martin for the update! I am a newbie to ELK and need a lot of help from you! Below is my conf file for Logstash:

input {
  file {
    # glob over the project/version/service directories
    path => "/apps/volume1/*/services/*/*/logs/*.log"
    codec => "json"
    sincedb_path => "/dev/null"
    start_position => "end"
    ignore_older => 0
  }
}

filter {
  mutate {
    copy => { "path" => "path_tmp" }
  }

  mutate {
    split => ["path_tmp", "/"]
    copy => { "[path_tmp][3]" => "project_tmp" }
  }

  # match project names starting with "tracker-" (=~ takes a regex, not a glob)
  if [project_tmp] =~ /^tracker-/ {
    mutate {
      add_field => { "project" => "%{[path_tmp][3]}" }
      add_field => { "version" => "%{[path_tmp][5]}" }
      add_field => { "service" => "%{[path_tmp][6]}" }
      remove_field => ["[tags][0]"]
      add_tag => ["%{[path_tmp][3]}", "%{[path_tmp][5]}", "%{[path_tmp][6]}"]
    }
  }

  mutate {
    copy => { "[path_tmp][8]" => "service_log_tmp" }
  }

  # derive a loglevel field from the log file name
  if [service_log_tmp] =~ "info" and [service_log_tmp] !~ "default" {
    mutate {
      add_field => { "loglevel" => "info" }
    }
  } else if [service_log_tmp] =~ "debug" and [service_log_tmp] !~ "-express-" and [service_log_tmp] !~ "default-debug" {
    mutate {
      add_field => { "loglevel" => "debug" }
    }
  } else if [service_log_tmp] =~ "-express-" {
    mutate {
      add_field => { "loglevel" => "express" }
    }
  } else if [service_log_tmp] =~ "warn" {
    mutate {
      add_field => { "loglevel" => "warn" }
    }
  } else if [service_log_tmp] =~ "error" and [service_log_tmp] !~ "-express-" {
    mutate {
      add_field => { "loglevel" => "error" }
    }
  } else if [service_log_tmp] =~ "default-debug" {
    mutate {
      add_field => { "loglevel" => "default-debug" }
    }
  } else if [service_log_tmp] =~ "default-info" {
    mutate {
      add_field => { "loglevel" => "default-info" }
    }
  }

  # drop the temporary fields
  mutate {
    remove_field => ["path_tmp", "service_log_tmp", "project_tmp"]
  }
}

Can I still achieve all of this if I use Filebeat to ship the logs to Kafka and then use Kafka as an input to Logstash?
