High CPU on logstash cluster


(Nikhil Pawar) #1

Hi,

I am struggling with high CPU on a Logstash cluster. I have 4 nodes, each with 8 CPUs and 8 GB of memory. Memory usage sits at around half, but CPU is always above 90%.

Below is my Logstash config:

input {
  beats {
    client_inactivity_timeout => 86400
    port => 5044
  }
}
filter {
  mutate {
    gsub => [
      # replace all forward slashes with underscore
      #"fieldname", "/", "_",
      # replace backslashes, question marks, hashes, and minuses
      # with a dot "."
      #"fieldname2", "[\\?#-]", "."
      "message", "\t", " ",
      "message", "\n", " "
    ]
  }
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp_match}\]%{SPACE}\:\|\:%{SPACE}%{WORD:level}%{SPACE}\:\|\:%{SPACE}%{USERNAME:hostname}%{SPACE}\:\|\:%{SPACE}%{GREEDYDATA:coidkey}%{SPACE}\:\|\:%{SPACE}%{GREEDYDATA:clientinfo}%{SPACE}\:\|\:%{SPACE}(%{IP:clientip})?%{SPACE}\:\|\:%{SPACE}%{GREEDYDATA:Url}%{SPACE}\:\|\:%{SPACE}%{JAVACLASS:class}%{SPACE}\:\|\:%{SPACE}%{USER:ident}%{SPACE}%{GREEDYDATA:msg}"}
  }
}
output {
  stdout { codec => rubydebug }

  if "_grokparsefailure" in [tags] {
    # write events that didn't match to a file
    file { path => "/tmp/grok_failures.txt" }
  } else {
    elasticsearch {
      hosts => "dfsyselastic.df.jabodo.com:9200"
      user => "UN"
      password => "PW"
      index => "vicinio-%{+YYYY.MM.dd}"
      document_type => "log"
    }
  }
}

(Christian Dahlqvist) #2

Having multiple GREEDYDATA or DATA patterns in the middle of a grok expression can be very inefficient and use a lot of CPU. I would recommend trying to replace the ones you can with more targeted patterns, e.g. NOTSPACE.
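As a sketch of what that replacement might look like (assuming, say, that the Url field never contains spaces; whether each field qualifies depends on the actual log data):

```
# before: GREEDYDATA (.*) consumes to end of line, then backtracks
# repeatedly to find the :|: separator
%{GREEDYDATA:Url}%{SPACE}\:\|\:

# after: NOTSPACE (\S+) stops at the first whitespace, so the
# separator is found without backtracking
%{NOTSPACE:Url}%{SPACE}\:\|\:
```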


(Nikhil Pawar) #3

Thanks Christian for the advice. I have tried a few different patterns, but most of them fail. My grok is working 100% at the moment. Can you please advise which field I can change in the grok? My input looks like:

[2017-06-16 11:25:02,111]  :|:  INFO   :|:  lvprdsndfe1.lv.jabodo.com  :|:  fd5dd89c4a434df9acd480c51fcbe  :|:  [BT:OTHER, CC:99]  :|:  10.90.53.181  :|:  http://free.couponxplorer.com/index.jhtml  :|:  c.m.w.d.m.UnifiedLoggerWrapper  :|:   - [ET: SplashPageServed, IP: 10.90.53.181]
[2017-06-16 11:25:02,184]  :|:  INFO   :|:  lvprdsndfe1.lv.jabodo.com  :|:  59a0d92a2a404e467b972f555b046  :|:  [BT:OTHER, CC:99]  :|:  10.90.53.181  :|:  http://www.translationbuddy.com/index.jhtml  :|:  c.m.w.d.m.UnifiedLoggerWrapper  :|:   - [ET: SplashPageServed, IP: 10.90.53.181]
[2017-06-16 11:25:02,186]  :|:  INFO   :|:  lvprdsndfe1.lv.jabodo.com  :|:  12301ef5d74a4911971dddfdb72c  :|:  [BT:OTHER, CC:99]  :|:  10.90.53.181  :|:  http://www.radiorage.com/index.jhtml  :|:  c.m.w.d.i.LocaleInterceptor  :|:   - Error setting the locale based on the model:
[2017-06-16 11:25:02,269]  :|:  INFO   :|:  lvprdsndfe1.lv.jabodo.com  :|:  b4cfe445a551436f9ed86d5b724bf  :|:  [BT:OTHER, CC:99]  :|:  10.90.53.181  :|:  http://free.fromdoctopdf.com/index.jhtml  :|:  c.m.w.d.i.LocaleInterceptor  :|:   - Error setting the locale based on the model:

(Magnus Bäck) #4

As Christian said, try NOTSPACE instead of DATA or GREEDYDATA. If that doesn't work, debug it systematically by using the simplest possible expression (\[%{TIMESTAMP_ISO8601:timestamp_match}\]) and building incrementally from there.
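That incremental approach can be sketched as follows, testing the match against a sample line at each step (for example in the Kibana Grok Debugger):

```
# step 1: simplest possible expression -- timestamp only
match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp_match}\]" }

# step 2: append the next field and separator; repeat, one field at a
# time, until the match breaks -- the last addition is the culprit
match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp_match}\]%{SPACE}\:\|\:%{SPACE}%{WORD:level}" }
```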


(Nikhil Pawar) #5

Replacing %{GREEDYDATA:coidkey} with %{DATA:coidkey} (that field always arrives in a format like fd5dd89c4a434df9acd480c51fcbe) solved the issue. Replacing the other GREEDYDATA patterns with NOTSPACE causes the grok filter to fail. Load has dropped from 90% to ~15%.
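For reference, the one-token change in the grok line from the original config:

```
# before: GREEDYDATA (.*) is greedy -- it consumes the rest of the
# line and backtracks to locate the :|: separator
%{SPACE}%{GREEDYDATA:coidkey}%{SPACE}\:\|\:

# after: DATA (.*?) is lazy -- it stops at the first point where the
# separator matches, avoiding the backtracking
%{SPACE}%{DATA:coidkey}%{SPACE}\:\|\:
```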

Thanks.


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.