Extract integer from syslog

Hi there,

I have a basic syslog input/filter/output that's working fine to ship stuff to Elasticsearch and on to Kibana. Occasionally, though, there'll be a line in the syslog from a specific system like:

Sep  8 16:35:01 captiveportalhost captiveportal[24715]: [notice] There are currently 0 people logged into captivegateway

I'd like Logstash to take the number of people in that line and store it as its own field so I can later use that to build a graph in Kibana. Can anyone give me some pointers on how to achieve this?

At present my config looks like:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

Any help gratefully received! I'm very new to this and finding it a bit hard to get going!

Luke

Was that a copy/paste mistake or do you actually have the same configuration block twice?

This is an easy job for a grok filter.

filter {
  grok {
    match => { "syslog_message" => "There are currently %{INT:logged_in_count:int} people logged into captivegateway" }
  }
}
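
One thing to watch for: since most of your syslog lines won't contain that phrase, this grok will tag all the non-matching events with _grokparsefailure. A minimal way around that (assuming the log message text stays stable) is to guard the filter with a conditional:

filter {
  if "people logged into captivegateway" in [syslog_message] {
    grok {
      match => { "syslog_message" => "There are currently %{INT:logged_in_count:int} people logged into captivegateway" }
    }
  }
}

The :int suffix tells grok to store logged_in_count as an integer rather than a string, which is what you want for graphing in Kibana.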

Cheers for the answer, I'll give it a go... and yes, it was a copy/paste mistake... apologies!