Creating tags in logstash.conf


(Tim Dunphy) #1

Continuing the discussion from Can't create logstash indexes in elasticsearch:


(Tim Dunphy) #2

I'm using logstash with Pivotal Cloud Foundry. So far it seems to be going well! All the logs from the PCF "Loggregator" are being funneled into logstash and are turning up in kibana!

But now I need to tag each application that launches within PCF.

In the logs for each app inside of PCF, there is an identifier called 'app_id' that contains a long UUID value. I'm trying to create a grok filter that creates a tag for each application based on that value. However, it's not working. Here's what I've tried putting in my logstash.conf file:

grok {
  match => { "app_id" => "de425601-bb64-4c55-9278-c811d8bdbeb1" }
  add_field => { "type" => "showdb_acceptance" }
}

When I do a search such as type:'showdb_acceptance', nothing turns up in the results.

I'm not sure where I'm going wrong in the above syntax. Normally, were this not a PCF setup, I would just install logstash-forwarder onto each node I wanted to pull logs from, and set up the lumberjack.conf file to specify which logs to pull and how to tag them.

This approach doesn't work with PCF because you have no ssh access to the VMs. In PCF, all logs are available from a central access point called the 'Loggregator'.

So I'm forced to take the approach of setting up the tags I want in the main logstash.conf for logstash itself.

Below is the entire logstash.conf file that I'm using to pull logs from PCF; the section I'm having a problem with is the final grok block in the filter:

input {
  tcp {
    port => 5000
    type => "syslog"
  }
  udp {
    port => 5000
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:syslog5424_ver} +(?:%{TIMESTAMP_ISO8601:syslog5424_ts}|-) +(?:%{HOSTNAME:syslog5424_host}|-) +(?:%{NOTSPACE:syslog5424_app}|-) +(?:%{NOTSPACE:syslog5424_proc}|-) +(?:%{WORD:syslog5424_msgid}|-) +(?:%{SYSLOG5424SD:syslog5424_sd}|-|) +%{GREEDYDATA:syslog5424_msg}" }
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
    grok {
      match => { "app_id" => "de425601-bb64-4c55-9278-c811d8bdbeb1" }
      add_field => { "type" => "showdb_acceptance" }
    }
  }
}

output {
  elasticsearch {
    host => "10.10.10.2"  # <-- fake IP
    embedded => false
    cluster => "optl_elasticsearch"
  }

  stdout { codec => rubydebug }
}

That config passes the logstash configtest, and logstash seems to have no problem with it. But I'm wondering what I can change to make the tags I'm trying to create searchable. Can you offer some pointers on how to do this?

Thanks!
Tim


(Rafał Trójniak) #3

Hello,

The config passes configtest if it is syntactically correct, not necessarily if it works as you expect.
It looks to me like you are using the grok filter the wrong way.

Can you please show an example input event, and the event you want to get after it is parsed?
Can you tell us where the 'app_id' event field should come from?

If you want to create a tag for each event with that field set to some value, I would expect something like:

if [app_id] == "Your_UUID_Here" {
  mutate {
    add_tag => [ "Your_Tag_Here" ]
  }
}
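
For context, here is a sketch of where that conditional could sit relative to your existing syslog filter. The UUID and tag name are placeholders you would replace; note that an [app_id] field must already exist on the event for the comparison to fire:

```
filter {
  # ... your existing syslog grok / date / mutate filters ...

  # Tag events whose app_id field matches a given application UUID.
  if [app_id] == "Your_UUID_Here" {
    mutate {
      add_tag => [ "Your_Tag_Here" ]
    }
  }
}
```

The tag is appended to the event's `tags` array field, so in kibana you would search for it with tags:Your_Tag_Here rather than as a bare term.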

(Tim Dunphy) #4

Hello! Thanks for your input!

So this is what I've tried:

if [app_id] == "de425601-bb64-4c55-9278-c811d8bdbeb1" {
  mutate {
    add_tag => [ "showdb-acceptance" ]
  }
}

However when I try to run a search with just "showdb-acceptance" in double quotes I get a message stating:

'showdb-acceptance' (2246062) count per 10m | (2246062 hits)

But no logs appear in the search results. And I see this error:

Oops! SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]

Can someone point out what I'm doing wrong?

Also, what I'm trying to achieve is something I've been able to do in the past using a lumberjack config. For example, if I point the lumberjack config to a php error file using this syntax:

"paths": [
"/var/log/httpd/jf_php_error.log"
],
"fields": { "type": "php" }
},

I am able to execute searches in the kibana interface for logstash using this

type:'php'

and get results. So maybe I'm using the wrong terminology. How can I recreate this effect in the logstash.conf?
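
In other words, I think what I want is something like the following sketch (just my guess at the syntax; the UUID and the showdb_acceptance value are from my earlier attempt). Since the input block already sets type to "syslog", I'm using mutate's replace rather than add_field, so the type isn't turned into an array:

```
filter {
  # Mimic the lumberjack "fields": { "type": "php" } behavior:
  # give events from a specific PCF application their own type.
  if [app_id] == "de425601-bb64-4c55-9278-c811d8bdbeb1" {
    mutate {
      replace => [ "type", "showdb_acceptance" ]
    }
  }
}
```

With that in place, a kibana search for type:showdb_acceptance should behave like the type:'php' search did in my lumberjack setup.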

Thanks,
Tim


(Tim Dunphy) #5

I restarted all 3 elasticsearch nodes as well as logstash, and for some reason that made the error I was getting earlier go away:

Oops! SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]

That message disappeared after the restarts.

But now, when I do a search for the term I set up, "showdb-acceptance", I get no results, even though the application is logging correctly.

And if I do a search for the application ID that I previously tried to create the tag for, results do turn up.

This is the tag I tried to set up:

if [app_id] == "de425601-bb64-4c55-9278-c811d8bdbeb1" {
  mutate {
    add_tag => [ "showdb-acceptance" ]
  }
}

Here's the whole config for a little more context. I hope this can help!

input {
  tcp {
    port => 5000
    type => "syslog"
  }
  udp {
    port => 5000
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:syslog5424_ver} +(?:%{TIMESTAMP_ISO8601:syslog5424_ts}|-) +(?:%{HOSTNAME:syslog5424_host}|-) +(?:%{NOTSPACE:syslog5424_app}|-) +(?:%{NOTSPACE:syslog5424_proc}|-) +(?:%{WORD:syslog5424_msgid}|-) +(?:%{SYSLOG5424SD:syslog5424_sd}|-|) +%{GREEDYDATA:syslog5424_msg}" }
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
  }
  if [app_id] == "de425601-bb64-4c55-9278-c811d8bdbeb1" {
    mutate {
      add_tag => [ "showdb-acceptance" ]
    }
  }
}

output {
  elasticsearch {
    host => "10.10.10.2"  # <-- fake IP
    embedded => false
    cluster => "optl_elasticsearch"
  }

  stdout { codec => rubydebug }
}

What I'd like to do is make it possible to search by the term "showdb-acceptance".
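
One thing I'm starting to suspect (just a guess, since I haven't inspected a raw event yet): nothing in the filter above ever creates an [app_id] field — the grok pattern only extracts syslog5424_* fields — so the conditional may never match. If that's the case, I'd first have to pull the app ID out of the parsed message, maybe something like:

```
filter {
  # Hypothetical: extract a UUID-shaped app_id from the syslog APP-NAME
  # value that the RFC 5424 grok pattern already captured. Whether the
  # app GUID actually lives in syslog5424_app depends on the format
  # Loggregator emits.
  grok {
    match => { "syslog5424_app" => "%{UUID:app_id}" }
    tag_on_failure => []
  }

  if [app_id] == "de425601-bb64-4c55-9278-c811d8bdbeb1" {
    mutate {
      add_tag => [ "showdb-acceptance" ]
    }
  }
}
```

And then, since add_tag writes into the `tags` array, I'd search kibana for tags:"showdb-acceptance" rather than the bare term.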

Thanks,
Tim

