Indexing custom fields from grok filter

Good morning / afternoon / evening,

I've created a custom grok pattern and run the message through the pattern debugger to make sure the named captures are being pulled correctly. I'm currently defining the following:

Input:
file {
  path => "/var/log/remotelogs/192.168.52.14.log"
  type => "cisco-ap"
  start_position => "beginning"
}

Filter:
if [type] == "cisco-ap" {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IP:ap_ip} %{BASE10NUM:event_no}: AP:%{CISCOMAC:ap_mac}: *%{CISCOTIMESTAMP:ap_time}: %%{CISCOTAG:event_type}: %{GREEDYDATA:message}" }
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{host}" ]
    add_field => [ "tf_grok", "grokAdded" ]
  }
  mutate {
    add_field => [ "ap_mac", "%{ap_mac}" ]
    add_field => [ "tf_mutate", "mutateAdded" ]
  }
  syslog_pri { }
}

Output:
elasticsearch {
  index => "%{type}-%{+YYYY.MM.dd}"
}

The data I'm matching against is:
Sep 1 00:44:23 192.168.57.19 234: AP:6400.f15e.7c16: *Sep 1 05:44:22.233: %LWAPP-4-CLIENTEVENTLOG: OfficeExtend Localssid saved in AP flash

http://grokdebug.herokuapp.com/ shows all my named fields as being mapped. I added two test fields, tf_grok and tf_mutate, to see whether data was being added at each point in the filter.

When viewing the index through a variety of ways, the mapped fields I would like to pull out are not included, and they also don't show up via curl localhost:9200/cisco-ap-2016.09.01/_mapping?pretty

I'm sure it's something simple I need to do to get the fields into the index, but I can't see what. Does anyone have a few ideas to try?

Thanks!

To reach the right people, please edit your post and move it from the Elasticsearch category to the Logstash category.

When debugging, don't push straight to ES. Always use a stdout { codec => rubydebug } output until the events look as expected.
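As a temporary debugging output, something like this is enough (shown as a complete output block for illustration):

output {
  # Print every event to the console so you can see exactly which fields made it through the filters.
  stdout { codec => rubydebug }
}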

match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IP:ap_ip} %{BASE10NUM:event_no}: AP:%{CISCOMAC:ap_mac}: *%{CISCOTIMESTAMP:ap_time}: %%{CISCOTAG:event_type}: %{GREEDYDATA:message}" }

Keep in mind that * is a regexp metacharacter that you need to escape. I'm also not completely sure if %%{CISCOTAG:event_type} will work. You may have to escape the first percentage sign.
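For illustration, the match might end up something like this (only the asterisk is escaped here; whether the leading percent sign also needs a backslash is something to verify in the grok debugger):

# The backslash makes grok treat the asterisk before the AP timestamp as a literal "*".
match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IP:ap_ip} %{BASE10NUM:event_no}: AP:%{CISCOMAC:ap_mac}: \*%{CISCOTIMESTAMP:ap_time}: %%{CISCOTAG:event_type}: %{GREEDYDATA:message}" }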

index => "%{type}-%{+YYYY.MM.dd}"

I don't recommend naming your index series after the type field like this:

  • It's hard to create an index template that matches all your log indexes (should you want to do that) unless you're okay with the template matching every index.
  • Sooner or later you'll pick up events of various types and end up with a bunch of index series. Since every shard has a fixed overhead, that can add up to a lot of overhead.
  • A mistake in a filter (or a type field that's dynamic) will lead to the creation of many indexes.
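If everything in this pipeline really is the same kind of event, a fixed index prefix sidesteps all of that (the name below is just an example); the type field is still stored on every document, so you can filter on it at query time:

elasticsearch {
  # Fixed prefix rather than a value taken from the event, so only one index series is created.
  index => "cisco-ap-%{+YYYY.MM.dd}"
}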

Thanks for the help! I've modified our Output to:

output {
  stdout { codec => ruby-debug }
  elasticsearch { index => "%{type}-%{+YYYY.MM.dd}" }
}

I see messages appearing in the Elasticsearch index, but there is no output to the console.

rubydebug, not ruby-debug (although with such an error in the config I wouldn't expect Logstash to start up).

I've tried rubydebug as well and there is no console output. Using a file output, however, does result in the output being directed to a file.
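For reference, the file output I'm using for debugging looks roughly like this (the path is arbitrary):

file {
  # Write each event to a local file in the same readable form rubydebug would print.
  path => "/tmp/logstash-debug.log"
  codec => rubydebug
}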