Filtering question

I have the following output, as viewed in Kibana:

"_index": "filebeat-6.5.4-2019.01.07",
"_type": "doc",
"_id": "KoIlKWgBCwfgisRVoVxm",
"_version": 1,
"_score": null,
"_source": {
"offset": 159112,
"prospector": {
"type": "log"
"source": "/var/log/syslog",
"fileset": {
"module": "system",
"name": "syslog"
"input": {
"type": "log"
"@timestamp": "2019-01-07T16:29:11.000Z",
"system": {
"syslog": {
"hostname": "sentry01",
"pid": "9886",
"program": "ntpd",
"message": "Soliciting pool server",
"timestamp": "Jan 7 16:29:11"
"beat": {
"hostname": "",
"name": "",
"version": "6.5.4"
"host": {
"name": ""
"fields": {
"@timestamp": [
"highlight": {
"system.syslog.hostname": [
"sort": [

I want to filter this out, but despite efforts, can't seem to do so. Here's an attempt at it:

if [source] == "/var/log/syslog" and "Soliciting pool server" in [system][syslog][message] {
drop {

Can someone help me out?

Many thanks.

Are you OK with using

if [source] == "/var/log/syslog" and [system][syslog][message] =~ "Soliciting pool server"

Thanks for the response. I have no trouble using the above. Unfortunately, though, it doesn't work. The stuff I'm trying to filter out is still getting through. I seem to be having a problem filtering out anything using fields with the pattern x.y.z

Using periods in field names is unsupported. It works most of the time, but some things will break. So having a field called system.syslog.hostname is not good.
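If the fields really do arrive with literal dots in their names (rather than Kibana merely displaying nested fields in dotted form, which is common), the de_dot filter can rewrite them into nested fields. A sketch, assuming the logstash-filter-de_dot plugin is installed:

filter {
  de_dot {
    # Rewrite literal dotted keys such as "system.syslog.message"
    # into nested fields addressable as [system][syslog][message]
    nested => true
  }
}

Note that de_dot walks every field on every event, so it is best scoped with a conditional or a fields list if performance matters.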

Are you certain the system object has been parsed at the point where you are testing it?

Does the following work for you?

input { generator { count => 1 message => '{ "system": { "syslog": { "hostname": "sentry01", "pid": "9886", "program": "ntpd", "message": "Soliciting pool server", "timestamp": "Jan 7 16:29:11" } } }' } }
filter { json { source => "message" } }
filter { if [system][syslog][message] =~ "Soliciting pool server" { mutate { add_tag => [ "ItMatched" ] } } }
output { stdout { codec => rubydebug } }


This is exactly how the field looks in Kibana: system.syslog.message (there are others of the same format). I guess that's the way Filebeat presents the field.

Apologies, but where/how do I run the code you just provided?

Put it into /tmp/simple.conf and run logstash using something like

/usr/share/logstash/bin/logstash -f /tmp/simple.conf --path.settings /etc/logstash

Apparently, it does work:

root@elk01:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f /tmp/simple.conf --path.settings /etc/logstash
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-01-08T08:25:13,764][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-01-08T08:25:13,776][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-08T08:25:14,570][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-01-08T08:25:14,603][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6d941996@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:157 sleep>"}
[2019-01-08T08:25:14,616][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-08T08:25:14,673][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
{
    "@timestamp" => 2019-01-08T13:25:14.611Z,
          "host" => "",
          "tags" => [
        [0] "ItMatched"
    ],
       "message" => "{ \"system\": { \"syslog\": { \"hostname\": \"sentry01\", \"pid\": \"9886\", \"program\": \"ntpd\", \"message\": \"Soliciting pool server\", \"timestamp\": \"Jan 7 16:29:11\" } } }",
        "system" => {
        "syslog" => {
              "program" => "ntpd",
                  "pid" => "9886",
             "hostname" => "sentry01",
              "message" => "Soliciting pool server",
            "timestamp" => "Jan 7 16:29:11"
        }
    },
      "sequence" => 0,
      "@version" => "1"
}
[2019-01-08T08:25:14,822][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x6d941996@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:157 run>"}

But now, having gone through this, what does it mean in terms of getting my filter to work?

Can you show us your logstash config? We have just shown that the "if" works in your version of the software, yet it does not match your events. That suggests the data in logstash, at the point where the "if" runs, does not look like the data you see in kibana, i.e. some transformation in the logstash configuration matters.

Something else that might help would be to show us the output of your data using "output { stdout { codec => rubydebug } }". That would make it clear if one of the structures in [system][syslog][message] is actually an array. I cannot remember if that shows up clearly in kibana.
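One way to capture that without flooding your existing outputs is a temporary conditional stdout block. A sketch; the [source] test is an assumption about how your events are tagged, so adjust it to match:

output {
  # Temporary debug output: print only the syslog events in full,
  # so we can see the exact field structure at the end of the pipeline.
  if [source] == "/var/log/syslog" {
    stdout { codec => rubydebug }
  }
}

Remove the block again once you have captured a sample event.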

Sure. I have three config files - input, filter, and output. You can find them here: . A warning that the filter config is very long. And, thanks for sticking with this! I very much appreciate it.

1. if [source] == "/var/log/syslog" and [system][syslog][message] =~ "Soliciting pool server" {
2. drop {
3. }
4. }
5. }

5 closes the filter. 3 closes the if. What does 4 close? I can't figure it out, but it seems to be inside another if, possibly an unintended one.

Yeah, the filter config is a bit messy, but it does work (for the most part :) ). That last } closes the first filter. I rejiggered the file to add a } to the first filter (used for geoip) and removed the one at the very end of the file (after the filter that I'm trying to make work). Alas, it still doesn't work. It's driving me absolutely crazy!
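For reference, the intended shape is one balanced filter block per concern, so a stray } cannot silently swallow a later conditional. A sketch; the geoip contents here are a placeholder for whatever the first filter actually does:

filter {
  geoip {
    source => "clientip"   # placeholder for the existing geoip config
  }
}

filter {
  if [source] == "/var/log/syslog" and [system][syslog][message] =~ "Soliciting pool server" {
    drop { }
  }
}

Splitting each concern into its own filter block makes brace mismatches much easier to spot, and Logstash concatenates the blocks in order anyway.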

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.