[SOLVED] Elasticsearch output filtering by tags... now failing for a new index

Hi forum...

I'm puzzled by several issues here... but this one is notably strange:
I'm indexing into different Elasticsearch indices, filtering by tag... This has been working well (till now).
Here's my output config file that routes by tag:

output {
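  # Only index events that parsed cleanly, routing each one to a per-service index by tag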

  if "_grokparsefailure" not in [tags] {
    if "syslogmatch" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-syslog-%{+YYYY.MM.dd}"
      }
    } else if "pfsensematch" in [tags] { 
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-pfsense-%{+YYYY.MM.dd}"
      }
    } else if "suricatamatch" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-suricata-%{+YYYY.MM.dd}"
      }
    } else if "shoutcastmatch" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-shoutcast-%{+YYYY.MM.dd}"
      }
    } else if "icecastmatch" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-icecast2-%{+YYYY.MM.dd}"
      }
    } else if "wowzamatch" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-wowza-%{+YYYY.MM.dd}"
      }
    } else if "haproxymatch" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-haproxy-%{+YYYY.MM.dd}"
      }
    } else if "iptables" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-iptables-%{+YYYY.MM.dd}"
      }
    } else if "centovacastmatch" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-centovacast-%{+YYYY.MM.dd}"
      }
    } else if "errorice2match" in [tags] {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-errorice2-%{+YYYY.MM.dd}"
      }
    } else {
      elasticsearch {
        host => "172.16.0.253"
        cluster => "ICCBroadcast"
        index => "logstash-bulk-%{+YYYY.MM.dd}"
      }
    }
  }
}

ALL "routes" have being working well sice I assembled the cluster this summer....
Note I use a "last resort" bulk index to catch any eventual uncategorized doc. Since I drop on _grokparsefailure in all filters, and I tag EVERYTHING, the bulk index has always been empty... TILL NOW
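For reference, the drop-on-_grokparsefailure policy I mention looks roughly like this in each filter file (a simplified sketch; the grok pattern and field name here are just placeholders, not my real Icecast2 error pattern):

filter {
  grok {
    # On a successful match, tag the event so the output section can route it
    match => [ "message", "%{GREEDYDATA:icecast2_error}" ]
    add_tag => [ "errorice2match" ]
  }
  # Events that no pattern matched get the _grokparsefailure tag;
  # dropping them here is why the bulk index normally stays empty
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}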

The problem appeared when I needed to parse the error log file of an Icecast2 server. So I created a working filter and added it to the setup... as always... and everything had been running fine until this point...

My new icecast2error docs are landing in the bulk index.
But the tag "errorice2match", which should route them, IS PRESENT.
In fact, all grokking and routing goes well... until the last step.

Here I paste a doc read directly from the "bulk" index:

{ "_index": "logstash-bulk-2015.10.08","_type": "icecast2error-log","_id": "AVBJ0kNmHx3Eulx7fZje","_version": 1,"_score": 1,"_source": { "@version": "1","@timestamp": "2015-10-08T23:38:10.000Z","type": "icecast2error-log","logfile": "error.log","host": "OVHrelayer2","path": "/var/log/icecast2/error.log","service": "audio_transport","server_service": "icecast2","server_name": "OVHrelayer2","server_type": "transport","server_domain": "streaming-pro.com","tags": [ "ErrorIce2","errorice2match"],"timestamp": "2015-10-09 01:38:10","clientID": 10111,"clientip": "87.123.231.123","geoip": { "ip": "87.123.231.123","country_code2": "DE","country_code3": "DEU","country_name": "Germany","continent_code": "EU","region_name": "07","city_name": "Greven","latitude": 52.099999999999994,"longitude": 7.616700000000009,"timezone": "Europe/Berlin","real_region_name": "Nordrhein-Westfalen","location": [ 7.616700000000009,52.099999999999994],"coordinates": [ 7.616700000000009,52.099999999999994]}}

As you can see, the filtering tag "errorice2match" is present on the doc... but the routing was completely ignored... the doc fell through to the last else and landed in the bulk index.
Everything else WORKS... as usual...
Any clue on this please?

Thanks and regards!

Solved:
It turned out to be a "rogue" Logstash process.
This is something I had read about somewhere, and it puzzled me this time.

Since I keep a periodic restart policy in order to keep everything up... somehow the init.d script was watching a wrong/missing/obsolete PID.
The rogue process, which was still running the former setup, was grabbing the connection to the Redis server, so it processed new docs with the old routing.

By killing it and running the init.d start script, the new/legitimate process loaded the new setup and worked as expected.

Hope this helps others!
