Catch Memcached Filter exceptions

Hi,

I'm trying to connect to a memcached server via the memcached filter. The problem is that the pipeline fails if the server is not available, which is not acceptable.
I couldn't find anything to handle the failure, so I tried to do it with a Ruby filter (this is the first time I have written anything in Ruby, so I do not know the language):

ruby {
    code => "
        require 'dalli'
        options = { :expires_in => 0 }
        Dalli::Client.new(['server:11211'], options).tap do |client|
            client.alive!
        end
    "
}
if '_rubyexception' not in [tags] {
    memcached {
        hosts => ["server:11211"]
        get => {"domain-%{[url][domain]}" => "[misp_src]"}
    }
    if ![misp_src] {
        mutate {
            add_field => {"[misp_src]" => "none"}
        }
    }
}
else {
    mutate {
        add_field => {"[misp_src]" => "none"}
    }
}

I get the following errors in the Logstash docker logs:

[2020-04-03T15:04:41,953][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
[2020-04-03T15:04:41,953][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
[2020-04-03T15:04:41,961][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
W, [2020-04-03T15:04:42.074478 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: IO timeout: {:host=>"server", :port=>11211, :down_retry_delay=>60, :socket_timeout=>2.0, :socket_max_failures=>2, :socket_failure_delay=>0.01, :value_max_bytes=>1048576, :error_when_over_max_size=>false, :compressor=>Dalli::Compressor, :compression_min_size=>1024, :compression_max_size=>false, :serializer=>Marshal, :keepalive=>true, :sndbuf=>nil, :rcvbuf=>nil}
[2020-04-03T15:04:42,086][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
W, [2020-04-03T15:04:42.106821 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired
W, [2020-04-03T15:04:42.117425 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired
[2020-04-03T15:04:42,120][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
W, [2020-04-03T15:04:42.128311 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired
[2020-04-03T15:04:42,128][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
W, [2020-04-03T15:04:42.133284 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired
[2020-04-03T15:04:42,139][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
[2020-04-03T15:04:42,144][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
W, [2020-04-03T15:04:42.145689 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired
W, [2020-04-03T15:04:42.145865 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired
W, [2020-04-03T15:04:42.146935 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired
[2020-04-03T15:04:42,156][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
[2020-04-03T15:04:42,157][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
[2020-04-03T15:04:42,157][ERROR][logstash.filters.ruby    ] Ruby exception occurred: No server available
W, [2020-04-03T15:04:42.158384 #1]  WARN -- : server:11211 failed (count: 0) Timeout::Error: execution expired

Moreover, when I launch the pipeline, the memcached server seems to be overloaded. Without the Ruby code (just the memcached filter) it works well, but then I don't handle the exception.

Thanks for any help,
Maxime

The memcached filter connects when it is initialized, not when it processes an event. It never attempts to reconnect, which is a known issue.

Your approach might work to avoid calling a memcached filter that failed to connect, but the logs are going to be noisy unless you catch the exception and add a tag to the event (i.e. set your own tag, do not rely on _rubyexception).
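
Something along these lines might work (untested sketch; the tag name `_memcached_down` is my own choice, and rescuing `Dalli::RingError` is an assumption based on the "No server available" message Dalli raises):

```
ruby {
    init => "require 'dalli'"
    code => "
        begin
            # Probe the server; alive! raises if no server responds.
            client = Dalli::Client.new(['server:11211'], { :expires_in => 0 })
            client.alive!
            client.close
        rescue Dalli::RingError
            # Set our own tag instead of letting the ruby filter
            # add the generic _rubyexception tag and log an ERROR.
            event.tag('_memcached_down')
        end
    "
}
if '_memcached_down' in [tags] {
    mutate {
        add_field => { "[misp_src]" => "none" }
    }
}
```

Rescuing the exception yourself keeps the ERROR lines out of the Logstash logs, and a dedicated tag lets you distinguish "memcached is down" from any other Ruby failure in the pipeline.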

Thanks for the answer.

I understand better now why the memcached server is overloaded with the ruby filter: Logstash tries to connect for each event, whereas with the memcached filter the connection happens only once. So does that mean that if a value is set in the memcached server after Logstash connected, it will never be seen by the filter?

Moreover, my problem is that I need something that prevents Logstash from crashing at startup if the memcached server is down. Is there a way to do that?

I see no reason to believe that. I would expect that if the cache is updated, the old connection will see the new data.
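
A quick way to convince yourself, assuming you can run this against a reachable memcached instance (the hostname and key here are just placeholders):

```ruby
require 'dalli'

# Two independent connections to the same memcached server.
old_client = Dalli::Client.new('server:11211')
new_client = Dalli::Client.new('server:11211')

# Write through the newer connection...
new_client.set('domain-example.com', 'misp')

# ...and the connection opened earlier still sees the value,
# because the data lives on the server, not in the client.
puts old_client.get('domain-example.com')
```

The client object only holds a socket; every get goes to the server, so anything written after the connection was opened is still visible through it.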