LocalJumpError Logstash 2.3.0

Logstash 2.3.0
Windows Server 2008

If I start my Logstash shipper with this output configuration...

output {
    redis {
        host => ["redisserver1:6379", "redisserver2:6380"]
        shuffle_hosts => true
        data_type => "list"
        key => "logstash"
        batch => true
        batch_events => 100
        codec => json { charset => "ISO-8859-1" }
    }
}

I get this error:

Error encoding event {:exception=>#<LocalJumpError: unexpected next>, :event=>...

Without batch => true, everything is fine. Why can't I use batch?

Please upgrade to LS 2.3.1; there are a few known issues with 2.3.0.

Same in 2.3.1, but I found a workaround. You have to edit some code:
logstash-2.3.0/vendor/jruby/1.9/gems/logstash-output-redis-2.0.4/lib/logstash/outputs/redis.rb

At line 233, the original code is:

if @batch and @data_type == 'list' # Don't use batched method for pubsub.
  # Stud::Buffer
  buffer_receive(payload, key)
  next
end

You have to change next to return:

if @batch and @data_type == 'list' # Don't use batched method for pubsub.
  # Stud::Buffer
  buffer_receive(payload, key)
  return
end
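For context on why return works where next does not: in Ruby, next ends the current iteration of the enclosing block, so it is only meaningful when the code actually runs inside one. Executed in a plain method body, it raises LocalJumpError ("unexpected next"), which is presumably what changed in how 2.3 calls the plugin's receive method. A minimal standalone sketch of the two keywords (not the plugin code itself; receive_stub is an invented name):

```ruby
# `next` ends the current iteration of the *block* it appears in:
skipped = []
[1, 2, 3, 4].each do |n|
  next if n.odd?   # fine here: jumps to the next iteration of the block
  skipped << n
end
# skipped == [2, 4]

# In a plain method body there is no enclosing block to "next" out of,
# so the correct early exit is `return`, as in the patched plugin:
def receive_stub(batch)
  if batch
    return "buffered"   # exit the method early
  end
  "sent directly"
end
```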

Can you raise this against the plugin repo please?

I have a similar problem with Logstash 2.3.1 on Ubuntu. A config that works perfectly fine with 2.1.1 gives me this on 2.3.1:

:message=>"Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.", "exception"=>#<LocalJumpError: unexpected next>, "backtrace"=>["/appdata/logstash-2.3.1/vendor/local_gems/aea1f510/logstash-filter-ms_ctest_errorratio-0.1.0/lib/logstash/filters/ms_ctest_errorratio.rb:45:in `filter'", "/appdata/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.1-java/lib/logstash/filters/base.rb:151:in `multi_filter'", "org/jruby/RubyArray.java:1613:in `each'", "/appdata/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.1-java/lib/logstash/filters/base.rb:148:in `multi_filter'", "(eval):190:in `filter_func'", "/appdata/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.1-java/lib/logstash/pipeline.rb:267:in `filter_batch'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:852:in `inject'", "/appdata/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.1-java/lib/logstash/pipeline.rb:265:in `filter_batch'", "/appdata/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.1-java/lib/logstash/pipeline.rb:223:in `worker_loop'", "/appdata/logstash-2.3.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.1-java/lib/logstash/pipeline.rb:201:in `start_workers'"],

We're not using redis; we're using the elasticsearch output, but the error message doesn't indicate that the problem arises from there. Out of the dozen Logstash instances that we run on 2.3.1, all of which use the same elasticsearch plugin, this one is the only one to show this behaviour.
We have rolled back to 2.1.1 as a workaround.
The config is:

input {
    kafka {
        # some kafka config
    }
}

filter {
    date {
        match => [ "[header][time]", "UNIX" ]
    }

    if [metric][name] =~ /api/ {
        drop {}
    }

    ruby {
        code => '
            splitted = event["[metric][name]"].split("-")
            event["c_test"] = splitted.pop()
            event["method"] = splitted.join("_")
        '
    }

    mutate {
        remove_field => ["host", "version", "path", "message", "@version", "set_fields", "@set_fields", "[header][time]", "unknown_fields", "[metric][name]", "@version"]
        add_field => { "source" => "server" }
    }

    ms_ctest_errorratio {
        # config for some custom plugin that we wrote to calculate error ratios
    }

    if "miss_ratio" in [tags] {
        mutate {
            add_field => { "source" => "ratio_calculator" }
        }
    }
}

output {
    elasticsearch {
        hosts => ["10.2.3.189", "10.2.3.198", "10.2.3.199", "10.2.3.223", "10.2.3.224"]
        index => "java-ctests-ms-%{+YYYY.MM.dd}"
        document_type => "%{source}"
        flush_size => 5000
    }
}
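Note that the backtrace points at line 45 of your custom plugin's filter method (ms_ctest_errorratio.rb:45), not at the elasticsearch output, so the same next-to-return fix likely applies there. A hypothetical sketch of the pattern (the real plugin code isn't in this thread; the guard condition and field names are assumptions):

```ruby
# Hypothetical filter method that used `next` to skip unwanted events.
# Under 2.3's multi_filter, `filter` is invoked as a plain method, so
# `next` raises LocalJumpError; `return` is the correct early exit.
def filter(event)
  # was: next if event["metric"].nil?   # LocalJumpError in 2.3
  return nil if event["metric"].nil?    # exit the method instead
  event["processed"] = true             # ... rest of the filter logic
  event
end
```

Here the event is modelled as a plain hash for illustration; in a real filter plugin you would use the event API and call filter_matched(event) before returning.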