Logstash java.lang.UnsupportedOperationException

I am writing logs from nxlog to Logstash over TCP, but I get exceptions, for example: 'Exception in thread ">elasticsearch.4" java.lang.UnsupportedOperationException'.
I tried reducing flush_size in the Logstash config and increasing LS_HEAP_SIZE to 1024 MB, but neither helped.
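For reference, I set the heap like this (assuming the Debian/Ubuntu package layout; the file location may differ for other installs):

# /etc/default/logstash -- assumed path for a Debian/Ubuntu package install
# Logstash 1.x reads LS_HEAP_SIZE to size the JVM heap (-Xmx) at startup
LS_HEAP_SIZE=1024m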

Please show the full error message and stacktrace.

Next time, please copy/paste error messages instead of creating screenshots. It makes life easier for everyone, including yourself.

What's your configuration? Which JVM?

I'm sorry, I will correct it.

JVM:
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

Logstash config:

input {
  tcp {
    codec => "json"
    type  => "iis"
    port  => 524
  }

  redis {
    host      => "127.0.0.1"
    data_type => "list"
    port      => "6379"
    key       => "logstash:redis"
    threads   => 10
    password  => "*****"
  }
}

filter {
  if [type] == "iis" {
    mutate {
      rename    => [ "cs-host", "item" ]
      add_field => { "cs-host" => "%{item}" }
    }
  }
}

output {
  stdout { }
  elasticsearch {
    host               => "127.0.0.1"
    port               => 9200
    protocol           => "http"
    index              => "logstash-%{type}-%{item}-%{+YYYY.MM.dd}"
    document_type      => "%{type}"
    workers            => 100
    user               => ""
    password           => "*****"
    retry_max_interval => 2
    retry_max_items    => 5000
    flush_size         => 500
  }
}

This is the memory usage; the memory drain is obvious:

root@ubuntu:/usr/local/logstash/etc# free -m
             total       used       free     shared    buffers     cached
Mem:          7971       7731        239         15        257       3840
-/+ buffers/cache:        3633       4337
Swap:         1021          5       1016

Why have you defined 100 worker threads for the Elasticsearch output? The Elasticsearch plugin collects events in order to submit bulk requests, so having lots of workers here will result in a lot of memory being used. The number of workers is rarely the limiting factor, so please reset it to the default value of 1 and see if that has any impact on memory usage. Then increase it slowly to find the optimal value.
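For instance, something like this (a sketch based on your config above, with only the settings relevant here shown; everything else stays as you had it):

output {
  elasticsearch {
    host       => "127.0.0.1"
    port       => 9200
    protocol   => "http"
    index      => "logstash-%{type}-%{item}-%{+YYYY.MM.dd}"
    workers    => 1      # the default; raise gradually while watching memory
    flush_size => 500
  }
}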

Also, which version of Logstash are you using?

Thank you for your reply. Logstash 1.5.4.