Elasticsearch (Painless) seems really weak at handling string changes. I have a script like this:
def newSegment;

// Merge params.data into the document: non-null values are written,
// null values remove the field.
if (params.data != null) {
    for (entry in params.data.entrySet()) {
        if (entry.getValue() != null) {
            ctx._source[entry.getKey()] = entry.getValue();
        } else {
            ctx._source.remove(entry.getKey());
        }
    }
}

if (params.segment != null) {
    // segment_ids is a pipe-delimited string such as "0|1|2";
    // a single space serves as the "empty" sentinel.
    if (ctx._source.segment_ids == null) {
        newSegment = ' ';
    } else {
        newSegment = ctx._source.segment_ids;
    }

    // Append each included id that is not already present.
    if (params.segment.include != null) {
        for (inc in params.segment.include) {
            if (!newSegment.contains(inc)) {
                if (newSegment == ' ') {
                    newSegment = newSegment.trim() + inc;
                } else {
                    newSegment = newSegment + '|' + inc;
                }
            }
        }
    }

    // Remove each excluded id, covering the four positions it can occupy:
    // in the middle, at the start, at the end, or as the only id.
    if (newSegment != ' ') {
        if (params.segment.exclude != null) {
            for (exc in params.segment.exclude) {
                def temp = '|' + exc + '|';
                if (newSegment.contains(temp)) {
                    newSegment = newSegment.replace(temp, '|');
                    continue;
                }
                temp = exc + '|';
                if (newSegment.contains(temp) && newSegment.indexOf(exc) == 0) {
                    newSegment = newSegment.replace(temp, '');
                    continue;
                }
                temp = '|' + exc;
                if (newSegment.contains(temp) && newSegment.indexOf(exc) == newSegment.length() - exc.length()) {
                    newSegment = newSegment.replace(temp, '');
                    continue;
                }
                if (newSegment.contains(exc) && newSegment.length() - exc.length() == 0) {
                    newSegment = newSegment.replace(exc, '');
                }
            }
        }
    }

    // An empty result means the field should be removed entirely.
    if (newSegment == ' ') {
        ctx._source.remove('segment_ids');
    } else {
        ctx._source.segment_ids = newSegment;
    }
}
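The script is stored ahead of time under the id used in the update body below. For context, I registered it with something like this (source abbreviated here):

PUT _scripts/cdp_upsert_v2
{
  "script": {
    "lang": "painless",
    "source": "def newSegment; if (params.data != null) { ... }"
  }
}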
And here is my update body:
{
  "script": {
    "id": "cdp_upsert_v2",
    "params": {
      "data": {
        "customer_id": "999999"
      },
      "segment": {
        "include": ["0","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18","19","20","21","22","23","24","25","26","27","28","29","30","31","32","33","34","35","36","37","38","39","40","41","42","43","44","45","46","47","48","49","50","51","52","53"],
        "exclude": ["0","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18","19","20","21","22","23","24","25","26","27","28","29","30","31","32","33","34","35","36","37","38","39","40","41","42","43","44","45","46","47","48","49","50","51","52","53","54","55","56","57","58","59","60","61","62","63","64","65","66","67","68","69","70","71","72","73","74","75","76","77","78","79","80","81","82","83","84","85","86","87","88","89","90","91","92","93","94","95","96","97","98","99","100","101","102","103","104","105","106","107","108","109","110","111","112","113","114","115","116","117","118","119","120","121","122","123","124","125"]
      }
    }
  }
}
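I send this body through the Update API, with the index and document id that appear in the response below, i.e. something like:

POST /v3_customers_33167/_update/33167:999999
{
  "script": {
    "id": "cdp_upsert_v2",
    "params": { ... }
  }
}

(params as in the full body above).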
Everything is fine when exclude has only a few elements, but when it has many elements, as in the example above, it returns a wrong result.
Here is the response when I fetch the document afterwards:
{
  "_index" : "v3_customers_33167",
  "_type" : "_doc",
  "_id" : "33167:999999",
  "_version" : 52,
  "_seq_no" : 16994624,
  "_primary_term" : 9,
  "found" : true,
  "_source" : {
    "customer_id" : "999999",
    "segment_ids" : "111111111122222222223333333333444444444455553"
  }
}
The segment_ids field should have been deleted, not left with a value like that. Is it because I call replace too many times? The include part is still fine no matter how many elements it has.
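To narrow this down, the exclude loop can be run on its own through the Painless execute API. This is just a sketch with made-up sample values (the ids and exclude below are not my real data), keeping only the first two exclude branches:

POST /_scripts/painless/_execute
{
  "script": {
    "source": "def s = params.ids; for (exc in params.exclude) { def temp = '|' + exc + '|'; if (s.contains(temp)) { s = s.replace(temp, '|'); continue; } temp = exc + '|'; if (s.contains(temp) && s.indexOf(exc) == 0) { s = s.replace(temp, ''); } } return s;",
    "params": {
      "ids": "0|10|20",
      "exclude": ["0"]
    }
  }
}

If replace only touched the leading "0|" this would return "10|20", but String.replace replaces every occurrence, so the "0|" inside "10|" would be stripped as well, leaving "120", the same kind of mangled value as segment_ids above.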