Ingest Pipelines

Hi!

I've updated the Elastic Stack and now I have an error that I don't know how to deal with.
Before the upgrade I was using ingest pipelines and everything was fine; now when I try to use them again I get this error:
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.context.template.max_compilations_rate] setting
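
(For reference, the limit named at the end of that message is a dynamic cluster setting. A minimal sketch of raising it via the cluster settings API, assuming Kibana Dev Tools syntax and a made-up example value of 150/5m; this treats the symptom, not the repeated recompilation itself:)

PUT _cluster/settings
{
  "persistent": {
    "script.context.template.max_compilations_rate": "150/5m"
  }
}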

I'm pretty stuck and I would be grateful for any kind of help.

Thank you!

From what to what?

7.9 to 7.10

And I chose to keep the old configuration for the yml files.

Is there anything in your Elasticsearch logs?
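
(On a deb install the Elasticsearch logs normally live under /var/log/elasticsearch/; the file name follows the cluster name, assumed here to be the default:)

tail -n 200 /var/log/elasticsearch/elasticsearch.log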

I have a big error where it complains about every ingest pipeline, and sometimes it doesn't like a rollover alias from Filebeat.

Posting it could prove helpful.

Ok. Just a second to edit it!

Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.context.template.max_compilations_rate] setting
  at org.elasticsearch.script.ScriptCache.checkCompilationLimit(ScriptCache.java:179) ~[elasticsearch-7.10.1.jar:7.10.1]
  at org.elasticsearch.script.ScriptCache.lambda$compile$0(ScriptCache.java:109) ~[elasticsearch-7.10.1.jar:7.10.1]
  at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:433) ~[elasticsearch-7.10.1.jar:7.10.1]
  at org.elasticsearch.script.ScriptCache.compile(ScriptCache.java:99) ~[elasticsearch-7.10.1.jar:7.10.1]
  ... 22 more
  Suppressed: org.elasticsearch.script.GeneralScriptException: Failed to compile inline script [{{zeek.ssl.client.subject.state}}] using lang [mustache]
    at org.elasticsearch.script.ScriptCache.compile(ScriptCache.java:121) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.script.ScriptService.compile(ScriptService.java:384) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.ingest.ValueSource.wrap(ValueSource.java:80) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.ingest.common.SetProcessor$Factory.create(SetProcessor.java:108) ~[?:?]
    at org.elasticsearch.ingest.common.SetProcessor$Factory.create(SetProcessor.java:87) ~[?:?]
    at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:430) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:399) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.ingest.ConfigurationUtils.readProcessorConfigs(ConfigurationUtils.java:337) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.ingest.Pipeline.create(Pipeline.java:74) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.ingest.IngestService.innerUpdatePipelines(IngestService.java:735) ~[elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.ingest.IngestService.applyClusterState(IngestService.java:710) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:510) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:501) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:471) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.cluster.service.ClusterApplierService.access$000(ClusterApplierService.java:68) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.10.1.jar:7.10.1]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.10.1.jar:7.10.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]

This is for only one pipeline.
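
(For context: the suppressed exception above points at a set processor whose value is a Mustache template. A hypothetical pipeline in the style of Filebeat's Zeek module shows where such a template lives; the pipeline name and target field below are made up. Every {{...}} value, and every if condition, is compiled as a script when the pipelines are loaded into cluster state, so 43 pipelines full of these can burn through 75 compilations per 5 minutes on a single reload:)

PUT _ingest/pipeline/hypothetical-zeek-ssl
{
  "processors" : [
    {
      "set" : {
        "if" : "ctx?.zeek?.ssl?.client?.subject?.state != null",
        "field" : "tls.client.x509.subject.state_or_province",
        "value" : "{{zeek.ssl.client.subject.state}}"
      }
    }
  ]
}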

And it's very strange, because after that I get this error:
[2020-12-18T00:30:42,675][ERROR][o.e.x.s.a.e.ReservedRealm] [ServerNod] failed to retrieve password hash for reserved user [kibana_system]
org.elasticsearch.action.UnavailableShardsException: at least one primary shard for the index [.security-7] is unavailable
  at org.elasticsearch.xpack.security.support.SecurityIndexManager.getUnavailableReason(SecurityIndexManager.java:181) ~[x-pack-security-7.10.1.jar:7.10.1]
  at org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore.getReservedUserInfo(NativeUsersStore.java:525) [x-pack-security-7.10.1.jar:7.10.1]
  at org.elasticsearch.xpack.security.authc.esnative.ReservedRealm.getUserInfo(ReservedRealm.java:225) [x-pack-security-7.10.1.jar:7.10.1]
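
(To dig into that one, the state of the .security-7 shards and the reason the primary is unassigned can be checked with, e.g.:)

GET _cat/shards/.security-7?v
GET _cluster/allocation/explain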

Given your other thread about Logstash, I think there are larger issues here.

What is the output from _cluster/stats?pretty&human?

Which part of the output?

"cluster_uuid" : "n9mU4lLWTU6a55nrUAW7QQ",
"timestamp" : 1608244991213,
"status" : "yellow",
"indices" : {
"count" : 24,
"shards" : {
"total" : 24,
"primaries" : 24,
"replication" : 0.0,
"index" : {
"shards" : {
"min" : 1,
"max" : 1,
"avg" : 1.0
},
"primaries" : {
"min" : 1,
"max" : 1,
"avg" : 1.0
},
"replication" : {
"min" : 0.0,
"max" : 0.0,
"avg" : 0.0
}
}
},
"docs" : {
"count" : 127629,
"deleted" : 764
},
"store" : {
"size" : "59.2mb",
"size_in_bytes" : 62151057,
"reserved" : "0b",
"reserved_in_bytes" : 0
},
"fielddata" : {
"memory_size" : "1.1kb",
"memory_size_in_bytes" : 1176,
"evictions" : 0
},
"query_cache" : {
"memory_size" : "6.1kb",
"memory_size_in_bytes" : 6328,
"total_count" : 499,
"hit_count" : 16,
"miss_count" : 483,
"cache_size" : 2,
"cache_count" : 2,
"evictions" : 0
},
"completion" : {
"size" : "0b",
"size_in_bytes" : 0
},
"segments" : {
"count" : 128,
"memory" : "1005.1kb",
"memory_in_bytes" : 1029224,
"terms_memory" : "614.8kb",
"terms_memory_in_bytes" : 629584,
"stored_fields_memory" : "61.6kb",
"stored_fields_memory_in_bytes" : 63120,
"term_vectors_memory" : "0b",
"term_vectors_memory_in_bytes" : 0,
"norms_memory" : "8.2kb",
"norms_memory_in_bytes" : 8448,
"points_memory" : "0b",
"points_memory_in_bytes" : 0,
"doc_values_memory" : "320.3kb",
"doc_values_memory_in_bytes" : 328072,
"index_writer_memory" : "30.7mb",
"index_writer_memory_in_bytes" : 32235668,
"version_map_memory" : "7.3kb",
"version_map_memory_in_bytes" : 7540,
"fixed_bit_set" : "20.4kb",
"fixed_bit_set_memory_in_bytes" : 20920,
"max_unsafe_auto_id_timestamp" : 1608244817836,
"file_sizes" : { }
},

"nodes" : {
"count" : {
"total" : 1,
"coordinating_only" : 0,
"data" : 1,
"data_cold" : 1,
"data_content" : 1,
"data_hot" : 1,
"data_warm" : 1,
"ingest" : 1,
"master" : 1,
"ml" : 1,
"remote_cluster_client" : 1,
"transform" : 1,
"voting_only" : 0
},
"versions" : [
"7.10.1"
],
"os" : {
"available_processors" : 5,
"allocated_processors" : 5,
"names" : [
{
"name" : "Linux",
"count" : 1
}
],
"pretty_names" : [
{
"pretty_name" : "Kali GNU/Linux Rolling",
"count" : 1
}
],
"mem" : {
"total" : "11.7gb",
"total_in_bytes" : 12669161472,
"free" : "619.3mb",
"free_in_bytes" : 649433088,
"used" : "11.1gb",
"used_in_bytes" : 12019728384,
"free_percent" : 5,
"used_percent" : 95
}
},
"process" : {
"cpu" : {
"percent" : 3
},
"open_file_descriptors" : {
"min" : 484,
"max" : 484,
"avg" : 484
}
},
"jvm" : {
"max_uptime" : "12.8m",
"max_uptime_in_millis" : 768940,
"versions" : [
{
"version" : "15.0.1",
"vm_name" : "OpenJDK 64-Bit Server VM",
"vm_version" : "15.0.1+9",
"vm_vendor" : "AdoptOpenJDK",
"bundled_jdk" : true,
"using_bundled_jdk" : true,
"count" : 1
}
],
"mem" : {
"heap_used" : "972.8mb",
"heap_used_in_bytes" : 1020097024,
"heap_max" : "5gb",
"heap_max_in_bytes" : 5368709120
},
"threads" : 75
},
"fs" : {
"total" : "145.7gb",
"total_in_bytes" : 156449120256,
"free" : "129.5gb",
"free_in_bytes" : 139125129216,
"available" : "122.1gb",
"available_in_bytes" : 131106566144
},
"plugins" : ,
"network_types" : {
"transport_types" : {
"security4" : 1
},
"http_types" : {
"security4" : 1
}
},
"discovery_types" : {
"single-node" : 1
},
"packaging_types" : [
{
"flavor" : "default",
"type" : "deb",
"count" : 1
}
],
"ingest" : {
"number_of_pipelines" : 43,
"processor_stats" : {
"conditional" : {
"count" : 1611,
"failed" : 0,
"current" : 0,
"time" : "2.7s",
"time_in_millis" : 2785
},

And, in case it helps: when I delete all the ingest pipelines and have Filebeat send directly to Elasticsearch, everything works just fine.
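
(If those pipelines were installed by Filebeat modules, as the zeek.* fields suggest, they can be recreated later with Filebeat's setup command; the module name here is only an example:)

filebeat setup --pipelines --modules zeek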
