Elasticsearch keeps crashing with java.lang.OutOfMemoryError

Elasticsearch is having an issue with the Java heap size.

Elasticsearch, Kibana, and Logstash all run on one server with 52 GB of RAM. We've tried changing the JVM heap options for Elasticsearch from 1 GB up to 40 GB, but Elasticsearch always shuts down all of a sudden.

There is no consistent uptime across the JVM heap configurations (1-40 GB) we've tried.

What else do we need to look at? Thank you in advance.

[2022-01-21T22:24:59,025][INFO ][o.e.n.Node ] [node-1] starting ...
[2022-01-21T22:24:59,040][INFO ][o.e.x.s.c.f.PersistentCache] [node-1] persistent cache index loaded
[2022-01-21T22:24:59,207][INFO ][o.e.t.TransportService ] [node-1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2022-01-21T22:24:59,660][INFO ][o.e.c.c.Coordinator ] [node-1] cluster UUID [aZA8EGmTSFKePIDvlJNdhA]
[2022-01-21T22:24:59,754][INFO ][o.e.c.s.MasterService ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{gdIi0ZQcS2yATknSmPpcFA}{JgJXtA6KSiKuR7LCmbgAwQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw} elect leader, BECOME_MASTER_TASK, FINISH_ELECTION], term: 118, version: 6944, delta: master node changed {previous , current [{node-1}{gdIi0ZQcS2yATknSmPpcFA}{JgJXtA6KSiKuR7LCmbgAwQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}
[2022-01-21T22:24:59,848][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous , current [{node-1}{gdIi0ZQcS2yATknSmPpcFA}{JgJXtA6KSiKuR7LCmbgAwQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}, term: 118, version: 6944, reason: Publication{term=118, version=6944}
[2022-01-21T22:24:59,966][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2022-01-21T22:25:00,106][INFO ][o.e.n.Node ] [node-1] started
[2022-01-21T22:25:00,331][INFO ][o.e.l.LicenseService ] [node-1] license [fd544bc6-f5a6-4797-9903-83b8d624be1a] mode [basic] - valid
[2022-01-21T22:25:00,331][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-1] Active license is now [BASIC]; Security is enabled
[2022-01-21T22:25:00,331][INFO ][o.e.g.GatewayService ] [node-1] recovered [31] indices into cluster_state
[2022-01-21T22:25:00,479][ERROR][o.e.x.s.a.e.NativeUsersStore] [node-1] security index is unavailable. short circuiting retrieval of user [logstash_internal]
[2022-01-21T22:25:04,408][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[datatype-interval][0]]]).
[2022-01-21T22:27:20,649][INFO ][o.e.t.LoggingTaskListener] [node-1] 640 finished with response BulkByScrollResponse[took=2.1s,timed_out=false,sliceId=null,updated=11,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=,search_failures=]
[2022-01-21T22:27:21,175][INFO ][o.e.t.LoggingTaskListener] [node-1] 643 finished with response BulkByScrollResponse[took=3.2s,timed_out=false,sliceId=null,updated=692,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=,search_failures=]
[2022-01-21T22:30:54,591][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][352] overhead, spent [385ms] collecting in the last [1s]
[2022-01-21T22:30:56,600][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][354] overhead, spent [372ms] collecting in the last [1s]
[2022-01-21T22:30:57,911][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][355] overhead, spent [366ms] collecting in the last [1.3s]
[2022-01-21T22:30:59,940][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][357] overhead, spent [369ms] collecting in the last [1s]
[2022-01-21T22:31:01,959][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][359] overhead, spent [371ms] collecting in the last [1s]
[2022-01-21T22:32:46,626][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][461] overhead, spent [325ms] collecting in the last [1s]
[2022-01-21T22:34:33,351][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][567] overhead, spent [319ms] collecting in the last [1s]
[2022-01-21T22:34:35,377][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][569] overhead, spent [348ms] collecting in the last [1s]
[2022-01-21T22:34:36,533][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][570] overhead, spent [341ms] collecting in the last [1.1s]
[2022-01-21T22:34:38,551][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][572] overhead, spent [346ms] collecting in the last [1s]
[2022-01-21T22:34:40,570][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][574] overhead, spent [339ms] collecting in the last [1s]
[2022-01-21T22:34:41,773][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][575] overhead, spent [344ms] collecting in the last [1.2s]
[2022-01-21T22:34:43,807][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][577] overhead, spent [388ms] collecting in the last [1s]
[2022-01-21T22:34:45,838][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][579] overhead, spent [397ms] collecting in the last [1s]
[2022-01-21T22:34:47,088][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][580] overhead, spent [400ms] collecting in the last [1.2s]
[2022-01-21T22:34:49,119][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][582] overhead, spent [401ms] collecting in the last [1s]
[2022-01-21T22:34:51,136][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][584] overhead, spent [405ms] collecting in the last [1s]
[2022-01-21T22:34:52,514][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][585] overhead, spent [420ms] collecting in the last [1.3s]
[2022-01-21T22:34:54,551][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][587] overhead, spent [399ms] collecting in the last [1s]
[2022-01-21T22:34:56,554][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][589] overhead, spent [394ms] collecting in the last [1s]
[2022-01-21T22:34:57,851][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][590] overhead, spent [388ms] collecting in the last [1.3s]
[2022-01-21T22:34:59,882][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][592] overhead, spent [388ms] collecting in the last [1s]
[2022-01-21T22:35:01,905][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][594] overhead, spent [383ms] collecting in the last [1s]
[2022-01-21T22:35:03,203][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][595] overhead, spent [383ms] collecting in the last [1.3s]
[2022-01-21T22:35:05,234][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][597] overhead, spent [384ms] collecting in the last [1s]
[2022-01-21T22:35:05,452][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] attempting to trigger G1GC due to high heap usage [40818966528]
[2022-01-21T22:35:05,815][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] GC did not bring memory usage down, before [40818966528], after [41203789824], allocations [54], duration [363]
[2022-01-21T22:35:14,034][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][old][600][1] duration [6.6s], collections [1]/[6.6s], total [6.6s]/[6.6s], memory [39.9gb]->[37.5gb]/[40gb], all_pools {[young] [0b]->[0b]/[0b]}{[old] [39.9gb]->[37.5gb]/[40gb]}{[survivor] [0b]->[0b]/[0b]}
[2022-01-21T22:35:14,034][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][600] overhead, spent [6.6s] collecting in the last [6.6s]
[2022-01-21T22:35:14,372][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] attempting to trigger G1GC due to high heap usage [40813844168]
[2022-01-21T22:35:14,434][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] GC did not bring memory usage down, before [40813844168], after [40870013648], allocations [1], duration [62]
[2022-01-21T22:35:21,683][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][old][602][2] duration [5.7s], collections [1]/[6.6s], total [5.7s]/[12.3s], memory [38.8gb]->[39.6gb]/[40gb], all_pools {[young] [767.9mb]->[0b]/[0b]}{[old] [37.8gb]->[39.6gb]/[40gb]}{[survivor] [256mb]->[0b]/[0b]}
[2022-01-21T22:35:21,683][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] attempting to trigger G1GC due to high heap usage [42547741848]
[2022-01-21T22:35:21,695][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][602] overhead, spent [5.9s] collecting in the last [6.6s]
[2022-01-21T22:35:30,119][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] GC did not bring memory usage down, before [42547741848], after [42585411840], allocations [1], duration [8436]
[2022-01-21T22:35:30,119][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] handling request [null][POST][/.kibana_task_manager/_update_by_query?ignore_unavailable=true&refresh=true&conflicts=proceed][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54221}] took [8435ms] which is above the warn threshold of [5000ms]
[2022-01-21T22:35:30,132][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] handling request [null][GET][/_xpack?accept_enterprise=true][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54218}] took [8435ms] which is above the warn threshold of [5000ms]
[2022-01-21T22:35:34,875][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][603] overhead, spent [13.1s] collecting in the last [13.1s]
[2022-01-21T22:35:38,491][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] attempting to trigger G1GC due to high heap usage [42598058016]
[2022-01-21T22:35:38,523][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][604] overhead, spent [3.6s] collecting in the last [3.6s]
[2022-01-21T22:35:56,031][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [node-1] GC did not bring memory usage down, before [42598058016], after [42628950944], allocations [1], duration [17540]
[2022-01-21T22:35:56,051][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] handling request [null][POST][/.kibana_task_manager/_update_by_query?ignore_unavailable=true&refresh=true&conflicts=proceed][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54221}] took [17539ms] which is above the warn threshold of [5000ms]
[2022-01-21T22:35:56,044][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] handling request [null][GET][/_xpack][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54190}] took [17539ms] which is above the warn threshold of [5000ms]
[2022-01-21T22:35:56,031][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] handling request [null][POST][/.reporting-*/_search][Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54208}] took [17539ms] which is above the warn threshold of [5000ms]
[2022-01-21T22:52:57,926][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [Elasticsearch[node-1][generic][T#26]], exiting
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:57,926][WARN ][i.n.c.n.NioEventLoop ] [node-1] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:58,099][WARN ][i.n.c.n.NioEventLoop ] [node-1] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:57,926][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [ticker-schedule-trigger-engine], exiting
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:58,078][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [Elasticsearch[node-1][refresh][T#1]], exiting
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:58,181][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [Elasticsearch[node-1][transport_worker][T#2]], exiting
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:58,094][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [Elasticsearch[ilm-history-store-flush-scheduler][T#1]], exiting
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:57,926][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [Elasticsearch[node-1][search][T#3]], exiting
java.lang.OutOfMemoryError: Java heap space
[2022-01-21T22:52:58,078][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] fatal error in thread [Connection evictor], exiting
java.lang.OutOfMemoryError: Java heap space

What is the output from the _cluster/stats?pretty&human API?

Do I run it in Dev Tools? But it says a request body is required.

You can do it via curl or Dev Tools.
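
For example, via curl (a sketch; the host, port, and elastic user are assumptions based on your logs showing security enabled, so adjust to your setup, and note it is a GET with no request body):

curl -u elastic "http://localhost:9200/_cluster/stats?pretty&human"

Or in the Kibana Dev Tools console:

GET _cluster/stats?pretty&human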

{
  "error" : {
    "root_cause" : [
      {
        "type" : "parse_exception",
        "reason" : "request body is required"
      }
    ],
    "type" : "parse_exception",
    "reason" : "request body is required"
  },
  "status" : 400
}

Here is the output.
I'm sorry, what do I need to fill in for the request body?

It'd be helpful if you posted the entire command you are running as well.


Here is the Dev Tools screenshot; what am I missing? Thank you @warkolm

The command to run is GET _cluster/stats. Also do not post screenshots or images of text. Instead copy and paste the output here and format it correctly.

You're doing it wrong, @Grace_A: you don't have to put your cluster name in the query. Simply run
GET _cluster/stats?pretty&human
in the Kibana Dev Tools console.

{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "my-cluster",
  "cluster_uuid" : "aZA8EGmTSFKePIDvlJNdhA",
  "timestamp" : 1643612937923,
  "status" : "yellow",
  "indices" : {
    "count" : 31,
    "shards" : {
      "total" : 31,
      "primaries" : 31,
      "replication" : 0.0,
      "index" : {
        "shards" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "primaries" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "replication" : {
          "min" : 0.0,
          "max" : 0.0,
          "avg" : 0.0
        }
      }
    },
    "docs" : {
      "count" : 136353495,
      "deleted" : 921995
    },
    "store" : {
      "size" : "12.1gb",
      "size_in_bytes" : 13080749418,
      "total_data_set_size" : "12.1gb",
      "total_data_set_size_in_bytes" : 13080749418,
      "reserved" : "0b",
      "reserved_in_bytes" : 0
    },
    "fielddata" : {
      "memory_size" : "594.5kb",
      "memory_size_in_bytes" : 608856,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "16.6mb",
      "memory_size_in_bytes" : 17447480,
      "total_count" : 1126973,
      "hit_count" : 92904,
      "miss_count" : 1034069,
      "cache_size" : 897,
      "cache_count" : 897,
      "evictions" : 0
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 205,
      "memory" : "1.1mb",
      "memory_in_bytes" : 1166846,
      "terms_memory" : "779.5kb",
      "terms_memory_in_bytes" : 798304,
      "stored_fields_memory" : "110.4kb",
      "stored_fields_memory_in_bytes" : 113096,
      "term_vectors_memory" : "0b",
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory" : "94.2kb",
      "norms_memory_in_bytes" : 96512,
      "points_memory" : "0b",
      "points_memory_in_bytes" : 0,
      "doc_values_memory" : "155.2kb",
      "doc_values_memory_in_bytes" : 158934,
      "index_writer_memory" : "0b",
      "index_writer_memory_in_bytes" : 0,
      "version_map_memory" : "0b",
      "version_map_memory_in_bytes" : 0,
      "fixed_bit_set" : "2.4kb",
      "fixed_bit_set_memory_in_bytes" : 2552,
      "max_unsafe_auto_id_timestamp" : 1642065113415,
      "file_sizes" : { }
    },
    "mappings" : {
      "field_types" : [
        {
          "name" : "boolean",
          "count" : 4,
          "index_count" : 4,
          "script_count" : 0
        },
        {
          "name" : "date",
          "count" : 60,
          "index_count" : 23,
          "script_count" : 0
        },
        {
          "name" : "float",
          "count" : 42,
          "index_count" : 14,
          "script_count" : 0
        },
        {
          "name" : "keyword",
          "count" : 352,
          "index_count" : 23,
          "script_count" : 0
        },
        {
          "name" : "long",
          "count" : 33,
          "index_count" : 17,
          "script_count" : 0
        },
        {
          "name" : "nested",
          "count" : 4,
          "index_count" : 4,
          "script_count" : 0
        },
        {
          "name" : "object",
          "count" : 36,
          "index_count" : 8,
          "script_count" : 0
        },
        {
          "name" : "text",
          "count" : 176,
          "index_count" : 23,
          "script_count" : 0
        }
      ],
      "runtime_field_types" : [ ]
    },
    "analysis" : {
      "char_filter_types" : [ ],
      "tokenizer_types" : [ ],
      "filter_types" : [ ],
      "analyzer_types" : [ ],
      "built_in_char_filters" : [ ],
      "built_in_tokenizers" : [ ],
      "built_in_filters" : [ ],
      "built_in_analyzers" : [ ]
    },
    "versions" : [
      {
        "version" : "7.13.2",
        "index_count" : 31,
        "primary_shard_count" : 31,
        "total_primary_size" : "12.1gb",
        "total_primary_bytes" : 13080749418
      }
    ]
  },
  "nodes" : {
    "count" : {
      "total" : 1,
      "coordinating_only" : 0,
      "data" : 1,
      "data_cold" : 1,
      "data_content" : 1,
      "data_frozen" : 1,
      "data_hot" : 1,
      "data_warm" : 1,
      "ingest" : 1,
      "master" : 1,
      "ml" : 1,
      "remote_cluster_client" : 1,
      "transform" : 1,
      "voting_only" : 0
    },
    "versions" : [
      "7.13.2"
    ],
    "os" : {
      "available_processors" : 4,
      "allocated_processors" : 4,
      "names" : [
        {
          "name" : "Windows Server 2019",
          "count" : 1
        }
      ],
      "pretty_names" : [
        {
          "pretty_name" : "Windows Server 2019",
          "count" : 1
        }
      ],
      "architectures" : [
        {
          "arch" : "amd64",
          "count" : 1
        }
      ],
      "mem" : {
        "total" : "51.9gb",
        "total_in_bytes" : 55833968640,
        "free" : "24.9gb",
        "free_in_bytes" : 26738257920,
        "used" : "27gb",
        "used_in_bytes" : 29095710720,
        "free_percent" : 48,
        "used_percent" : 52
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 4
      },
      "open_file_descriptors" : {
        "min" : -1,
        "max" : -1,
        "avg" : 0
      }
    },
    "jvm" : {
      "max_uptime" : "6h",
      "max_uptime_in_millis" : 21806940,
      "versions" : [
        {
          "version" : "16",
          "vm_name" : "OpenJDK 64-Bit Server VM",
          "vm_version" : "16+36",
          "vm_vendor" : "AdoptOpenJDK",
          "bundled_jdk" : true,
          "using_bundled_jdk" : true,
          "count" : 1
        }
      ],
      "mem" : {
        "heap_used" : "7.3gb",
        "heap_used_in_bytes" : 7865510600,
        "heap_max" : "15gb",
        "heap_max_in_bytes" : 16106127360
      },
      "threads" : 85
    },
    "fs" : {
      "total" : "299.4gb",
      "total_in_bytes" : 321543729152,
      "free" : "251.7gb",
      "free_in_bytes" : 270306131968,
      "available" : "251.7gb",
      "available_in_bytes" : 270306131968
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "security4" : 1
      },
      "http_types" : {
        "security4" : 1
      }
    },
    "discovery_types" : {
      "single-node" : 1
    },
    "packaging_types" : [
      {
        "flavor" : "default",
        "type" : "zip",
        "count" : 1
      }
    ],
    "ingest" : {
      "number_of_pipelines" : 1,
      "processor_stats" : {
        "gsub" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "script" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        }
      }
    }
  }
}

Thank you for the correction, @Christian_Dahlqvist and @Aniket_Pant;
here is the output.

In your jvm.options what do you have set?

Example:
-Xms16g
-Xmx16g

Pet peeve... please don't run servers with an odd memory configuration; keep to multiples of 8 after the first 8: 8, 16, 32, 64, 128.

For the jvm.options heap settings, I've tried several combinations,

from
-Xms1g
-Xmx1g

to
-Xms40g
-Xmx40g

But no configuration keeps Elasticsearch from crashing. Sometimes a 1 GB heap stays alive longer than a 20 GB one, and vice versa.

Thank you for the reminder about the RAM; we'll take note of that.

Why?

Don't go above ~30 GB of heap; past roughly 32 GB the JVM can no longer use compressed object pointers, so a larger heap can actually hold fewer objects.
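
On startup Elasticsearch logs a line like "heap size [...], compressed ordinary object pointers [true]"; you can also check it at runtime (a sketch, assuming the 7.x nodes info field name, with filter_path only there to trim the response):

GET _nodes/_all/jvm?filter_path=nodes.*.jvm.using_compressed_ordinary_object_pointers

If that reports false, the heap is past the compressed-oops threshold and should be lowered.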

It's not so much of an issue with VMs; it's a carry-over from the physical-hardware days, and one that has saved my bacon with oddball applications. I still follow it to this day and never have strange memory issues, and so do vendors when they sell servers. It's really hard to buy a server with 40 GB unless another 8 was added afterwards, which can have negative effects.

When you have bank interleaving (every modern-ish machine), it's better to have matching sticks, 8 GB paired with 8 GB, so the timings on the RAM modules are the same. Memory-intensive applications are very sensitive to that, and keeping even, matching amounts, even for a VM, makes scaling easier. For example, I'm building a new hypervisor cluster: if I need 1 TB of RAM, how will it be broken up per NUMA node? It turns out as 512 GB per node, which gives either dual- or quad-channel access. If you go to 1.5 TB you normally end up with 768 GB per node and triple-channel access, which only some CPUs can do.

That's a very tiny portion of a very large off topic subject.

@Grace_A
Set both the minimum and maximum to 16g and see if it stays online. Your current data set is pretty small, so maybe even drop back to 8 GB. Please keep in mind that the OS needs some RAM (3-4 GB) as well, and Logstash on the same machine can be a challenge depending on what it's pulling and sending to Elasticsearch. Your processor count is also a little low for what you're trying to do; 8 cores is the lowest I would go with in that setup. Kibana isn't all that heavy but can top out at around 2 GB.
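
On a 7.x zip install like yours, the heap is easiest to pin with a small override file instead of editing jvm.options directly; a minimal sketch (the filename is arbitrary, and Elasticsearch needs a restart to pick it up):

# config/jvm.options.d/heap.options
-Xms16g
-Xmx16g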

What antivirus/anti-malware setup are you running on the host machine? I know that can cause massive headaches if you have anti-ransomware services enabled, or in some cases anti-exploit protection if it's attaching to Java. Some will even hook into OpenJDK processes, and that can wreak havoc quickly. Sorry for the basics; it's a Windows server, which brings a different set of issues into play.
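
If it does turn out to be Windows Defender, the usual mitigation is excluding the Elasticsearch directories and the Java process; a PowerShell sketch (the paths are assumptions for your layout, and only do this if your security policy allows it):

# Hypothetical install and data paths; adjust to your actual layout
Add-MpPreference -ExclusionPath "C:\elasticsearch", "C:\elasticsearch\data"
Add-MpPreference -ExclusionProcess "java.exe"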

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.