Shield License Update Problem

Hi Everyone,

Several years ago, Shield was installed on one of our new customer's Elasticsearch (v5.5) servers by a former employee, and its license expired a long time ago.
They have 10 servers in a cluster, and they want us to export their old indexed data.

First, we added these two lines at the end of the elasticsearch.yml config to bypass Shield:
shield.enabled: false
xpack.security.enabled: false
(We added both settings to be sure Shield was bypassed.)
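
For what it's worth, a quick sanity check that the bypass took effect after a restart would be a plain cluster health call, assuming the default 9200 port and that no authentication remains in the way (cluster health is exactly the operation the expired license blocks, so it doubles as a test):
curl -XGET 'http://localhost:9200/_cluster/health?pretty'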

After adding the lines and restarting the services, 8 servers came back up, but on 2 servers the Elasticsearch service would not start again.
The log is below:
"""
[2020-01-23 12:10:32,899][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:35,421][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:37,946][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:40,464][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:42,982][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:45,501][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:48,033][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:50,560][ERROR][shield.action ] [node2] blocking [cluster:monitor/health] operation due to expired license. Cluster health, cluster stats and indices stats
operations are blocked on shield license expiration. All data operations (read and write) continue to work.
If you have a new license, please update it. Otherwise, please reach out to your support contact.
[2020-01-23 12:10:52,623][INFO ][node ] [node2] stopping ...
[2020-01-23 12:10:52,843][INFO ][node ] [node2] stopped
[2020-01-23 12:10:52,844][INFO ][node ] [node2] closing ...
[2020-01-23 12:10:52,853][INFO ][node ] [node2] closed
"""

I got a new license, but the license installation instructions say to PUT the license.json file to ip:9200 --> https://www.elastic.co/guide/en/x-pack/5.5/installing-license.html
But because the service won't start, I can't do that.
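
For reference, the install step from that guide boils down to something like the following once a node answers on port 9200 (the host here is a placeholder):
curl -XPUT 'http://localhost:9200/_xpack/license' -H 'Content-Type: application/json' -d @license.json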

Is there any way to update the license without the service being up (for example, copying the new license to /etc/elasticsearch/.../license.json manually) and then restarting the service?
Or what would you advise for this problem? It may also be caused by something else entirely (such as a cluster node problem).

Thank you all so much.
Best regards.

Still need help :pray:

Edit: the version was actually 2.3, not 5.5.
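
(If I'm reading the 2.x documentation right, the license endpoint on that version is /_license rather than /_xpack/license, so the update call would presumably look more like this, with the host again a placeholder:
curl -XPUT 'http://localhost:9200/_license?acknowledge=true' -H 'Content-Type: application/json' -d @license.json
)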

The logging at the end above indicates a clean, intentionally triggered shutdown of Elasticsearch. The other logging just indicates that some operations will not work as expected due to the expired license.

Is there more information in the log files that has been removed, or is this all?

Hi @spinscale

Thanks for the answer.
These are all of the logs; there is nothing beyond what I posted.

Yesterday we noticed that disk usage was at 100%. After freeing some space, the service ran but stopped again after a short time on one machine, with this log:
"""
ReceiveTimeoutTransportException[[][1.1.1.1:9300][internal:discovery/zen/unicast] request_id [3] timed out after [3751ms]] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)

[2020-02-14 15:40:24,802][WARN ][discovery.zen.ping.unicast] [node6] failed to send ping to [{#zen_unicast_5#}{1.1.1.1}{1.1.1.1:9300}] ReceiveTimeoutTransportException[[][1.1.1.1:9300][internal:discovery/zen/unicast] request_id [9] timed out after [3750ms]] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)

[2020-02-14 15:40:24,801][WARN ][discovery.zen.ping.unicast] [node6] failed to send ping to [{#zen_unicast_1#}{1.1.1.1}{1.1.1.1:9300}] ReceiveTimeoutTransportException[[][1.1.1.1:9300][internal:discovery/zen/unicast] request_id [8] timed out after [3750ms]] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)

[2020-02-14 15:40:24,800][WARN ][discovery.zen.ping.unicast] [node6] failed to send ping to [{#zen_unicast_4#}{1.1.1.1}{1.1.1.1:9300}] ReceiveTimeoutTransportException[[][1.1.1.1:9300][internal:discovery/zen/unicast] request_id [5] timed out after [3751ms]] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)

[2020-02-14 15:40:24,800][WARN ][discovery.zen.ping.unicast] [node6] failed to send ping to [{#zen_unicast_10#}{1.1.1.1}{1.1.1.1:9300}] ReceiveTimeoutTransportException[[][1.1.1.1:9300][internal:discovery/zen/unicast] request_id [4] timed out after [3751ms]] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)

[2020-02-14 15:40:24,798][WARN ][discovery.zen.ping.unicast] [node6] failed to send ping to [{#zen_unicast_6#}{1.1.1.1}{1.1.1.1:9300}] ReceiveTimeoutTransportException[[ ][1.1.1.1:9300][internal:discovery/zen/unicast] request_id [2] timed out after [3751ms]] at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
"""
""" query_result
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"}],"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"},"status":503}​`
"""
The other machine didn't even write any logs.
We decided to restore from backup. After that, I'll post the results or any new errors here. Looking forward to your answers.

Thank you so much, best regards.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.