Elasticsearch could not store triggered watch with id


(Kanthasamyraja Shanmugam) #1

Hi,

I am using ELK version 6.5.4, running with a trial license.

I am getting the ERROR message below.

could not store triggered watch with id [_d6Y1y0bQ4mypPMdh3YZzg_xpack_license_expiration_89c0aa03-0208-400b-a351-db9cf57c2d2b-2019-02-08T11:47:17.631Z]: [ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]]

Logs from Logstash are not being stored either.

Can anyone help?


(Christian Dahlqvist) #2

That typically means that you are running out of disk space and that Elasticsearch has put the indices into read-only mode because the flood stage watermark has been exceeded.
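To verify, you can check per-node disk usage and the effective watermark thresholds. A minimal sketch (in 6.x the flood stage watermark defaults to 95% disk usage):

    GET /_cat/allocation?v

    GET /_cluster/settings?include_defaults=true

In the second response, look for cluster.routing.allocation.disk.watermark.flood_stage under the defaults section.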


(Kanthasamyraja Shanmugam) #3

I have more than enough disk space; space is not an issue.
Thanks


(Christian Dahlqvist) #4

Sorry, I missed the first part of the message, which indicates that your trial license may have expired.


(Kanthasamyraja Shanmugam) #5

I enabled it recently; one month has not passed yet.


(Christian Dahlqvist) #6

Check what the get license API shows.


(Kanthasamyraja Shanmugam) #7

Thanks for your quick reply.

GET /_xpack/license

{
  "license" : {
    "status" : "active",
    "uid" : "dbe10339-7784-452e-980e-d3990d39087b",
    "type" : "trial",
    "issue_date" : "2019-01-30T09:24:55.548Z",
    "issue_date_in_millis" : 1548840295548,
    "expiry_date" : "2019-03-01T09:24:55.548Z",
    "expiry_date_in_millis" : 1551432295548,
    "max_nodes" : 1000,
    "issued_to" : "elasticsearch",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}


(Christian Dahlqvist) #8

Then I am not sure what is wrong...


(Kanthasamyraja Shanmugam) #9

@Christian_Dahlqvist: Thanks for checking this.
May I know who else can help with this?


(Christian Dahlqvist) #10

I would recommend verifying that the data path Elasticsearch uses has enough space. If you are on Linux, post the output of df -k here so we can see.
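You can also cross-check what Elasticsearch itself sees for its data path, in case df is run against a different mount. A quick sketch using the nodes stats API:

    GET /_nodes/stats/fs?human

The total and available figures reported per data path should line up with the df output.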


(Kanthasamyraja Shanmugam) #11

/etc/elasticsearch/elasticsearch.yml

    path.data: /datavg/elasticsearch/data

    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/mapper/vg.datavg-lv.datavg   99G  6.8G   87G   8% /datavg


(Kanthasamyraja Shanmugam) #12

When restarting, I get this exception:
..................
..................
[2019-02-08T12:01:50,651][INFO ][o.e.n.Node ] [Az0RWz6] started
[2019-02-08T12:01:53,285][INFO ][o.e.m.j.JvmGcMonitorService] [Az0RWz6] [gc][8] overhead, spent [297ms] collecting in the last [1s]
[2019-02-08T12:01:53,376][WARN ][r.suppressed ] [Az0RWz6] path: /.kibana/doc/config%3A6.5.4, params: {index=.kibana, id=config:6.5.4, type=doc}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:166) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.checkGlobalBlock(TransportSingleShardAction.java:122) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.<init>(TransportSingleShardAction.java:156) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.<init>(TransportSingleShardAction.java:140) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:97) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:61) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:143) ~[elasticsearch-6.5.4.jar:6.5.4]
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.5.4.jar:6.5.4]


(Kanthasamyraja Shanmugam) #13

Some more information related to this:

    {
      "id" : "add_to_alerts_index",
      "type" : "index",
      "status" : "failure",
      "error" : {
        "root_cause" : [
          {
            "type" : "cluster_block_exception",
            "reason" : "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
          }
        ],
        "type" : "cluster_block_exception",
        "reason" : "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
      }
    }

(David Turner) #14

This means that someone or something has applied the index.blocks.read_only_allow_delete block to this index. The block is applied automatically when a node runs out of disk space, and that is usually where it comes from; if so, there will be messages in the logs indicating that it happened. The block is not removed when disk usage drops again, so you have to remove it manually as described in the reference docs. It can also be applied manually via the index settings API, so if you don't think you ran out of disk space, that is where it came from.
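As a sketch, you could first check which indices carry the block and then clear it (this targets _all; narrow it to the affected indices if you prefer):

    GET /_all/_settings/index.blocks.read_only_allow_delete

    PUT /_all/_settings
    {
      "index.blocks.read_only_allow_delete": null
    }

Make sure the disk space problem is actually resolved first, otherwise Elasticsearch will simply re-apply the block.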


(Kanthasamyraja Shanmugam) #15

Elasticsearch keeps printing the line below in the log:

could not store triggered watch with id [_d6Y1y0bQ4mypPMdh3YZzg_logstash_version_mismatch_1e150970-84b7-4d14-b2c3-9887a73e038b-2019-02-11T12:12:47.491Z]: [ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]]

But now the indices are being created and the app logs are available in Kibana.

How do I clear this error message?


(David Turner) #16

See links above.