Receiving the report despite error 504

Hi. I scheduled a daily report to my email from my local Kibana (localhost) instance using X-Pack. When I force run this watch, I receive a 504 error. Despite that, seconds later, I do receive the report I wanted. The report itself, as a PDF, is around 900 KB. It seems like something is buggy. Here I will paste both the watch config and the error message I get:

PUT _xpack/watcher/watch/kpi_report
{
  "trigger" : {
    "schedule": {
      "interval": "30d"
    }
  },
  "actions" : {
    "send_email" : { 
      "email": {
        "to": "...",
        "subject": "Error Monitoring Report",
        "body": "Please, see attached daily KPIs as requested",
        "attachments" : {
          "daily_kpi.pdf" : {
            "reporting" : {
              "url": "...", 
              "retries":6, 
              "interval":"10s", 
              "auth":{ 
                "basic":{
                  "username":"elastic",
                  "password":"..."
                }
              }
            }
          }
        }
      }
    }
  }
}

And this is the output of the force run:

{
  "statusCode": 504,
  "error": "Gateway Time-out",
  "message": "Client request timeout"
}

Where do you start this force run? Do you mean you trigger it via the execute watch API?

Hi @spinscale, indeed. I do it via the execute watch API:

POST _xpack/watcher/watch/kpi_report/_execute

So the timeout is coming from Kibana, which does not wait until the execution of the watch is finished. The execution of the watch itself is successful, though.
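
If you just want the interactive run to stop returning a 504, one option is to raise Kibana's request timeout. This is only a sketch and assumes the 504 comes from Kibana's default 30 second elasticsearch.requestTimeout, which report generation plus the email action can easily exceed:

# kibana.yml - timeout towards Elasticsearch in milliseconds (default 30000)
elasticsearch.requestTimeout: 120000

Kibana needs a restart for the change to take effect. The watch execution itself is unaffected either way.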

If you do

POST _xpack/watcher/watch/kpi_report/_execute
{
  "record_execution" : true
}

you will get a watch history entry that you can take a look at.
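
Another quick check, without searching the history index, is the get watch API, whose response includes the status of each action (just a sketch; the exact field names can vary a bit between versions):

GET _xpack/watcher/watch/kpi_report

Look at the status.actions.send_email part of the response; it records the last execution of the email action and whether it succeeded.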

Where do I get this history? I ran the command:

POST _xpack/watcher/watch/kpi_report/_execute
{
  "record_execution": true
}

and I get the same error:

{
  "statusCode": 504,
  "error": "Gateway Time-out",
  "message": "Client request timeout"
}

There is nothing regarding it in either the Elasticsearch or the Kibana logs. Elasticsearch log:

[2018-03-22T21:19:29,024][INFO ][o.e.c.r.a.DiskThresholdMonitor] [lt-mJ9i] low disk watermark [85%] exceeded on [lt-mJ9iqQ6uZrpJRqn8Mrg][lt-mJ9i][C:\Users\emirzayev\elasticsearch-6.2.2\data\nodes\0] free: 27.3gb[11.5%], replicas will not be assigned to this node
[2018-03-22T21:19:59,034][INFO ][o.e.c.r.a.DiskThresholdMonitor] [lt-mJ9i] low disk watermark [85%] exceeded on [lt-mJ9iqQ6uZrpJRqn8Mrg][lt-mJ9i][C:\Users\emirzayev\elasticsearch-6.2.2\data\nodes\0] free: 27.3gb[11.5%], replicas will not be assigned to this node
[2018-03-22T21:20:29,163][INFO ][o.e.c.r.a.DiskThresholdMonitor] [lt-mJ9i] low disk watermark [85%] exceeded on [lt-mJ9iqQ6uZrpJRqn8Mrg][lt-mJ9i][C:\Users\emirzayev\elasticsearch-6.2.2\data\nodes\0] free: 27.3gb[11.5%], replicas will not be assigned to this node
[2018-03-22T21:20:59,209][INFO ][o.e.c.r.a.DiskThresholdMonitor] [lt-mJ9i] low disk watermark [85%] exceeded on [lt-mJ9iqQ6uZrpJRqn8Mrg][lt-mJ9i][C:\Users\emirzayev\elasticsearch-6.2.2\data\nodes\0] free: 27.3gb[11.5%], replicas will not be assigned to this node
[2018-03-22T21:21:07,006][INFO ][o.e.c.m.MetaDataMappingService] [lt-mJ9i] [.watcher-history-7-2018.03.22/5TA1aMLSTn6HWcDwJCWntA] update_mapping [doc]

Kibana log:

  
  log   [20:17:27.880] [info][kibana-monitoring][monitoring-ui] Stopping all Kibana monitoring collectors
  log   [20:17:27.908] [info][license][xpack] Imported license information from Elasticsearch for the [monitoring] cluster: mode: trial | status: active | expiry date: 2018-04-14T17:34:37+02:00
  log   [20:17:35.253] [info][kibana-monitoring][monitoring-ui] Starting all Kibana monitoring collectors
  log   [20:17:35.266] [info][status][plugin:elasticsearch@6.2.2] Status changed from red to green - Ready

There is a time-based index whose name starts with .watcher-history and contains the current date; that index holds the recorded execution.

You can get the latest entry via

GET .watcher-history-*/_search
{
  "size": 1,
  "query": {
    "term": {
      "watch_id": {
        "value": "YOUR_WATCH_ID_HERE"
      }
    }
  },
  "sort": [
    {
      "trigger_event.triggered_time": {
        "order": "desc"
      }
    }
  ]
}
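
If you only want to know whether the email action (and its reporting attachment) went through, you can narrow that down with source filtering. It is the same query, just assuming the state and result.actions fields that the 6.x history documents use:

GET .watcher-history-*/_search
{
  "size": 1,
  "_source": ["state", "result.actions"],
  "query": {
    "term": {
      "watch_id": {
        "value": "kpi_report"
      }
    }
  },
  "sort": [
    {
      "trigger_event.triggered_time": {
        "order": "desc"
      }
    }
  ]
}

Each entry in result.actions has a status field (success or failure) along with error details if something went wrong while fetching the attachment or sending the email.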
