Unable to send reporting via email

I don't receive the email with the report from the Kibana console.

When executing the command:

POST /_xpack/watcher/watch/reporting_demo/_execute
{
  "record_execution": true
}

I am getting the response:

  "actions": [
    {
      "id": "email_admin",
      "type": "email",
      "status": "failure",
      "reason": "Watch[reporting_demo] attachment[demo_report.pdf] Error executing HTTP request host[10.235.81.127], port[5601], method[POST], path[/api/reporting/generate/dashboard/First-Half-Error-dashboard-28-12], exception[Connection timed out]"
    }
  ]

In the Elasticsearch access log I see the following:

[2016-12-28T15:22:35,797] [transport] [access_granted]  origin_type=[rest], origin_address=[127.0.0.1], principal=[kibana], action=[indices:data/read/search[phase/query]], indices=[.reporting-2016.12.25,.reporting-2016.12.18], request=[ShardSearchTransportRequest]

I did a test and confirmed that my server is able to send emails.

Any ideas?

Regards,
Sharon.

That error message makes it sound like the connection to Kibana actually timed out. I'm guessing 10.235.81.127 is your Kibana server.

Can you successfully generate a PDF manually of that "First-Half-Error-dashboard-28-12" dashboard?

If not, how many visualizations do you have on the dashboard you are trying to generate a report of?

If you are able to manually create the PDF, it's probably Watcher that is timing out. You may have to increase the read_timeout setting in the email action. And if you aren't using that setting right now, that's definitely the problem. That setting controls the maximum amount of time Watcher will wait for the PDF to be generated, and the default value is almost certainly not going to be long enough.
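For reference, here is roughly where that setting goes in an http attachment. This is only a sketch; the host, dashboard ID, and timeout value below are placeholders, not taken from your watch:

```json
"attachments": {
  "dashboard.pdf": {
    "http": {
      "content_type": "application/pdf",
      "request": {
        "method": "POST",
        "headers": { "kbn-xsrf": "reporting" },
        "read_timeout": "300s",
        "url": "http://<kibana-host>:5601/api/reporting/generate/dashboard/<dashboard-id>"
      }
    }
  }
}
```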

Hi Joe,

Yes, I am able to generate a PDF manually.

Yes, I have the read_timeout setting in my email action. The timeout is set to 700s, but I still get the same problem.

This is the full response I am getting:

{
  "_id": "vp_reporting_demo_0-2016-12-28T22:21:54.534Z",
  "watch_record": {
    "watch_id": "vp_reporting_demo",
    "state": "executed",
    "trigger_event": {
      "type": "manual",
      "triggered_time": "2016-12-28T22:21:54.534Z",
      "manual": {
        "schedule": {
          "scheduled_time": "2016-12-28T22:21:54.534Z"
        }
      }
    },
    "input": {
      "none": {}
    },
    "condition": {
      "always": {}
    },
    "result": {
      "execution_time": "2016-12-28T22:21:54.534Z",
      "execution_duration": 3005,
      "input": {
        "type": "none",
        "status": "success",
        "payload": {}
      },
      "condition": {
        "type": "always",
        "status": "success",
        "met": true
      },
      "actions": [
        {
          "id": "email_admin",
          "type": "email",
          "status": "failure",
          "reason": "Watch[vp_reporting_demo] attachment[demo_report.pdf] Error executing HTTP request host[10.235.81.127], port[5601], method[POST], path[/api/reporting/generate/dashboard/First-Half-Error-dashboard-28-12], exception[Connection timed out]"
        }
      ]
    },
    "messages": []
  }
}

And this is from the Kibana log:

{"type":"response","@timestamp":"2016-12-28T22:21:54Z","tags":[],"pid":2811,"method":"post","statusCode":200,"req":{"url":"/api/console/proxy?uri=%2F_xpack%2Fwatcher%2Fwatch%2Fvp_reporting_demo%2F_execute","method":"post","headers":{"host":"10.235.81.127:5601","connection":"keep-alive","content-length":"31","accept":"text/plain, */*; q=0.01","origin":"http://10.235.81.127:5601","kbn-version":"5.0.1","user-agent":"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36","content-type":"application/json","referer":"http://10.235.81.127:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.8"},"remoteAddress":"10.233.161.82","userAgent":"10.233.161.82","referer":"http://10.235.81.127:5601/app/kibana"},"res":{"statusCode":200,"responseTime":3028,"contentLength":9},"message":"POST /api/console/proxy?uri=%2F_xpack%2Fwatcher%2Fwatch%2Fvp_reporting_demo%2F_execute 200 3028ms - 9.0B"}

Thanks
Sharon.

Hey,

Two things here. First, the watch record you pasted had an execution_duration of 3005 milliseconds. This means it ran for only a little more than 3 seconds, which tells me the timeout was not honored at all. Do all of your runs have a runtime of around 3 seconds? I just want to be sure that there is not another component in the network that kills connections after a rather short period of idle time.

Second, I suppose you are using an HTTP attachment here (you haven't pasted the full watch, so this is just a guess; feel free to correct me)? If you are on version 5.1 of the stack, you could check out the new reporting attachment type, which tries to prevent long-running HTTP requests by polling Kibana repeatedly instead.
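For what it's worth, in 5.1 the attachment would look roughly like this. This is a sketch from memory, so please check the 5.1 documentation for the exact field names before relying on it; host, dashboard ID, and credentials are placeholders:

```json
"attachments": {
  "dashboard.pdf": {
    "reporting": {
      "url": "http://<kibana-host>:5601/api/reporting/generate/dashboard/<dashboard-id>",
      "retries": 40,
      "interval": "15s",
      "auth": {
        "basic": {
          "username": "<username>",
          "password": "<password>"
        }
      }
    }
  }
}
```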

Hope this helps!

--Alex

Hi,

I am running version 5.0.1

I don't know where this 3005 is coming from, but I didn't set it anywhere (at least not that I'm aware of). Where should I set it? Should I?

In my console I have two commands.

The first one that I run is:

PUT _xpack/watcher/watch/demo_report
{
  "trigger" : {
    "schedule": {
      "interval": "1h"
    }
  },
  "actions" : {
   "email_admin" : { 
    "email" : {
      "to": "'sharonsa@amdocs.com'",
      "cc": "'yoelb@amdocs.com'",
      "subject": "Error Code Monitoring Report",
      "body" : "{{ctx.payload.hits.total}} error codes logs found",
      "attachments" : {
       "dashboard.pdf" : {
          "http" : {
            "content_type" : "application/pdf",
            "request" : {
              "method": "POST", 
              "headers": {
                "kbn-xsrf": "reporting"
              },
              "read_timeout": "300s", 
              "auth":{ 
                "basic":{
                  "username":"elastic",
                  "password":"elastic"
                }
              },
              "url": "http://10.235.81.127:5601/api/reporting/generate/dashboard/First-Half-Error-dashboard-28-12?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:%272016-01-01T08:28:55.359Z%27,mode:quick,to:%272016-06-01T07:30:14.917Z%27))&_a=(filters:!(),options:(darkTheme:!f),panels:!((col:1,id:weblogic-logs-error_code-28-12,panelIndex:3,row:1,size_x:3,size_y:2,type:visualization),(col:4,id:First-Half-Severity-Visualization,panelIndex:2,row:1,size_x:3,size_y:2,type:visualization)),query:(query_string:(analyze_wildcard:!t,query:%27*%27)),title:First-Half-Error-dashboard-28-12,uiState:())&sync" 
            }
          }
        }
      }
    }
  }
 }
}

And the response is:

{
  "_id": "demo_report",
  "_version": 1,
  "created": true
}

Then I am running the following command (should I?):

POST /_xpack/watcher/watch/demo_report/_execute
{
  "record_execution": true
}

And I am getting the response:

 {
  "_id": "demo_report_0-2016-12-29T14:14:01.284Z",
  "watch_record": {
    "watch_id": "demo_report",
    "state": "executed",
    "trigger_event": {
      "type": "manual",
      "triggered_time": "2016-12-29T14:14:01.284Z",
      "manual": {
        "schedule": {
          "scheduled_time": "2016-12-29T14:14:01.284Z"
        }
      }
    },
    "input": {
      "none": {}
    },
    "condition": {
      "always": {}
    },
    "result": {
      "execution_time": "2016-12-29T14:14:01.284Z",
      "execution_duration": 3006,
      "input": {
        "type": "none",
        "status": "success",
        "payload": {}
      },
      "condition": {
        "type": "always",
        "status": "success",
        "met": true
      },
      "actions": [
        {
          "id": "email_admin",
          "type": "email",
          "status": "failure",
          "reason": "Watch[demo_report] attachment[dashboard.pdf] Error executing HTTP request host[10.235.81.127], port[5601], method[POST], path[/api/reporting/generate/dashboard/First-Half-Error-dashboard-28-12], exception[Connection timed out]"
        }
      ]
    },
    "messages": []
  }
} 

What next? I am not getting any email.

Thanks
Sharon.

Everything looks right with the watch to me. The reporting URL has the &sync query parameter and you've set the read_timeout, which should be all it takes.

It's consistently being killed after 3 seconds (plus or minus a few milliseconds), which makes me think that something else is closing the connection. Do you have a proxy or something that you are accessing Kibana through? Or maybe some firewall rules that would be closing connections early? That's the only thing I can think you might be running into here.

Hey,

The execution_duration is nothing you need to configure; it is information about how long the execution of the watch took. And it means that after three seconds the watch execution ran into a timeout. The main question is why. Two things to try next:

When you log into the machine where Watcher is running and run time curl ... with the URL of the dashboard and the correct credentials, is the PDF downloaded correctly on that system? Can you paste the output of that command?

Second, can you also set the connection_timeout to a higher value on top of the read_timeout in your watch? However, the default settings are 10 seconds, so that does not match up with the three seconds after which your watch is cancelled. Joe might be right that there is another component like a firewall (or a badly configured operating system) taking action; we will find out when you run the above curl call.
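To be explicit about where those settings live: both timeouts sit side by side in the attachment's request object. The values and URL here are just examples, not your actual watch:

```json
"request": {
  "method": "POST",
  "connection_timeout": "60s",
  "read_timeout": "700s",
  "url": "http://<kibana-host>:5601/api/reporting/generate/dashboard/<dashboard-id>"
}
```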

--Alex

Hey,

also again: if you want to prevent timeout issues for long-running requests (3 seconds is not really long, but dashboard generation can easily take a few minutes), you should look at Elasticsearch 5.1 and the new reporting integration, which does not have those problems.

--Alex

Regarding something that kills the connection: it is nothing I am aware of, and I may need the infra team for the machine to try to understand what happens. If you can think of anything I can do to try to find it, that would be great.

I will try that time curl ... and see what the answer is.
Will update soon.

How do I do that?

OK, I did it.

mpswrk1@eaasrt!MPS:/usr/share/kibana> :1,size_x:3,size_y:2,type:visualization)),query:(query_string:(analyze_wildcard:!t,query:%27*%27)),title:First-Half-Error-dashboard-28-12,uiState:())&sync" <
3.002

Just use curl with the same URL you are using with the Watcher action. Since you're using Security as well, you'll need to provide auth details. And Kibana requires the kbn-xsrf header.

Based on the URL you posted earlier, this should work:

curl -k -XPOST -u elastic:elastic -H "kbn-xsrf: reporting" 'http://10.235.81.127:5601/api/reporting/generate/dashboard/First-Half-Error-dashboard-28-12?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:%272016-01-01T08:28:55.359Z%27,mode:quick,to:%272016-06-01T07:30:14.917Z%27))&_a=(filters:!(),options:(darkTheme:!f),panels:!((col:1,id:weblogic-logs-error_code-28-12,panelIndex:3,row:1,size_x:3,size_y:2,type:visualization),(col:4,id:First-Half-Severity-Visualization,panelIndex:2,row:1,size_x:3,size_y:2,type:visualization)),query:(query_string:(analyze_wildcard:!t,query:%27*%27)),title:First-Half-Error-dashboard-28-12,uiState:())&sync' -o test.pdf

Note that the URL is wrapped in single quotes so the shell doesn't treat the & characters as job-control operators, and that I added -o test.pdf, which saves the output to a file called test.pdf. If the request aborts after 3 seconds, that file may not be a PDF at all, but may instead contain an error message.
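A quick way to check is to look at the file's first bytes, since a real PDF starts with the %PDF magic. The sketch below fabricates a failed download so it is self-contained; on your machine you would drop the first line and run it against the real test.pdf:

```shell
#!/bin/sh
# Simulate a failed report download: instead of a PDF, the server
# returned a JSON error body. (Remove this line when checking a
# real test.pdf produced by the curl call above.)
printf '{"error":"request timed out"}' > test.pdf

# A genuine PDF always begins with the "%PDF" magic bytes.
magic=$(head -c 4 test.pdf)
if [ "$magic" = "%PDF" ]; then
  echo "test.pdf looks like a real PDF"
else
  echo "test.pdf is not a PDF; first bytes: $magic"
fi
```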

Also, I don't think we've asked yet, but do you see any failure messages when you check the history of report generation in the Reporting section under Management in Kibana?

Note that you can keep that screen open, even when you make the curl requests, and you'll see the new report generation process get created and start running there. If it completes, but your curl request still fails after 3 seconds, then the problem is definitely caused by something terminating the request early.

Hey,

if you did that, can you please paste the full output, either here or in a gist? What you pasted is not showing any output, and I do not know where it comes from. Also, don't forget to prefix the curl call with the time command so you can measure the runtime.

--Alex

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.