Kibana 5.3 xpack reporting fails


(Kribor) #1

Hi, I'm trying to get Reporting to work in Kibana, without luck.

"There was an error generating your report for the "MyVis" visualization: Max attempts reached (3)"

Using this thread I managed to get logging working and rule out some issues: "Kibana 5.2 + X-Pack error when generating reports"

Basically, when I curl http://localhost:5601 I get a 302 redirect to the login page. I would assume the report request would use the user I'm logged into Kibana with, but maybe not? Can I configure a user/password for how Kibana should connect to itself? There's no sign of such a setting here: https://www.elastic.co/guide/en/x-pack/current/reporting-settings.html#general-reporting-settings

In the logs I see:

{"type":"log","@timestamp":"2017-04-27T18:07:46Z","tags":["reporting","debug","phantomConsole"],"pid":340,"message":"complete in 229.61ms"}
{"type":"log","@timestamp":"2017-04-27T18:08:12Z","tags":["reporting","worker","debug"],"pid":340,"message":"printable_pdf: Job timeout exceeded: (j20q1nze009g68bbd4axsuhq)"}
{"type":"log","@timestamp":"2017-04-27T18:08:14Z","tags":["reporting","debug"],"pid":340,"message":"Screenshot failed Kibana took too long to load - Timeout exceeded (30000)"}
{"type":"log","@timestamp":"2017-04-27T18:08:15Z","tags":["reporting","worker","debug"],"pid":340,"message":"printable_pdf: Worker error: (undefined)"}
{"type":"log","@timestamp":"2017-04-27T18:22:40Z","tags":["reporting","debug"],"pid":1895,"message":"Phantom installed at /var/lib/kibana/phantomjs-2.1.1-linux-x86_64/bin/phantomjs"}
{"type":"log","@timestamp":"2017-04-27T18:22:40Z","tags":["reporting","debug"],"pid":1895,"message":"Screenshot concurrency: 8"}
{"type":"log","@timestamp":"2017-04-27T18:22:40Z","tags":["reporting","worker","debug"],"pid":1895,"message":"printable_pdf: Registering printable_pdf worker"}
{"type":"log","@timestamp":"2017-04-27T18:23:34Z","tags":["reporting","debug"],"pid":1895,"message":"1 saved object(s) to process"}
{"type":"log","@timestamp":"2017-04-27T18:24:07Z","tags":["reporting","worker","debug"],"pid":1895,"message":"Converting 1 saved object(s) to printable_pdf"}
{"type":"log","@timestamp":"2017-04-27T18:24:07Z","tags":["reporting","debug"],"pid":1895,"message":"fetching screenshot of http://localhost:5601/app/kibana#/visualize/edit/f0452120-1dc3-11e7-a9b3-f9ecc71b9f18?_g=()&_a=(filters:!(),linked:!f,query:(query_string:(analyze_wildcard:!t,query:'application:abcapp%20AND%20abcapp.call.request.method.raw:GET')),uiState:(),vis:(aggs:!((enabled:!t,id:'1',params:(),schema:metric,type:count),(enabled:!t,id:'2',params:(customInterval:'2h',extended_bounds:(),field:'@timestamp',interval:auto,min_doc_count:1),schema:segment,type:date_histogram),(enabled:!t,id:'3',params:(field:jfox.call.request.handler.signature.raw,order:desc,orderBy:'1',size:20),schema:group,type:terms)),listeners:(),params:(addLegend:!t,addTimeMarker:!f,addTooltip:!t,defaultYExtents:!f,drawLinesBetweenPoints:!t,interpolate:linear,legendPosition:right,radiusRatio:9,scale:linear,setYExtents:!f,showCircles:!t,times:!()),title:'JFox%20-%20Request%20handler%20(read)',type:line))"}
{"type":"log","@timestamp":"2017-04-27T18:24:10Z","tags":["reporting","debug","phantomConsole"],"pid":1895,"message":"INFO: 2017-04-27T18:24:10Z\n Adding connection to http://localhost:5601/es_admin\n\n"}
{"type":"log","@timestamp":"2017-04-27T18:24:10Z","tags":["reporting","debug","phantomConsole"],"pid":1895,"message":"INFO: 2017-04-27T18:24:10Z\n Adding connection to http://localhost:5601/elasticsearch\n\n"}
{"type":"log","@timestamp":"2017-04-27T18:24:10Z","tags":["reporting","debug","phantomConsole"],"pid":1895,"message":"setting active api to [es_5_0]"}
{"type":"log","@timestamp":"2017-04-27T18:24:11Z","tags":["reporting","debug","phantomConsole"],"pid":1895,"message":"Index Patterns: index pattern set to logstash-syslog-*"}
{"type":"log","@timestamp":"2017-04-27T18:24:11Z","tags":["reporting","debug","phantomConsole"],"pid":1895,"message":"complete in 193.52ms"}
{"type":"log","@timestamp":"2017-04-27T18:24:37Z","tags":["reporting","worker","debug"],"pid":1895,"message":"printable_pdf: Job timeout exceeded: (j20qmrmf01gn68bbd42k1zt8)"}
{"type":"log","@timestamp":"2017-04-27T18:24:38Z","tags":["reporting","debug"],"pid":1895,"message":"Screenshot failed Kibana took too long to load - Timeout exceeded (30000)"}

I've set xpack.reporting.kibanaServer.hostname: localhost

Curiously, I don't see any access-log entries in /var/log/kibana/kibana.stdout indicating that Kibana ever attempted to fetch this URL.


(Joe Fleming) #2

I would assume that the user would follow what I'm logged into kibana with, but maybe not?

It does; the credentials are added as part of the task when you click the button in Kibana to generate a report. When you use curl, or Watcher, you have to specify your credentials manually. That's why you get the 302 when you simply curl the URL.
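To illustrate the difference, something like this shows the anonymous 302 versus an authenticated request (the port is the Kibana default, and user:password is a placeholder; 000 just means Kibana wasn't reachable at all):

```shell
KIBANA_URL="http://localhost:5601"   # assumption: default Kibana host/port

# Without credentials, Kibana answers with a 302 redirect to the login page:
anon=$(curl -s -o /dev/null -w '%{http_code}' "$KIBANA_URL" || true)
echo "anonymous: $anon"

# With Basic auth supplied explicitly (replace user:password with a real account):
auth=$(curl -s -o /dev/null -w '%{http_code}' -u user:password "$KIBANA_URL" || true)
echo "authenticated: $auth"
```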

I've set xpack.reporting.kibanaServer.hostname: localhost

You shouldn't have to configure Reporting specially unless you're connecting to Kibana through a proxy, in which case you need to configure Reporting to use the same proxy. That doesn't sound like your case, though; correct me if I'm wrong.
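For completeness, the proxy case is covered by the kibanaServer settings in kibana.yml; a sketch with placeholder values (only needed when Reporting has to reach Kibana through something other than the server's own address):

```yaml
# kibana.yml — point Reporting at the proxy instead of the local server
xpack.reporting.kibanaServer.protocol: http            # placeholder values
xpack.reporting.kibanaServer.hostname: proxy.example.com
xpack.reporting.kibanaServer.port: 8080
```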

You can see in the logs that the Phantom process IS able to load Kibana and does attempt to take a screenshot; in fact, it looks like it succeeds. The important lines in the logs here are:

{"type":"log","@timestamp":"2017-04-27T18:23:34Z","tags":["reporting","debug"],"pid":1895,"message":"1 saved object(s) to process"}
{"type":"log","@timestamp":"2017-04-27T18:24:07Z","tags":["reporting","worker","debug"],"pid":1895,"message":"Converting 1 saved object(s) to printable_pdf"}

{"type":"log","@timestamp":"2017-04-27T18:24:07Z","tags":["reporting","debug"],"pid":1895,"message":"fetching screenshot of <VISUALIZATION URL>"}

{"type":"log","@timestamp":"2017-04-27T18:24:11Z","tags":["reporting","debug","phantomConsole"],"pid":1895,"message":"complete in 193.52ms"}

{"type":"log","@timestamp":"2017-04-27T18:24:37Z","tags":["reporting","worker","debug"],"pid":1895,"message":"printable_pdf: Job timeout exceeded: (j20qmrmf01gn68bbd42k1zt8)"}

So Reporting is trying to capture 1 saved object here, a single visualization, which it starts doing at 18:24:07 and finishes at 18:24:11, about 4 seconds later. Then, 30 seconds after starting, at 18:24:37, the queue can't find the successful document and times out (based on the default timeout setting of 30000ms). This is quite odd: based on the logs, everything seems to work correctly, and well within the configured timeout. Perhaps the communication back to the queue in Elasticsearch isn't succeeding? It should log an error in that case, though...
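In case the timeout itself does turn out to need adjusting, the relevant knobs live in kibana.yml. Setting names are per the 5.x reporting docs (verify against the 5.3 reference), and the values below are only examples:

```yaml
# kibana.yml — Reporting timeouts, in milliseconds (names per 5.x docs; verify for 5.3)
xpack.reporting.queue.timeout: 60000      # how long a job may run before the queue fails it
xpack.reporting.capture.timeout: 60000    # how long to wait for Kibana to load for a screenshot
```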

Can you take a look in Kibana's "data" directory (by default, located in the root of the Kibana code and named data) and see if there are any png files in there? If so, then Reporting is at least working, and we can try digging into the communication with the queue.


(Kribor) #3

I am indeed not using a proxy. I'm not able to find the data directory, though. The install is the 5.3 RPM from the repo on CentOS 6; init.d/kibana references /usr/share/kibana, but there's no data directory there. Any ideas on how to find it? I also don't see any generated pngs in any subfolder.


(A User) #4

Hello,
I am having more or less the same issue: Reporting fails with "Max attempts reached (3)". On the very first try, though, it said it took too long / timed out.

But my log file looks different. I don't find anything with the string "pdf" in it.

Kibana and ES are running on a Linux server. xpack.reporting.kibanaServer.hostname is configured with the machine's hostname, etc.

I am also not able to find the mentioned data directory to check whether anything was generated. A general problem with Kibana is that when something goes wrong, it's hard to tell what happened, why, and where to find more information about it. It was a pain to get logging working; it should be on by default, not off.
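For anyone else struggling with it, what I ended up with in kibana.yml was something like this (the log path is just my setup; adjust as needed):

```yaml
# kibana.yml — turn on verbose logging so the reporting debug lines show up
logging.verbose: true
logging.dest: /var/log/kibana/kibana.stdout   # assumption: path from this thread's setup
```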

Anyhow, if you need more information, or my log file in addition to the OP's, just ask.

Edit: it's running on Ubuntu, btw.


(Joe Fleming) #5

I'm not actually familiar with the RPM packages, but I'm told the data dir is actually /var/lib/kibana. Sorry for misguiding you. Can you take a look in there and see if you have any png files?
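Something like this should list any leftover screenshots, if they exist (the path is an assumption based on the RPM layout discussed above):

```shell
# Look for leftover report screenshots in the Kibana data directory.
DATA_DIR="/var/lib/kibana"   # assumed data dir for RPM/deb installs
if [ -d "$DATA_DIR" ]; then
  find "$DATA_DIR" -name '*.png' 2>/dev/null
else
  echo "no data dir at $DATA_DIR"
fi
```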

Also, in both your case and @A_User's: are you running a single instance of Kibana against a single instance of Elasticsearch? If not, what do your setups look like?


(A User) #6

I have no pngs. I'm using one Kibana instance with one Elasticsearch instance.


(Kribor) #7

There's indeed a /var/lib/kibana folder. It doesn't, however, contain a data directory or any pngs; it just contains a uuid file and a phantomjs folder.

The Elasticsearch cluster has three nodes, with two Kibana nodes co-hosted on two of the Elasticsearch nodes, each pointing to its local Elasticsearch instance. Kibana is accessed through one node at a time (an active/passive setup), so there's no load balancing between the Kibana nodes.

How about the log line containing:
printable_pdf: Worker error: (undefined)

Doesn't that indicate something is breaking early?


(A User) #8

I have no pngs there either, and I'm using a single instance of Kibana with a single instance of Elasticsearch.


(Joe Fleming) #9

There's indeed a /var/lib/kibana folder. it doesn't however contain a data directory nor any png's.

That path is the data dir; sorry I didn't make that clear. It contains what I'd expect to see there. The reason you don't see any png files is that Reporting cleans them up when it's done. I was curious whether the process was somehow failing to do that, but apparently not.

How about the log line containing:
printable_pdf: Worker error: (undefined)
Doesn't that indicate something is breaking early?

Probably. It definitely doesn't look good, and while the undefined in the message isn't super helpful, it gives us a place to start digging. Thanks for finding that.

Kibana is accessed through one kibana node at a time (active/passive setup) so there's no load balancing between kibana nodes.

This isn't the cause of the issue, but just a note: if both Kibana nodes point to the same Elasticsearch cluster, they share the same Reporting index, and thus need to have the same xpack.reporting.encryptionKey value; otherwise you'll start seeing other problems.
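Concretely, that means the same line in every node's kibana.yml (the value below is a placeholder; pick your own long random string):

```yaml
# kibana.yml — must be identical on every Kibana node sharing the Reporting index
xpack.reporting.encryptionKey: "something-long-and-random"   # placeholder
```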

@A_User, your setup is a lot simpler, and it's troubling that it doesn't work. Do you happen to see the same printable_pdf: Worker error: (undefined) message in your logs?


(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.