Kibana 5.2 + X-Pack error when generating reports

@Brandon_Kobel enabling logging in kibana.yml throws an error, so I removed it.

Here's the error

When you enable logging, you'll have to ensure that the kibana user has permissions to write to the file that you specify.
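For example, something along these lines should do it (a rough sketch, assuming Kibana was installed from the deb package and runs as the kibana service user, and that logging.dest points at the /home/neurotech/kibana.log path used later in this thread):

# Create the log file and hand ownership to the kibana service user
sudo touch /home/neurotech/kibana.log
sudo chown kibana:kibana /home/neurotech/kibana.log
# Let the kibana user write to it, keep it readable for the group
sudo chmod 640 /home/neurotech/kibana.log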

@Brandon_Kobel this is the log message that I get

{"type":"log","@timestamp":"2017-01-25T06:06:08Z","tags":["reporting","debug"],"pid":12178,"message":"fetching screenshot of http://192.168.1.237:5601/app/kibana#/visualize/edit/Mentioned-Users?_g=(time:(from:'2017-01-05T06:47:17.953Z',mode:absolute,to:'2017-01-19T06:47:17.953Z'))&_a=(filters:!(),linked:!f,query:(query_string:(analyze_wildcard:!t,query:'peter_kenneth%20%2B%20sonko')),uiState:(),vis:(aggs:!((enabled:!t,id:'1',params:(),schema:size_node,type:count),(enabled:!t,id:'2',params:(field:userMentionEntities.name.keyword,order:desc,orderBy:'1',size:10),schema:first,type:terms),(enabled:!t,id:'3',params:(field:text.keyword,order:desc,orderBy:'1',size:5),schema:second,type:terms)),listeners:(),params:(canvasBackgroundColor:%23FFFFFF,firstNodeColor:%23FD7BC4,maxCutMetricSizeEdge:5000,maxCutMetricSizeNode:5000,maxEdgeSize:20,maxNodeSize:80,minCutMetricSizeNode:0,minEdgeSize:0.1,minNodeSize:8,secondNodeColor:%2300d1ff,shapeFirstNode:dot,shapeSecondNode:box,showColorLegend:!t,showLabels:!t,showPopup:!f),title:'Mentioned%20Users%20in%20Tweets',type:network))"}

and when I open that URL myself, it takes me to the visualization I was trying to generate a report from.

@Brandon_Kobel upon further investigation of the log file, I found this line as well

which is essentially the error that I am getting.

Would you mind trying to access that URL from the server that is running Kibana/Reporting to make sure it's accessible from there as well?
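A quick way to check that from the Kibana host itself (a sketch; swap in the exact URL from your log entry) is to confirm you get an HTTP 200 back:

# From the Kibana/Reporting server itself: expect 200 to be printed
curl -s -o /dev/null -w '%{http_code}\n' 'http://192.168.1.237:5601/app/kibana'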

@Brandon_Kobel I checked, and it's also accessible from that end... not sure what else to check.

Which OS are you running Kibana on? If you could also include your full kibana.yml and briefly describe your network/server topology, I'll try to reproduce the issue you're experiencing.

Thanks for being so patient and responsive, @kioie; this is an interesting one.

@Brandon_Kobel I'm running on Ubuntu 16.04... my Kibana and ES are both running on localhost on the 192.168.1.237 server, with a direct connection to the gateway at 192.168.1.1

Here's my kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
#xpack.security.enabled: false
xpack.reporting.kibanaServer.port: 5601
xpack.reporting.enabled: true
xpack.reporting.kibanaServer.hostname: 192.168.1.237

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.1.237"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
server.name: "192.168.1.237"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.1.237:9200"

# When this setting’s value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn’t already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "neurotech"
#elasticsearch.password: "neuro987!"

# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.cert: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.cert: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.ca: /path/to/your/CA.pem

# To disregard the validity of SSL certificates, change this setting’s value to false.
#elasticsearch.ssl.verify: true

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
logging.dest: /home/neurotech/kibana.log

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: true

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

Hope this helps! Elasticsearch also follows the same pattern, with all URLs declared explicitly. As mentioned, my Elasticsearch has X-Pack installed.
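For reference, declaring the addresses explicitly in elasticsearch.yml looks roughly like this (a sketch with assumed values, not the actual file):

# elasticsearch.yml (sketch): bind ES to the same explicit address Kibana points at
network.host: 192.168.1.237
http.port: 9200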

Your kibana.yml looks right to me. I was able to set up a virtual machine locally on Ubuntu 16.04.1 with the same configuration you described, and everything works correctly for me.

Is your dev environment on a cloud provider, or are you running it on your own physical/virtualized environment?

@Brandon_Kobel hmm... interesting. My environment is running locally in a virtualized environment; I'm using Oracle VM Manager to provision instances. Specs: 70 GB storage, 4 GB RAM, and 4 cores.

@kioie did you install Elasticsearch/Kibana using apt-get/deb or manually?

@Brandon_Kobel I used apt-get

And just to confirm, you're still seeing the "Screenshot failed Kibana took too long to load - Timeout exceeded (30000)" message, not some other error message?

@Brandon_Kobel correct!

Are you only running Kibana and ES on the VM? What does the CPU/memory utilization look like on the VM, and are you actively indexing a lot of documents in ES while trying to run the report?
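If it's useful, a quick way to sample that on the VM while a report is running (a sketch using stock Ubuntu tools, nothing Kibana-specific):

# Snapshot CPU and memory usage on the VM during report generation
top -b -n 1 | head -n 20
free -m
# Watch overall load for a few samples, 5 seconds apart
vmstat 5 3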

That timeout you're hitting is currently hard-coded, and it's possible that the VM is under a significant amount of load, causing reporting to take an atypically long time to generate the PDF and hit the timeout. Recently, we had an internal user running ES/Kibana/Metricbeat/Filebeat all on a single VM while indexing a large number of documents, and that caused the timeout to be hit.

@kioie have you had a chance to look at the CPU/Memory utilization to see if the VM is under a high amount of load during report generation?

@Brandon_Kobel yes I did... the load is normal; it does spike during report processing, but it never goes beyond 40% usage...

@Brandon_Kobel I ran an apt-get upgrade and it offered an option to update to version 5.2.0... I'm thinking of upgrading; do you think the error is fixed in that version?

@kioie I'm not aware of anything that changed in 5.2.0 that would affect this problem, but it does include other Reporting fixes, so it wouldn't hurt. You'll want to make sure you update both ES and Kibana to 5.2.0.
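On an apt-based install, the upgrade would look roughly like this (a sketch; it assumes the official Elastic apt repository is already configured):

# Pull the 5.2.0 packages and upgrade both services
sudo apt-get update
sudo apt-get install --only-upgrade elasticsearch kibana
# Restart so the new versions are picked up (Ubuntu 16.04 uses systemd)
sudo systemctl restart elasticsearch
sudo systemctl restart kibana
# If X-Pack was installed as a plugin, it may need to be reinstalled to match the new version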