Kibana export issues (xpack.reporting.encryptionKey, timeout and data too large)

I have a three-node cluster running in my homelab. It is a virtualised environment (ESXi) with Ubuntu Server (LTS) as the OS; two nodes are data nodes and the third is a voting-only node. Both data nodes run a Kibana instance. The underlying hardware is SSDs, with separate disks for the OS and for Elasticsearch storage. Exports usually fail with two prominent errors:

  1. Error: Failed to decrypt report job data. Please ensure that xpack.reporting.encryptionKey is set and re-generate this report. Error: Unsupported state or unable to authenticate data
    A. I only get this error when both the Kibana instances are running. If I manually stop (service kibana stop) Kibana on one of the VMs, I am able to export logs successfully.
    B. I found configuration inconsistencies between the Kibana instances. I've fixed this and restarted the services.
    C. Neither Kibana instance has xpack.reporting.encryptionKey set. I'm not sure what this should be. Should it be the same as xpack.encryptedSavedObjects.encryptionKey?

  2. I've set xpack.reporting.csv.maxSizeBytes: 100mb, yet most of the reports time out or fail with a "data too large" error - what is the best way to overcome this?
    A. I'm a student and I have been collecting data for over a year. The index size is over 1 TB. I need to export some of this data for my reporting. Without an export, an entire year of collection may go to waste.

Thank you!

This setting is explained in the documentation: Reporting settings in Kibana | Kibana Guide [7.11] | Elastic

An example similar to yours is explained further in the documentation: Reporting configuration | Kibana Guide [7.11] | Elastic

You can use any alphanumeric string at least 32 characters long as the encryption key. When you run multiple Kibana instances, the encryption key must be the same on every instance, because they split the reporting queue's workload across whichever instance is available. Without an encryption key in the settings, a random key is generated at startup, and it will not be the same on every instance.
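If you want a quick way to produce such a key, here is a minimal sketch assuming `openssl` is available on your Ubuntu hosts (it usually is); `openssl rand -hex 16` emits exactly 32 hexadecimal characters:

```shell
# Generate a random 32-character hex string to use as the shared key.
key=$(openssl rand -hex 16)

# This exact same line must go into kibana.yml on BOTH Kibana instances.
echo "xpack.reporting.encryptionKey: \"$key\""
```

Restart both Kibana services after adding the setting so they pick up the shared key.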

It sounds like you have a backup use case, not a reporting use case. You should look into Elasticsearch snapshots so that you can back up your data: Snapshot and restore | Elasticsearch Reference [7.11] | Elastic
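As a sketch of what that looks like, registering a shared-filesystem repository and taking a snapshot can be done from the Kibana Dev Tools console roughly as below. The repository name and path here are placeholders, and the location must be listed under `path.repo` in elasticsearch.yml on every node:

```
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups"
  }
}

PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true
```

Snapshots are incremental, so after the first one, subsequent snapshots of the same indices are much cheaper than a 1 TB CSV export.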

BTW the "data too large" error is most likely coming from Elasticsearch or a proxy somewhere in your stack - that is not an error type that Kibana sends.

Thank you very much for this. Let me check and update.

BTW the "data too large" error is most likely coming from Elasticsearch or a proxy somewhere in your stack - that is not an error type that Kibana sends.

I don't have a proxy between my client and Elasticsearch. I access it either via Kibana (URL) or via Postman (API calls).

I've uploaded a screenshot; please let me know if it helps.

If you don't have a proxy, then the "Request Entity Too Large" error is coming from Elasticsearch.

Check the http.max_content_length setting on the ES nodes: HTTP | Elasticsearch Reference [7.11] | Elastic
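For reference, that setting lives in elasticsearch.yml on each node. The default is 100mb; the value below is only an illustration (very large values increase heap pressure on the nodes):

```yaml
# elasticsearch.yml - maximum size of an HTTP request body.
# Default is 100mb; a node restart is required for the change to take effect.
http.max_content_length: 200mb
```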

Thank you very much. This worked perfectly.