I have a three-node cluster running in my homelab. It is a virtualised environment (ESXi) running Ubuntu Server (LTS), with two data nodes and a third voting-only node. Both data nodes also run a Kibana instance. The underlying hardware is SSD-based, with separate disks for the OS and for the Elasticsearch log storage. Exporting logs usually fails with two prominent errors:
Error: Failed to decrypt report job data. Please ensure that xpack.reporting.encryptionKey is set and re-generate this report. Error: Unsupported state or unable to authenticate data
A. I only get this error when both Kibana instances are running. If I manually stop Kibana (service kibana stop) on one of the VMs, I am able to export logs successfully.
B. I found configuration inconsistencies between the Kibana instances. I have fixed them and restarted the services.
C. Neither Kibana instance has xpack.reporting.encryptionKey set. I am not sure what this should be. Should it be the same as xpack.encryptedSavedObjects.encryptionKey?
I've set xpack.reporting.csv.maxSizeBytes: 100mb, yet most of the reports either time out or fail with a "data too large" error. What is the best way to overcome this? (A rough client-side export sketch is included after this list.)
A. I'm a student and I have been collecting data for over a year. The index size is over 1 TB. I need to export some of this data for my reporting; without the export, an entire year of collection may go to waste.
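For reference, a minimal sketch of pulling a large slice of data with the official Elasticsearch Python client (8.x) instead of Kibana CSV reporting. The host, credentials, index pattern, time range and field names below are placeholders, not details from this thread:

    # Sketch: stream a large export to CSV using the scroll-based scan() helper,
    # so results are paged in batches instead of one huge response.
    # Host, credentials, index pattern and field names are placeholders.
    import csv
    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import scan

    es = Elasticsearch(
        "https://localhost:9200",
        basic_auth=("elastic", "changeme"),  # placeholder credentials
        verify_certs=False,                  # homelab self-signed cert; adjust as needed
    )

    query = {
        "query": {"range": {"@timestamp": {"gte": "2023-01-01", "lt": "2023-02-01"}}},
        "_source": ["@timestamp", "host.name", "message"],
    }

    with open("export.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["@timestamp", "host.name", "message"])
        for hit in scan(es, index="logs-*", query=query):
            src = hit["_source"]
            writer.writerow([
                src.get("@timestamp", ""),
                src.get("host", {}).get("name", ""),
                src.get("message", ""),
            ])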
You can use any alphanumeric string of at least 32 characters as the encryption key. When you have multiple Kibana instances, the encryption key must be the same on every instance, because jobs in the reporting queue can be picked up by whichever instance is available. Without an encryption key in the settings, a random encryption key is generated at startup, and it will not be the same on every instance.
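For example, in kibana.yml on both instances (the values below are placeholders; use your own strings of at least 32 characters):

    # kibana.yml: set the SAME value on every Kibana instance.
    # Placeholder value; any alphanumeric string of 32+ characters works.
    xpack.reporting.encryptionKey: "reporting_key_that_is_at_least_32_chars_long"

    # Separate setting; it does not have to match the reporting key,
    # but it should also be identical across instances.
    xpack.encryptedSavedObjects.encryptionKey: "saved_objects_key_at_least_32_chars_long"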
BTW the "data too large" error is most likely coming from Elasticsearch or a proxy somewhere in your stack - that is not an error type that Kibana sends.
Thank you very much for this. Let me check and update.
BTW the "data too large" error is most likely coming from Elasticsearch or a proxy somewhere in your stack - that is not an error type that Kibana sends.
I don't have a proxy between the client and Elasticsearch. I access it either via Kibana (URL) or via Postman (API calls).
I've uploaded a screenshot; please let me know if it helps.