Elastic stack license problem

Hello,

I installed the Elastic stack with a 'basic' license on Docker 4 days ago. I initialized the built-in users' passwords, created a user for accessing the Kibana web UI, and created another user that Logstash uses to write into Elasticsearch the data it receives from Filebeat. Filebeat is installed on another server and sends log data to Logstash. This configuration worked well for the first few days, and the Filebeat dashboards showed up in Kibana.
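For context, the Logstash user was created through the Elasticsearch security API, roughly along these lines (the role name, user name, and exact privileges below are illustrative placeholders rather than my exact configuration):

```
# Hypothetical role allowing Logstash to write Filebeat data
curl -u elastic:<password> -X PUT "http://localhost:9200/_security/role/logstash_writer" \
  -H 'Content-Type: application/json' -d'
{
  "cluster": ["monitor", "manage_index_templates", "manage_ilm"],
  "indices": [
    {
      "names": ["filebeat-*"],
      "privileges": ["create_index", "create", "write", "manage"]
    }
  ]
}'

# Hypothetical user assigned to that role
curl -u elastic:<password> -X PUT "http://localhost:9200/_security/user/logstash_internal" \
  -H 'Content-Type: application/json' -d'
{
  "password": "<logstash-user-password>",
  "roles": ["logstash_writer"]
}'
```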

However, when I now log in to Kibana, I cannot access any dashboard. The Kibana logs display the following error messages:

{"type":"error","@timestamp":"2019-08-26T09:34:15Z","tags":["warning","process"],"pid":1,"level":"error","error":{"message":"TypeError: Cannot read property 'formatForBulkUpload' of undefined\n    at rawData.reduce (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:201:57)\n    at Array.reduce (<anonymous>)\n    at BulkUploader.reduce [as toBulkUploadFormat] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:200:33)\n    at BulkUploader.toBulkUploadFormat [as _fetchAndUpload] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:136:26)","name":"UnhandledPromiseRejectionWarning","stack":"UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'formatForBulkUpload' of undefined\n    at rawData.reduce (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:201:57)\n    at Array.reduce (<anonymous>)\n    at BulkUploader.reduce [as toBulkUploadFormat] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:200:33)\n    at BulkUploader.toBulkUploadFormat [as _fetchAndUpload] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:136:26)\n    at emitWarning (internal/process/promises.js:81:15)\n    at emitPromiseRejectionWarnings (internal/process/promises.js:120:9)\n    at process._tickCallback (internal/process/next_tick.js:69:34)"},"message":"TypeError: Cannot read property 'formatForBulkUpload' of undefined\n    at rawData.reduce (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:201:57)\n    at Array.reduce (<anonymous>)\n    at BulkUploader.reduce [as toBulkUploadFormat] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:200:33)\n    at BulkUploader.toBulkUploadFormat [as _fetchAndUpload] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:136:26)"}
{"type":"error","@timestamp":"2019-08-26T09:34:15Z","tags":["warning","process"],"pid":1,"level":"error","error":{"message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1234)","name":"UnhandledPromiseRejectionWarning","stack":"TypeError: Cannot read property 'formatForBulkUpload' of undefined\n    at rawData.reduce (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:201:57)\n    at Array.reduce (<anonymous>)\n    at BulkUploader.reduce [as toBulkUploadFormat] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:200:33)\n    at BulkUploader.toBulkUploadFormat [as _fetchAndUpload] (/usr/share/kibana/x-pack/legacy/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:136:26)"},"message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1234)"}

When looking at the Logstash logs, I see the following error:

[2019-08-26T09:07:10,513][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '429' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
[2019-08-26T09:10:10,513][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '429' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
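For reference, this is the X-Pack info endpoint that Logstash polls for the license; it can also be queried directly with curl to see whether Elasticsearch answers at all (host and credentials are placeholders for my setup):

```
# Query the same endpoint Logstash is contacting, from the Docker host
curl -s -u elastic:<password> "http://localhost:9200/_xpack?pretty"
```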

And when I take a look at the Elasticsearch logs, I see the following messages:

{"type": "server", "timestamp": "2019-08-26T09:37:15,942+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][405538] overhead, spent [705ms] collecting in the last [1s]"  }
{"type": "server", "timestamp": "2019-08-26T09:37:19,489+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][405541] overhead, spent [1.4s] collecting in the last [1.5s]"  }

Do you know how this problem can be solved?

Thank you in advance

Please don't post images of text as they are hard to read, may not display correctly for everyone, and are not searchable.

Instead, paste the text and format it with the </> icon or pairs of triple backticks (```), and check the preview window to make sure it's properly formatted before posting. This makes it more likely that your question will receive a useful answer.

It would be great if you could update your post accordingly.

This has nothing to do with the license itself; it looks like your Elasticsearch node spends all of its time on garbage collection and doesn't respond to requests (the request to get the license just happens to be the first one to come from Logstash). What are the specs of the system where you run Elasticsearch on Docker?
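In the meantime, one quick way to check how much heap the node actually has and how busy the JVM is would be the _cat/nodes API, for example (host and credentials adjusted to your setup):

```
curl -s -u elastic:<password> \
  "http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max,ram.percent,cpu"
```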

Thank you, I updated my post accordingly.

I run the ELK stack on Docker inside a Debian 10 virtual machine with 8 GB of RAM and 2 vCPUs. Elasticsearch is configured as a single-node cluster.

Can you also share a larger part of the Elasticsearch log? Does this constant GC keep happening? Does your node end up stopping with OOM errors?
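Assuming you are using the docker-elk compose file, something like this should dump a larger chunk of the Elasticsearch log (the service name may differ in your setup):

```
docker-compose logs --tail=500 elasticsearch
```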

If the node stays up long enough, can you also share the output of the cluster stats API?
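The cluster stats can be fetched with a simple curl against the HTTP port, for example (with whatever credentials you configured):

```
curl -s -u elastic:<password> "http://localhost:9200/_cluster/stats?pretty"
```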

Yes, this constant GC keeps happening, as you can see in the logs below. However, the node does not end up stopping with OOM errors.

{"type": "server", "timestamp": "2019-08-26T09:58:50,711+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406722] overhead, spent [1.3s] collecting in the last [1.4s]"  }
{"type": "server", "timestamp": "2019-08-26T09:58:51,712+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406723] overhead, spent [546ms] collecting in the last [1s]"  }
{"type": "server", "timestamp": "2019-08-26T09:58:57,098+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406728] overhead, spent [1.3s] collecting in the last [1.3s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:01,575+0000", "level": "INFO", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406732] overhead, spent [551ms] collecting in the last [1.4s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:11,586+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406742] overhead, spent [568ms] collecting in the last [1s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:12,587+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406743] overhead, spent [722ms] collecting in the last [1s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:13,587+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406744] overhead, spent [582ms] collecting in the last [1s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:14,811+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406745] overhead, spent [678ms] collecting in the last [1.2s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:18,294+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406748] overhead, spent [806ms] collecting in the last [1.4s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:19,710+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406749] overhead, spent [1.1s] collecting in the last [1.4s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:21,711+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406751] overhead, spent [549ms] collecting in the last [1s]"  }
{"type": "server", "timestamp": "2019-08-26T09:59:24,712+0000", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "mg-elasticsearch", "node.name": "ff7e8a22a1da", "cluster.uuid": "zWPcD53DQ2qZGDOzJnl0Qg", "node.id": "iGVBXdheTXC9wLsS6YXVZg",  "message": "[gc][406754] overhead, spent [650ms] collecting in the last [1s]"  }

Regarding the cluster stats, I get the following output:

{"_nodes":{"total":1,"successful":1,"failed":0},"cluster_name":"mg-elasticsearch","cluster_uuid":"zWPcD53DQ2qZGDOzJnl0Qg","timestamp":1566813785289,"status":"yellow","indices":{"count":25,"shards":{"total":25,"primaries":25,"replication":0.0,"index":{"shards":{"min":1,"max":1,"avg":1.0},"primaries":{"min":1,"max":1,"avg":1.0},"replication":{"min":0.0,"max":0.0,"avg":0.0}}},"docs":{"count":8973766,"deleted":597952},"store":{"size_in_bytes":3149496889},"fielddata":{"memory_size_in_bytes":1248,"evictions":0},"query_cache":{"memory_size_in_bytes":96760,"total_count":136,"hit_count":25,"miss_count":111,"cache_size":9,"cache_count":15,"evictions":6},"completion":{"size_in_bytes":0},"segments":{"count":178,"memory_in_bytes":6668806,"terms_memory_in_bytes":3338580,"stored_fields_memory_in_bytes":2258408,"term_vectors_memory_in_bytes":0,"norms_memory_in_bytes":3328,"points_memory_in_bytes":526770,"doc_values_memory_in_bytes":541720,"index_writer_memory_in_bytes":30829828,"version_map_memory_in_bytes":5359351,"fixed_bit_set_memory_in_bytes":314792,"max_unsafe_auto_id_timestamp":1566552106322,"file_sizes":{}}},"nodes":{"count":{"total":1,"coordinating_only":0,"data":1,"ingest":1,"master":1,"voting_only":0},"versions":["7.3.0"],"os":{"available_processors":2,"allocated_processors":2,"names":[{"name":"Linux","count":1}],"pretty_names":[{"pretty_name":"CentOS Linux 7 (Core)","count":1}],"mem":{"total_in_bytes":8366530560,"free_in_bytes":362360832,"used_in_bytes":8004169728,"free_percent":4,"used_percent":96}},"process":{"cpu":{"percent":46},"open_file_descriptors":{"min":421,"max":421,"avg":421}},"jvm":{"max_uptime_in_millis":412133086,"versions":[{"version":"12.0.1","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"12.0.1+12","vm_vendor":"Oracle Corporation","bundled_jdk":true,"using_bundled_jdk":true,"count":1}],"mem":{"heap_used_in_bytes":230466136,"heap_max_in_bytes":259522560},"threads":64},"fs":{"total_in_bytes":42006183936,"free_in_bytes":33604177920,"available_in_bytes":31439970304},"plugins":[],"network_types":{"transport_types":{"security4":1},"http_types":{"security4":1}},"discovery_types":{"single-node":1},"packaging_types":[{"flavor":"default","type":"docker","count":1}]}}

It looks like you have your heap set to about 250 MB (see heap_max_in_bytes in the stats above). That is generally far too little for Elasticsearch. I would recommend at least doubling it, but ideally setting it even higher; just ensure that at most 50% of the available RAM is assigned to the heap, which with 8 GB of RAM means a heap of at most 4 GB.

Great, thank you!

This heap setting was actually configured inside the docker-compose.yml file of the Docker ELK project (https://github.com/deviantony/docker-elk) as follows:

ES_JAVA_OPTS: "-Xmx256m -Xms256m"

Therefore, I changed the ES_JAVA_OPTS environment variable as follows:

ES_JAVA_OPTS: "-Xmx4g -Xms4g"
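I then recreated the Elasticsearch container so that the new setting is picked up, and checked the new heap limit, roughly as follows (host and credentials are placeholders):

```
# Recreate the container with the new ES_JAVA_OPTS
docker-compose up -d --force-recreate elasticsearch

# Check that the node now reports a ~4 GB heap
curl -s -u elastic:<password> "http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent"
```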

Everything is working well now.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.