Pi 4 with 4 GB seems to give me a cyclic heap error preventing ingestion

Hey all, first post. I put this on Stack Overflow (https://stackoverflow.com/questions/59740792/monitoring-indexing-stats-gives-me-a-current-indexing-set-to-0-i-think-somethin), but I wanted to summarize it here at the recommendation of a colleague of yours.

Issue: I noticed that Elasticsearch stops ingesting, and observing /var/log/elasticsearch/gc.log, I saw a cyclic issue:

Concurrent Sweep
Concurrent Sweep 76.256ms
User=0.07s Sys=0.00s Real=0.08s
Concurrent Reset
Concurrent Reset 21.102ms
User=0.02s Sys=0.00s Real=0.02s
Old: 1963216k->1960141k(2031616k)
Application time: 2.0972808 seconds
Entering safepoint region: CMS_Initial_Mark
Pause Initial Mark
Pause Initial Mark 1955M->1955M(2041M) 42.177ms
User=0.14s Sys=0.02s Real=0.02s
Leaving safepoint region
Total time for which application threads were stopped: 0.0439385 seconds, Stopping threads took 0.0001746 seconds
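The "Old:" line above is what worries me: the old generation is nearly full, and each CMS cycle reclaims almost nothing, so the collector just keeps cycling. A quick sanity check of the numbers (a minimal sketch, assuming the `used_before->used_after(capacity)` format shown in my gc.log):

```python
import re

# Parse an "Old: used_before->used_after(capacity)" line from gc.log.
line = "Old: 1963216k->1960141k(2031616k)"
match = re.search(r"Old: (\d+)k->(\d+)k\((\d+)k\)", line)
before, after, capacity = (int(g) for g in match.groups())

reclaimed_kb = before - after
occupancy = after / capacity

print(f"reclaimed: {reclaimed_kb} KB")        # reclaimed: 3075 KB (only ~3 MB freed)
print(f"old gen occupancy: {occupancy:.1%}")  # old gen occupancy: 96.5%
```

So each concurrent cycle frees about 3 MB out of a ~2 GB old generation that stays ~96.5% full, which looks like classic heap exhaustion rather than a one-off GC hiccup.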

Statistics:
I have a Raspberry Pi 4 with 4 GB of RAM. It runs both Logstash and Elasticsearch. Logstash ingests from an HDD connected over USB 3.0, and I created the Elasticsearch database on that HDD as well. At first I thought the issue was thrashing-related, but I don't think that is the issue at all.

Setting up the Pi was similar to following this how-to: https://medium.com/hepsiburadatech/setup-elasticsearch-7-x-cluster-on-raspberry-pi-asus-tinker-board-6a307851c801

I noticed that ingestion stops, and that there is an issue with the Java heap, as shown in the gc.log excerpt above. Logstash is set up with its default 1 GB heap, and Elasticsearch with its default 2 GB. Logstash itself is running correctly according to its logs.
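For reference, these are the heap lines in the two jvm.options files as they are on this box (paths assuming a standard .deb install):

```
# /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g

# /etc/logstash/jvm.options
-Xms1g
-Xmx1g
```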

Ideally, I wanted to keep this to a single Pi: it is a combined master/data node that references the HDD for the Elasticsearch database.

I have been looking into this for the better part of a week, but I'm still lost. While I am considering adding more nodes to this one-node cluster, I suspect that would just be kicking the can down the road, so I'm curious about what is actually going on.

Has anyone else seen this issue?

I was thinking that this might be a hardware limitation. A Pi is cool for a PoC but not really good for a dev or prod environment. That said, since it is a 4 GB machine, I figured 1 GB + 2 GB + system space was more than sufficient. When I stopped Logstash, the issue was still there, which rules out Logstash itself.

Ideas that cropped up: hardware limitations; soft errors (even though the stated minimum RAM is 2 GB, it may actually need more to function); or an issue with Elasticsearch itself, as I don't really see why it should be creating this cyclic behavior.
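To put rough numbers on the "soft error" idea: a JVM uses noticeably more RAM than its -Xmx (metaspace, thread stacks, direct buffers), so two JVMs plus the OS on a 4 GB board is tighter than "1 GB + 2 GB + system space" sounds. A back-of-the-envelope budget, where the per-JVM overhead and OS figures are my assumptions, not measurements:

```python
# Rough memory budget for the Pi (all figures in MB).
TOTAL_RAM = 4096

es_heap = 2048         # Elasticsearch -Xmx2g
ls_heap = 1024         # Logstash -Xmx1g
jvm_overhead = 512     # assumed off-heap cost per JVM (metaspace, stacks, buffers)
os_and_services = 512  # assumed OS + kernel + other services minimum

used = es_heap + ls_heap + 2 * jvm_overhead + os_and_services
print(f"budgeted: {used} MB of {TOTAL_RAM} MB")       # budgeted: 4608 MB of 4096 MB
print("over budget" if used > TOTAL_RAM else "fits")  # over budget
```

Under those (admittedly guessed) overhead figures the box is already overcommitted before the page cache gets anything, which would fit the symptoms.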

If needed, we can discuss this on Twitter for a faster turnaround, and if we find a solution, we can document it all properly here. My handle is @Fallenreaper.

Thanks,
Will

Edit: I did notice that there isn't really a ready-made ELK stack for the Pi, so it requires a bit of extra configuration. Right now I have ingested 1.2B documents, but I have at least 100B more to create. At this point I am trying to understand why the heap error occurred and how to overcome it.

For the record, this is what I see when starting the service:

[elasticsearch] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[elasticsearch] using discovery type [zen] and seed hosts providers [settings]
[elasticsearch] initialized
[elasticsearch] starting ...
[elasticsearch] Failed to find a usable hardware address from the network interfaces; using random bytes: 3a:78:3a:f4:28:73:95:fe
[elasticsearch] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[elasticsearch] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[elasticsearch] the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
[elasticsearch] cluster UUID [some numbers]
[elasticsearch] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[elasticsearch] elected-as-master ([1] nodes joined)[{elasticsearch}{_STUFF}{MORE_STUFF}{127.0.0.1}{127.0.0.1:9300}{dim}{xpack.installed=true}]}, term: 12, version: 68, reason: Publication{term=12, version=68}
[elasticsearch] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[elasticsearch] started
[elasticsearch] license [stuff] mode [basic] - valid
[elasticsearch] Active license is now [BASIC]; Security is disabled
[elasticsearch] recovered [1] indices into cluster_state
[elasticsearch] Cluster health status changed from [RED] to [YELLOW](reason: [shards started [elasticsearch[0]] ...]).
