All-in-one on Ubuntu 64-bit Raspberry Pi 4 (4GB) fails at elastic-operator

I have been looking into this for some time. I set up an ELK stack that worked quite well on my MacBook Pro, using the Kubernetes instance that ships with Docker Desktop. After applying the all-in-one YAML and launching my own file, it worked: it was ingesting data and I was making effective use of the stack. I was happy! So I thought I'd hand it off to a different machine.

I decided that since it works, I wanted to put it on a Raspberry Pi. That way I'd have a static, always-connected system running while I work on understanding the requirements of building out a cluster.

The machine in question:
Hardware: Raspberry Pi 4 with 4GB RAM, 32GB SD card.
OS: Ubuntu (64-bit)
Software: I installed kubeadm and kubectl to get Kubernetes running, then applied the all-in-one YAML before running my Elastic manifests.
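
For reference, applying the all-in-one was just the standard step from the ECK docs, something like this (the exact version number may have differed):

kubectl apply -f https://download.elastic.co/downloads/eck/1.2.1/all-in-one.yaml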

Issue:
The elastic-operator, part of the all-in-one file, never initializes. Thinking it might just come up eventually, I applied my YAML definitions anyway and noticed that Elasticsearch and Kibana never get a status. From fresh installs and other demos, I've noticed that Elasticsearch and Kibana don't start booting until after the elastic-operator is set up.

I have been trying this for weeks, thinking it might just take a few days or something. Then I started looking at the logs and describing the elastic-operator to see what is actually going on, and I got an error similar to one I'd hit with some Postgres containers: an exec error. That made me think the base elastic-operator container is not built for arm64.
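
Roughly the commands I was using to dig in (the all-in-one file installs the operator into the elastic-system namespace; the pod name assumes its default StatefulSet):

kubectl -n elastic-system describe pod elastic-operator-0
kubectl -n elastic-system logs statefulset/elastic-operator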

From what I've been reading, Elastic intends to ship ARM builds at some point, since they want the product in the hands of users who consume it in various formats.

I have been able to reproduce this on all of my Pi machines, to the point where my pod YAML isn't even needed: just starting a local cluster on the Pi and applying the all-in-one is enough. The same steps work on the MacBook Pro (it takes 10 minutes or so to initialize, but that's a setup I already knew worked). So the issue I was isolating seemed to be related to the Pis and, most likely, ARM64 support.

Am I doing something wrong? Should I be using something else when setting up my own ELK Kubernetes cluster?

The ECK operator does not support ARM currently. We are looking into changing that in a future release (see https://github.com/elastic/cloud-on-k8s/issues/3504). But for now you are out of luck unless you feel comfortable building your own experimental ARM version (see the issue I mentioned).

I'll take a look. I don't mind using buildx if that's what you're using, since it's what I've had to use myself.
Thank you.

@pebrc I don't mind building an ARM version through buildx, etc.; I'm just not sure how one would go about applying it to the Kubernetes cluster for use, if you know what I mean.

I made an initial attempt with buildx, but at the end it tried to push the image somewhere, so I wasn't sure where I should store it so that kubectl apply would recognize it as part of the eck.yaml definitions.

I looked things up and made the changes from your linked issue, and it seemed to go well, except for not knowing the registry info and how to get the image into Kubernetes itself.

You have to push the image to a container registry you have access to. It can be something you host yourself or a cloud-based one like Docker Hub, Codefresh, GitHub, etc. Assuming you sign up to Docker Hub with the username fallenreaper, the value for REGISTRY would be docker.io and REGISTRY_NAMESPACE would be fallenreaper.
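
If you are driving buildx directly rather than going through the Makefile, the build-and-push step looks something like this (the image name and tag here are just an illustration):

docker login docker.io
docker buildx build --platform linux/amd64,linux/arm64 -t docker.io/fallenreaper/eck-operator:latest --push .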

You can then edit the all-in-one.yaml file and replace the operator image with the one you built. Alternatively, you can run make again as follows to generate an all-in-one.yaml with the correct settings. (The file will be generated in the config directory.)

make generate-all-in-one REGISTRY=docker.io REGISTRY_NAMESPACE=fallenreaper
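
Then apply the generated file to your cluster, e.g. (path relative to the repository root):

kubectl apply -f config/all-in-one.yaml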

Not sure if you have further insight, but I created a new builder, built the image, and pushed it to docker.io based on the above information. I then generated the all-in-one as you described and ran it on my machine.

Looking through it, I am still getting CrashLoopBackOff for the elastic-operator pod. Describing the pod gives:

Back-off restarting failed container

I looked at the logs of the elastic-operator pod, and it states:

standard_init_linux.go:211: exec user process caused "exec format error"

I wasn't sure whether I should be building this on a Raspberry Pi, since I ran the build scripts on my Mac. I was thinking buildx handles the arm64 part (it seems to), as I do this with other images. Not quite sure what else is going on in that regard; I'm sort of sitting on my hands with it at the moment.

It seems like maybe something else is going on, but I am trying to debug to see if I can gain additional insights.

Sounds like the binary was not cross-compiled for ARM. Was the patch applied correctly? The Makefile had GOARCH hardcoded to amd64, and that needs to be removed.

You can check whether the ARM image was created by running docker buildx imagetools inspect <your_image_name>. It sounds like it was, given that you were able to pull it on the RPi, but it's worth double-checking anyway.

I did this and it showed two manifests, one for linux/amd64 and one for linux/arm64. So I think it's doing the right thing when creating the image itself; it just isn't giving me the desired results.
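
One more check I might try: pulling the binary out of the arm64 image and seeing what file reports. (The /elastic-operator path inside the image is a guess on my part, and --platform on pull/create may need a fairly recent Docker.)

docker pull --platform linux/arm64 <your_image_name>
docker create --platform linux/arm64 --name eck-check <your_image_name>
docker cp eck-check:/elastic-operator ./elastic-operator
docker rm eck-check
file ./elastic-operator    # should report ARM aarch64, not x86-64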

Is adding ARM to the build process planned for a future sprint? If so, I don't mind shelving my work for now, though it does block my ELK dev work, since my needs are tied to ARM machines.

I'm trying to think of an alternative way to set up and configure ELK in Kubernetes while leveraging five ARM64 Raspberry Pis.

As an alternative, I could put Elasticsearch by itself on these machines, but the operator issue still exists with current versions.

I think the issue is that the binary is not being compiled properly, possibly due to a minor problem with the Dockerfile. Make sure that you have the --platform argument in the FROM clause and have removed GOARCH=amd64 from the build command.
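
For reference, the relevant pattern looks roughly like this once patched; the Go version, paths, and package name here are illustrative rather than the exact ECK build:

FROM --platform=$BUILDPLATFORM golang:1.14 as builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /go/src/github.com/elastic/cloud-on-k8s
COPY . .
# buildx supplies TARGETOS/TARGETARCH per target platform; no hardcoded GOARCH=amd64
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o elastic-operator ./cmd

FROM ubuntu:18.04
COPY --from=builder /go/src/github.com/elastic/cloud-on-k8s/elastic-operator /elastic-operator
ENTRYPOINT ["/elastic-operator"]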

This issue is on our radar, but there's no specific timeline for when an official ARM image will be published.

You can try using the Elastic Helm Charts as an alternative way of deploying the Elastic Stack on Kubernetes.
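
If you go that route, the usual pattern is something like the following (note these charts deploy the same official Elastic images, so ARM availability is still something to verify):

helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana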

I'll take a look into the Helm charts.

I think it ultimately comes down to this: it's far easier for me to set up Kubernetes on a bunch of Pis and add them to the network than to manually log into each one, install Elasticsearch, and add it to an existing cluster. That's without even mentioning power outages and how it would behave then.

It just seems odd that I can create a CentOS Docker container, probably port that to Kubernetes, and then work on building out a network and expanding it, as opposed to using ECK or something.

It seems there are ways to set up single-node options at least. I think something is just missing: Kubernetes works for the simpler cases, but when it comes to scale, even the website's default setup fails.

Either way, thank you for your help.

I see GitHub shows the Helm builds are failing.

Hi... I got it running on a 3B+, but in a very limited fashion. Moving it to a 4GB Pi 4 made a big difference to performance; you really need the extra memory. Google for installation methods, as that is how I found working instructions.

You can't run the latest version of the stack. I believe it only went 64-bit at some fairly recent point. I'm running version 5.6.15 of the stack.