Hi, I'm updating Kibana due to the security advisory posted yesterday.
After updating, Canvas disappeared (maybe there could somehow be better feedback when this is going to happen). After removing the plugin and attempting to reinstall it, I get the following:
NODE_OPTIONS="--max-old-space-size=4096" ./bin/kibana-plugin install https://download.elastic.co/kibana/canvas/kibana-canvas-0.1.2200.zip
Plugin installation was unsuccessful due to error "Incorrect Kibana version in plugin [canvas]. Expected [6.4.3]; found [6.4.2]"
BUT then it hangs around here: "Optimizing and caching browser bundles..."
{"type":"log","@timestamp":"2018-11-08T22:12:39Z","tags":["info","optimize"],"pid":3085,"message":"Optimizing and caching bundles for ml, stateSessionStorageRedirect, status_page, timelion, graph, monitoring, login, logout, dashboardViewer, apm, canvas and kibana. This may take a few minutes"}
{"type":"log","@timestamp":"2018-11-08T22:16:04Z","tags":["info","optimize"],"pid":3085,"message":"Optimization of bundles for ml, stateSessionStorageRedirect, status_page, timelion, graph, monitoring, login, logout, dashboardViewer, apm, canvas and kibana complete in 205.80 seconds"}
{"type":"log","@timestamp":"2018-11-08T22:16:04Z","tags":["info"],"pid":3085,"message":"Plugin initialization disabled."}
Maybe this "Optimizing and caching browser bundles..." step just takes a long time?
Yes, yes it does. It also takes a lot of resources. You probably won't have luck running it on a system with less than 4 GB of RAM.
That said, since you're building containers, once you install the plugin and run it once, you don't need to run it again. Do be careful if you change the kibana.yml file (via a volume, for example), since enabling/disabling plugins and changing some of the other settings cause a re-optimize. That will fail on resource-constrained machines/environments, and it will happen every time you start that container.
If you want to customize things in kibana.yml, your best bet is doing that first, then installing plugins and letting the optimize step finish. Once it does, you've got a Kibana container that works how you want and doesn't have that 4 GB RAM requirement.
Here's an example multi-stage Dockerfile I recently put together that you may find helpful.
FROM ubuntu:18.04 as builder
# update apt, install dependencies
RUN apt-get -y update \
&& apt-get -y install wget
# fetch and unpack kibana, move path to "kibana"
WORKDIR /build
ARG KIBANA_VERSION=6.4.3
RUN wget -q "https://artifacts.elastic.co/downloads/kibana/kibana-${KIBANA_VERSION}-linux-x86_64.tar.gz" \
&& tar zxf "kibana-${KIBANA_VERSION}-linux-x86_64.tar.gz" && mv kibana-${KIBANA_VERSION}-linux-x86_64 kibana
# copy kibana.yml file
WORKDIR /build/kibana
COPY ./kibana.yml ./config/kibana.yml
# install canvas plugin
ARG CANVAS_VERSION=0.1.2201
RUN NODE_OPTIONS="--max-old-space-size=4096" ./bin/kibana-plugin install "https://download.elastic.co/kibana/canvas/kibana-canvas-${CANVAS_VERSION}.zip"
# create new container for smaller image
FROM ubuntu:18.04
WORKDIR /app
COPY --from=builder /build/kibana .
# run kibana
EXPOSE 5601
VOLUME /app/config
ENTRYPOINT ["bin/kibana"]
Just create a kibana.yml file next to the Dockerfile and build. You can docker cp that from docker.elastic.co/kibana/kibana:6.4.3, or just grab it from the repo.
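If you just want something minimal to start from instead of pulling the default file, a kibana.yml along these lines should work for 6.4.x (the `elasticsearch` hostname here is an assumption — point it at wherever your cluster actually lives):

```yaml
# Minimal kibana.yml sketch for 6.4.x -- adjust values for your environment
server.host: "0.0.0.0"   # listen on all interfaces inside the container
server.port: 5601
elasticsearch.url: "http://elasticsearch:9200"   # hostname is an assumption
```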
You might be able to get away with an Alpine build or some other smaller image for the second stage, I haven't tried, Ubuntu was small enough for me.
Ouch! It takes 5-10 minutes for me on an older MacBook Pro, so it sounds like you've got yourself a pretty resource-constrained system. Glad it worked out though, and that the Dockerfile was helpful.