On Tuesday, April 28, 2015 at 11:31:05 PM UTC+2, Satnam Singh wrote:
Hello,
I've upgraded to Elasticsearch 1.5.2 and Kibana 4.0.2, which I am deploying
in a Kubernetes (http://kubernetes.io/) cluster.
Specifically, I am running Elasticsearch in one "pod" (a Kubernetes
container with its own IP) and Kibana in another pod (again with a
distinct IP address).
The Kubernetes cluster runs a DNS service which maps the name
"elasticsearch-logging.default" to the pod running Elasticsearch, which
serves on port 9200.
This works fine: I can exec into any Docker container in a pod, run
"curl http://elasticsearch-logging.default:9200", and the right thing
happens.
I've configured Kibana to let it know where Elasticsearch is running:
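For Kibana 4 this is a single setting in kibana.yml; a minimal sketch (the `host`/`port` values shown are the Kibana 4 defaults, included here for context):

```yaml
# kibana.yml -- point Kibana at the in-cluster DNS name for Elasticsearch.
# The URL matches the DNS name described above.
elasticsearch_url: "http://elasticsearch-logging.default:9200"

# Kibana itself serves on this port (the Kibana 4 default).
port: 5601
host: "0.0.0.0"
```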
Since I want to access Kibana from outside the cluster, I use a proxy
running on the master node of the cluster (after adding certificates to my
browser for the SSL connection), e.g.
On Wednesday, April 29, 2015 at 5:59:51 AM UTC-7, Nils Dijk wrote:
Hi,
Have you tried defining a Kubernetes service for Kibana? You can add a
public IP of one of your minions to this service so that you can reach it
more easily from outside the cluster.
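A sketch of such a service using the pre-1.0 publicIPs field (the apiVersion, the kibana-logging name/selector, and the placeholder IP are illustrative assumptions; check the field names against your cluster's API version):

```yaml
kind: Service
apiVersion: v1beta3
metadata:
  name: kibana-logging
spec:
  selector:
    name: kibana-logging    # assumed pod label on the Kibana pod
  ports:
    - port: 5601            # Kibana 4 default port
      targetPort: 5601
  publicIPs:
    - <minion-ip>           # a reachable IP of one of the minions
```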
As for exposing an IP of the minion -- I don't want to do this because,
from a conceptual viewpoint, it is not the right thing to do (access to
Kibana via the service proxy should work, and the Kibana server should be
able to reach Elasticsearch via DNS) -- and not all clouds allow the IPs of
minions to be exposed in this way.
I will experiment with a reverse proxy running in a different container in
the same pod, which can terminate the SSL connection from the browser on
port :80 and then proxy the requests as plain HTTP calls to port :5601,
which is served by the container running Kibana.
However, I feel that (in this situation) I should not need a reverse proxy,
and I worry that there is a bug somewhere in Kibana.
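Such a sidecar could be sketched with nginx (the certificate paths are placeholders, not from the actual setup):

```nginx
# Sidecar nginx.conf sketch: terminate TLS from the browser and forward
# plain HTTP to the Kibana container on localhost:5601 (containers in the
# same pod share a network namespace). Paths below are illustrative.
server {
    listen 80 ssl;
    ssl_certificate     /etc/nginx/ssl/kibana.crt;
    ssl_certificate_key /etc/nginx/ssl/kibana.key;

    location / {
        proxy_pass http://localhost:5601;
        proxy_set_header Host $host;
    }
}
```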
Cheers,
Satnam
Well, after adding an external load balancer to my Kibana Kubernetes
service and using that to access the Kibana dashboard, I see that
everything works as intended. So it is definitely a Kubernetes issue.
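On the pre-1.0 API this was a boolean on the service spec; a sketch (the field name, apiVersion, and selector are assumptions from that era and may differ in your cluster):

```yaml
kind: Service
apiVersion: v1beta3
metadata:
  name: kibana-logging
spec:
  selector:
    name: kibana-logging            # assumed pod label
  ports:
    - port: 5601
      targetPort: 5601
  createExternalLoadBalancer: true  # ask the cloud provider for an external LB
```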