Hi,
I am running Elasticsearch and Kibana on a Kubernetes platform and I am able to access the Kibana dashboard, but an error appears saying "elasticsearch 1.0.0 Service Unavailable". Can you please suggest the cause of this problem?
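The full status message should also show up in the Kibana pod logs; this is how it could be pulled out (the kube-system namespace is an assumption based on the standard logging addon, adjust if needed):
kubectl logs -n kube-system kibana-logging-3636129189-swz70 | grep -i elasticsearch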
I have run a query from the Kibana pod/container to the Elasticsearch container and it works fine. Below is the output.
root@kibana-logging-3636129189-swz70:/# curl -X GET http://elasticsearch-logging:9200
{
"name" : "Caregiver",
"cluster_name" : "kubernetes-logging",
"cluster_uuid" : "na",
"version" : {
"number" : "2.4.1",
"build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16",
"build_timestamp" : "2016-09-27T18:57:55Z",
"build_snapshot" : false,
"lucene_version" : "5.5.2"
},
"tagline" : "You Know, for Search"
}
root@kibana-logging-3636129189-swz70:/#
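Since Kibana reports "Service Unavailable", the cluster health could also be relevant; a check that can be run from the same Kibana pod (just a sketch, I have not included its output here):
curl -X GET 'http://elasticsearch-logging:9200/_cluster/health?pretty'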
Kibana configuration:
root@kibana-logging-3636129189-swz70:/# cat /opt/kibana/config/kibana.yml.org
# Kibana is served by a back end server. This controls which port to use.
server.port: 5601
# The host to bind the server to.
server.host: "0.0.0.0"
# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
server.basePath: ""
# The maximum payload size in bytes on incoming server requests.
server.maxPayloadBytes: 1048576
# The Elasticsearch instance to use for all your queries.
elasticsearch.url: 'http://elasticsearch-logging:9200'
# preserve_elasticsearch_host true will send the hostname specified in elasticsearch. If you set it to false,
# then the host you use to connect to this Kibana instance will be sent.
elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
elasticsearch.username: "user"
elasticsearch.password: "pass"
# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
server.ssl.cert: /path/to/your/server.crt
server.ssl.key: /path/to/your/server.key
# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
elasticsearch.ssl.cert: /path/to/your/client.crt
elasticsearch.ssl.key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
elasticsearch.ssl.ca: /path/to/your/CA.pem
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
elasticsearch.ssl.verify: true
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
elasticsearch.requestTimeout: 30000
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers.
elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
elasticsearch.startupTimeout: 5000
# Set the path to where you would like the process id file to be created.
pid.file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
logging.dest: stdout
# Set this to true to suppress all logging output.
logging.silent: false
# Set this to true to suppress all logging output except for error messages.
logging.quiet: false
# Set this to true to log all events, including system usage information and all requests.
logging.verbose: false
root@kibana-logging-3636129189-swz70:/#
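Since elasticsearch.url points at the elasticsearch-logging service, one thing I can still double-check is that the Service actually has endpoints behind it (the kube-system namespace is an assumption, adjust to wherever the logging stack is deployed):
kubectl get svc,endpoints elasticsearch-logging -n kube-system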
Elasticsearch configuration:
root@elasticsearch-logging-d6rf7:/# cat /elasticsearch/config/elasticsearch.yml
cluster.name: kubernetes-logging
node.master: ${NODE_MASTER}
node.data: ${NODE_DATA}
transport.tcp.port: ${TRANSPORT_PORT}
http.port: ${HTTP_PORT}
path.data: /data
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: ${MINIMUM_MASTER_NODES}
discovery.zen.ping.multicast.enabled: false
root@elasticsearch-logging-d6rf7:/#
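The elasticsearch.yml above relies on environment variables, so another check is that they are actually set in the pod and that HTTP_PORT really is 9200 (the kube-system namespace is again an assumption):
kubectl exec -n kube-system elasticsearch-logging-d6rf7 -- env | grep -E 'NODE_MASTER|NODE_DATA|TRANSPORT_PORT|HTTP_PORT|MINIMUM_MASTER_NODES'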