Kibana: Unable to connect to Elasticsearch

Greetings,

I've got a Logstash/Kibana/Elasticsearch system set up. Elasticsearch runs as a cluster with 3 nodes (10.1.0.81, 10.1.0.82, 10.1.0.83). All nodes show as healthy when I go to http://10.1.0.81:9200/_plugin/head/, but for some reason when I try to load Kibana I get the following:

Kibana: Unable to connect to Elasticsearch
Error: Unable to connect to Elasticsearch
Error: Bad Gateway
at respond (http://logstashvm01.company.com/index.js?_b=7562:85289:15)
at checkRespForFailure (http://logstashvm01.company.com/index.js?_b=7562:85257:7)
at http://logstashvm01.company.com/index.js?_b=7562:83895:7
at wrappedErrback (http://logstashvm01.company.com/index.js?_b=7562:20902:78)
at wrappedErrback (http://logstashvm01.company.com/index.js?_b=7562:20902:78)
at wrappedErrback (http://logstashvm01.company.com/index.js?_b=7562:20902:78)
at http://logstashvm01.company.com/index.js?_b=7562:21035:76
at Scope.$eval (http://logstashvm01.company.com/index.js?_b=7562:22022:28)
at Scope.$digest (http://logstashvm01.company.com/index.js?_b=7562:21834:31)
at Scope.$apply (http://logstashvm01.company.com/index.js?_b=7562:22126:24)

I also get these in /var/log/kibana.log:

{"name":"Kibana","hostname":"logstashvm01.copmany.com","pid":25888,"level":50,"err":{"message":"connect ECONNREFUSED","name":"Error","stack":"Error: connect ECONNREFUSED\n at errnoException (net.js:905:11)\n at Object.afterConnect [as oncomplete] (net.js:896:19)","code":"ECONNREFUSED"},"msg":"","time":"2015-11-04T22:09:11.181Z","v":0}
{"name":"Kibana","hostname":"logstashvm01.company.com","pid":25888,"level":30,"req":{"method":"GET","url":"/elasticsearch/?=1446674984430","headers":{"connection":"upgrade","host":"logstashvm01.company.com","authorization":"Basic aXRhZG1pbjp2MW55bDF6MzIz","accept":"application/json, text/plain, /","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36","referer":"http://logstashvm01.company.com/","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8"},"remoteAddress":"127.0.0.1","remotePort":55644},"res":{"statusCode":502,"responseTime":3,"contentLength":77},"msg":"GET /?=1446674984430 502 - 3ms","time":"2015-11-04T22:09:11.182Z","v":0}
{"name":"Kibana","hostname":"logstashvm01.company.com","pid":25888,"level":30,"req":{"method":"GET","url":"/bower_components/font-awesome/fonts/fontawesome-webfont.woff?v=4.2.0","headers":{"connection":"upgrade","host":"logstashvm01.company.com","cache-control":"max-age=0","authorization":"Basic aXRhZG1pbjp2MW55bDF6MzIz","if-none-match":"W/"ffac-1913517162"","if-modified-since":"Tue, 15 Sep 2015 00:23:42 GMT","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36","accept":"/","referer":"http://logstashvm01.company.com/","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8"},"remoteAddress":"127.0.0.1","remotePort":55646},"res":{"statusCode":304,"responseTime":0,"contentLength":0},"msg":"GET /bower_components/font-awesome/fonts/fontawesome-webfont.woff?v=4.2.0 304 - 0ms","time":"2015-11-04T22:09:11.201Z","v":0}

Here's the relevant Elasticsearch config on all three nodes:

cluster.name: cluster1
node.name: "logstashvm01"
index.number_of_replicas: 2
network.host: 10.1.0.81
gateway.recover_after_nodes: 2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.1.0.82", "10.1.0.83"]
script.disable_dynamic: true
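
For completeness, the same health information the head plugin shows can be pulled straight from the HTTP API; a quick check along these lines (assuming the default HTTP port 9200):

# Cluster-wide health: status, node count, unassigned shards, etc.
curl 'http://10.1.0.81:9200/_cluster/health?pretty'

# Per-node view, to confirm all three nodes have actually joined the cluster
curl 'http://10.1.0.81:9200/_cat/nodes?v'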

Any help would be greatly appreciated. Please let me know if there's anything else needed to debug this.

Thanks!

Jason.

@jberenson

Can you post your Kibana config file?

As niraj_kumar said, could you show your kibana.yml?
It usually has a section for the connection to Elasticsearch. You have to specify the URL and port of Elasticsearch there; if you don't, Kibana won't be able to connect to it.

My apologies for the late reply. Here's the kibana config:

# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "10.1.0.81"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"

# preserve_elasticsearch_host true will send the hostname specified in elasticsearch. If you set it to false,
# then the host you use to connect to this Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"
#kibana_index: "logstash"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at
# startup. Your Kibana users will still need to authenticate with Elasticsearch
# (which is proxied through the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
# log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index

Your Kibana file looks a bit different from what I expected and is missing a couple of things, such as the default dashboard file, and your verify_ssl is set to true (I don't know if you actually have SSL enabled). Also, the host is bound to just one node instead of to the cluster.

Going forward, how do you access the nodes as a cluster? Say you want to do a POST, PUT or GET: what address (FQDN) do you call the cluster on?
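
One other thing worth checking, since the nodes bind to network.host: 10.1.0.81 while kibana.yml points at http://localhost:9200: Elasticsearch may simply not be listening on localhost on the Kibana box, which would line up with the ECONNREFUSED entries in kibana.log. A quick check from the Kibana host (default ports assumed):

# What Kibana's proxy is effectively trying today
curl -v 'http://localhost:9200/'

# What it would reach if elasticsearch_url pointed at the node address
curl -v 'http://10.1.0.81:9200/'

If the first call is refused and the second returns the usual Elasticsearch banner JSON, pointing elasticsearch_url at http://10.1.0.81:9200 and restarting Kibana should clear the Bad Gateway error.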

I am attaching a file; see if that works for you. This assumes you have Kibana 3.

Remember to back up your existing file first.

/** @scratch /configuration/config.js/1
 *
 * == Configuration
 * config.js is where you will find the core Kibana configuration. This file contains parameter that
 * must be set before kibana is run for the first time.
 */
define(['settings'],
function (Settings) {

  /** @scratch /configuration/config.js/2
   *
   * === Parameters
   */
  return new Settings({
/** @scratch /configuration/config.js/5
 *
 * ==== elasticsearch
 *
 * The URL to your elasticsearch server. You almost certainly don't
 * want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on
 * the same host. By default this will attempt to reach ES at the same host you have
 * kibana installed on. You probably want to set it to the FQDN of your
 * elasticsearch host
 *
 * Note: this can also be an object if you want to pass options to the http client. For example:
 *
 *  +elasticsearch: {server: "http://localhost:9200", withCredentials: true}+
 *
 */
elasticsearch: "https://stage-06-kibana.example.com",

/** @scratch /configuration/config.js/5
 *
 * ==== default_route
 *
 * This is the default landing page when you don't specify a dashboard to load. You can specify
 * files, scripts or saved dashboards here. For example, if you had saved a dashboard called
 * `WebLogs' to elasticsearch you might use:
 *
 * default_route: '/dashboard/elasticsearch/WebLogs',
 */
default_route     : '/dashboard/file/default.json',

/** @scratch /configuration/config.js/5
 *
 * ==== kibana-int
 *
 * The default ES index to use for storing Kibana specific object
 * such as stored dashboards
 */
kibana_index: "kibana-int",

/** @scratch /configuration/config.js/5
 *
 * ==== panel_name
 *
 * An array of panel modules available. Panels will only be loaded when they are defined in the
 * dashboard, but this list is used in the "add panel" interface.
 */
panel_names: [
  'histogram',
  'map',
  'goal',
  'table',
  'filtering',
  'timepicker',
  'text',
  'hits',
  'column',
  'trends',
  'bettermap',
  'query',
  'terms',
  'stats',
  'sparklines'
]

});
});

I upgraded to the latest Kibana and Elasticsearch and things seem to be running now and I can connect. For some reason, though, on the main screen where it asks me to "Configure an index pattern", I enter .logstash but there's nothing available for "Time-field name".

I'm attaching my elasticsearch.yml and kibana.yml config files. Any help would be much appreciated.

Thanks!

Jason.

Ah, you shouldn't enter .logstash!
You should use logstash-* instead.
Logstash writes its indices as logstash-(date), and the asterisk matches all of them.
At least I think that's what your problem is right now :slight_smile:
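
If in doubt, you can also list the indices Logstash has actually created, to confirm which pattern to use (default HTTP port assumed):

# Lists every logstash-* index with its health, doc count and size
curl 'http://10.1.0.81:9200/_cat/indices/logstash-*?v'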

So I've got the following in kibana.yml:

kibana.index: ".logstash-"

And I tried ".logstash-" and "logstash-" in the Kibana interface when creating an index pattern, but it still doesn't give me anything in the drop-down for time-field name, so I can't click Create.

Jason.

Hmm, that's weird. I do want to add that the default setting for kibana.index is commented out, and at that point its value is .kibana.

I never touched it and just added logstash-* in the "index name or pattern" field.
I'm no expert whatsoever in this, so I can only share my personal experiences.
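
If the time-field drop-down stays empty even with logstash-*, it usually means the matched indices don't have a date field mapped yet (or that no logstash-* indices exist at all). That can be checked directly as well, for example:

# Shows whether the logstash-* indices have an @timestamp date field in their mapping
curl 'http://10.1.0.81:9200/logstash-*/_mapping/field/@timestamp?pretty'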

So now I'm getting the following in the Logstash logs:

[root@logstashvm01 ~]# tail -f /var/log/logstash/logstash.log
{:timestamp=>"2015-12-04T22:53:14.495000-0800", :message=>"Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];", :level=>:error}
{:timestamp=>"2015-12-04T22:53:14.495000-0800", :message=>"Failed to flush outgoing items", :outgoing_count=>43, :exception=>"Java::OrgElasticsearchClusterBlock::ClusterBlockException", :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:215)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:67)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:153)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
{:timestamp=>"2015-12-04T22:53:14.629000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:15.130000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:15.630000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:16.131000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:16.631000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:17.132000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:17.632000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:18.133000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:18.505000-0800", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
{:timestamp=>"2015-12-04T22:53:18.506000-0800", :message=>"Lumberjack input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2015-12-04T22:53:18.633000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:19.134000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:19.635000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:20.135000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:20.636000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:21.136000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:21.637000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:22.137000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:22.638000-0800", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-12-04T22:53:23.139000-0800", :message=>"Lumberjack input: the pipeline is