How to revive connection?

I'm trying to set up Kibana and Elasticsearch in containers, but I keep receiving this error message:
Unable to revive connection: http://client-elasticsearch:9200/

This is what my docker-compose.yml looks like:

services:
    # Elasticsearch
    client-elasticsearch:
        container_name: client-elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
        volumes:
            - esdata:/usr/share/elasticsearch/data
        hostname: client-elasticsearch
        environment:
            - bootstrap.memory_lock=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
            - discovery.type=single-node
        logging:
            driver: none
        ports:
            - 9300:9300
            - 9200:9200
        networks: 
            - client

# Kibana
    kibana:
        container_name: kibana
        image: docker.elastic.co/kibana/kibana:7.4.2
        ports:
            - 5601:5601
        volumes:
            - ./kibana.yml:/usr/share/kibana/config/kibana.yml
        depends_on:
            - client-elasticsearch
        networks: 
            - client
    
volumes:
    esdata:

networks: 
    client:
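
A note on startup order: as far as I know, depends_on only controls the order the containers start in; it does not wait for Elasticsearch to actually be ready, so Kibana's first connection attempts may land while Elasticsearch is still booting. I checked that both services come up with:

docker-compose ps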

And my kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://client-elasticsearch:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.ssl.verification_mode in Elasticsearch is set to either certificate or full.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
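
The only setting I've uncommented is elasticsearch.hosts, so the effective configuration is just this one line:

elasticsearch.hosts: ["http://client-elasticsearch:9200"]

which matches the service name in my docker-compose.yml.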

Why doesn't my Kibana see Elasticsearch?
Is there any additional configuration that needs to be set?

Hi @ishv,

Sorry you're having a hard time getting Kibana to connect to Elasticsearch. Can you supply the full log file that Kibana generates?

Before the Unable to revive connection messages, Kibana will typically log an explanation as to why it can't connect. It can be hard to find sometimes, but the message is usually there.
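
If you're running through Docker Compose, something like this should capture the full output (using the container name from your file):

docker logs kibana > kibana.log 2>&1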

{"type":"log","@timestamp":"2019-11-21T16:02:21Z","tags":["reporting","esqueue","queue-worker","error"],"pid":6,"message":"k38wm4mh00061c4d9516rogr - job querying failed: Error: No Living connections\n at sendReqWithConnection (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:266:15)\n at next (/usr/share/kibana/node_modules/elasticsearch/src/lib/connection_pool.js:243:7)\n at process._tickCallback (internal/process/next_tick.js:61:11)"}

Is that the log you meant?

Could not create APM Agent configuration: No Living connections
kibana | {"type":"log","@timestamp":"2019-11-21T16:02:15Z","tags":["error","elasticsearch","data"],"pid":6,"message":"Request error, retrying\nGET http://client-elasticsearch:9200/_xpack => connect ECONNREFUSED 192.168.208.3:9200"}

{"type":"log","@timestamp":"2019-11-21T16:02:15Z","tags":["error","elasticsearch","admin"],"pid":6,"message":"Request error, retrying\nGET http://client-elasticsearch:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 192.168.208.3:9200"}

Can you connect to Elasticsearch directly through your web browser or curl? What do your Elasticsearch logs look like?
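
For example, from the host:

curl http://localhost:9200

and, to rule out a Docker networking problem, from inside the Kibana container (assuming curl is available in that image):

docker exec -it kibana curl http://client-elasticsearch:9200

The connect ECONNREFUSED lines in your log mean the hostname resolved but nothing was listening on port 9200 at that moment, which often just indicates Elasticsearch was still starting; Kibana retries those. Also note that your compose file sets logging: driver: none on the Elasticsearch service, so docker logs will show nothing for it; you may want to remove that while debugging.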

This is what I see when I call `localhost:9200`:

{
  "name": "client-elasticsearch",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "6UcyOjEuRQmqkoXemVNIzQ",
  "version": {
    "number": "7.4.2",
    "build_flavor": "default",
    "build_type": "docker",
    "build_hash": "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date": "2019-10-28T20:40:44.881551Z",
    "build_snapshot": false,
    "lucene_version": "8.2.0",
    "minimum_wire_compatibility_version": "6.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}

I also received a different log object:

kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:index_management@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:index_lifecycle_management@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:rollup@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:remote_clusters@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:cross_cluster_replication@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:file_upload@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:snapshot_restore@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["info","monitoring","kibana-monitoring"],"pid":6,"message":"Starting monitoring stats collection"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:50Z","tags":["status","plugin:maps@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:55Z","tags":["reporting","warning"],"pid":6,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:55Z","tags":["status","plugin:reporting@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:56Z","tags":["listening","info"],"pid":6,"message":"Server running at http://localhost:5601"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:56Z","tags":["info","http","server","Kibana"],"pid":6,"message":"http server running at http://localhost:5601"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:57Z","tags":["status","plugin:spaces@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:57Z","tags":["warning","telemetry"],"pid":6,"message":"Error scheduling task, received [task:oss_telemetry-vis_telemetry]: version conflict, document already exists (current version [3]): [version_conflict_engine_exception] [task:oss_telemetry-vis_telemetry]: version conflict, document already exists (current version [3]), with { index_uuid=\"Uq6DolVLRji79SA_UveA1w\" & shard=\"0\" & index=\".kibana_task_manager_1\" }"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:10:57Z","tags":["warning","maps"],"pid":6,"message":"Error scheduling telemetry task, received [task:Maps-maps_telemetry]: version conflict, document already exists (current version [3]): [version_conflict_engine_exception] [task:Maps-maps_telemetry]: version conflict, document already exists (current version [3]), with { index_uuid=\"Uq6DolVLRji79SA_UveA1w\" & shard=\"0\" & index=\".kibana_task_manager_1\" }"}

Kibana looks healthy in your latest log file. Is it working now?

No, localhost:5601 just isn't loading; the browser shows 'Page is not working'.

Now I'm receiving the following error:

kibana | {"type":"log","@timestamp":"2019-11-21T17:51:02Z","tags":["warning","telemetry"],"pid":6,"message":"Error scheduling task, received [task:oss_telemetry-vis_telemetry]: version conflict, document already exists (current version [3]): [version_conflict_engine_exception] [task:oss_telemetry-vis_telemetry]: version conflict, document already exists (current version [3]), with { index_uuid=\"Uq6DolVLRji79SA_UveA1w\" & shard=\"0\" & index=\".kibana_task_manager_1\" }"}
kibana | {"type":"log","@timestamp":"2019-11-21T17:51:02Z","tags":["warning","maps"],"pid":6,"message":"Error scheduling telemetry task, received [task:Maps-maps_telemetry]: version conflict, document already exists (current version [3]): [version_conflict_engine_exception] [task:Maps-maps_telemetry]: version conflict, document already exists (current version [3]), with { index_uuid=\"Uq6DolVLRji79SA_UveA1w\" & shard=\"0\" & index=\".kibana_task_manager_1\" }"}

and the funny thing is that, according to this log, Kibana is running at localhost:5601:

kibana                     | {"type":"log","@timestamp":"2019-11-21T17:51:00Z","tags":["reporting","warning"],"pid":6,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:51:00Z","tags":["status","plugin:reporting@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:51:01Z","tags":["listening","info"],"pid":6,"message":"Server running at http://localhost:5601"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:51:01Z","tags":["info","http","server","Kibana"],"pid":6,"message":"http server running at http://localhost:5601"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:51:01Z","tags":["status","plugin:spaces@7.4.2","info"],"pid":6,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:51:02Z","tags":["warning","telemetry"],"pid":6,"message":"Error scheduling task, received [task:oss_telemetry-vis_telemetry]: version conflict, document already exists (current version [3]): [version_conflict_engine_exception] [task:oss_telemetry-vis_telemetry]: version conflict, document already exists (current version [3]), with { index_uuid=\"Uq6DolVLRji79SA_UveA1w\" & shard=\"0\" & index=\".kibana_task_manager_1\" }"}
kibana                     | {"type":"log","@timestamp":"2019-11-21T17:51:02Z","tags":["warning","maps"],"pid":6,"message":"Error scheduling telemetry task, received [task:Maps-maps_telemetry]: version conflict, document already exists (current version [3]): [version_conflict_engine_exception] [task:Maps-maps_telemetry]: version conflict, document already exists (current version [3]), with { index_uuid=\"Uq6DolVLRji79SA_UveA1w\" & shard=\"0\" & index=\".kibana_task_manager_1\" }"}
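
One possible explanation, going by that last log: "Server running at http://localhost:5601" is printed from inside the container, which suggests Kibana is bound to the container's loopback interface, so the published 5601:5601 port has nothing to forward to from the host. If that's the cause, uncommenting server.host in the mounted kibana.yml and binding to a non-loopback address should fix it:

server.host: "0.0.0.0"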

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.