Remote Access Available But Local Access Not Available

Hi Everyone,
I can access Elasticsearch remotely with the PHP client, but I can't access it locally.

    <?php

    ini_set('display_errors', 1);
    ini_set('display_startup_errors', 1);
    error_reporting(E_ALL);
    require 'vendor/autoload.php';

    use Elasticsearch\ClientBuilder;

    $hosts = [
        '10.21.8.146:9200'  // IP + Port
    ];

    $client = ClientBuilder::create()->setHosts($hosts)->build();
    $indices = $client->cat()->indices(array('index' => '*'));
    var_dump($indices);
    print_r($indices[0]);

This code works and returns a result, but this one:

    <?php

    ini_set('display_errors', 1);
    ini_set('display_startup_errors', 1);
    error_reporting(E_ALL);
    require 'vendor/autoload.php';

    use Elasticsearch\ClientBuilder;

    $hosts = [
        'localhost:9200'  // IP + Port
    ];

    $client = ClientBuilder::create()->setHosts($hosts)->build();
    $indices = $client->cat()->indices(array('index' => '*'));
    var_dump($indices);
    print_r($indices[0]);

returns the following error:

`**Fatal error** : Uncaught Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster in /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php:51 Stack trace: #0 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(72): Elasticsearch\ConnectionPool\StaticNoPingConnectionPool->nextConnection() #1 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(90): Elasticsearch\Transport->getConnection() #2 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/Connection.php(256): Elasticsearch\Transport->performRequest('GET', '/_cat/indices/%...', Array, NULL, Array) #3 /var/www/vendor/react/promise/src/FulfilledPromise.php(28): Elasticsearch\Connections\Connection->Elasticsearch\Connections\{closure}(Array) #4 /var/www/vendor/guzzlehttp/ringphp/src/Future/CompletedFutureValue.php(55): React\Promise\FulfilledPromise->then(Object(Closure), NULL, NULL) #5 /var/www/vendor/guzzlehttp/rin in **/var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php** on line **51**`

This is my elasticsearch.yml file:

    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    cluster.name: zamanbaz
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    node.name: zamanbaz-1
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    path.data: /var/lib/elasticsearch
    #
    # Path to log files:
    #
    path.logs: /var/log/elasticsearch
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: 0.0.0.0
    # Set a custom port for HTTP:
    #
    #discovery.seed_hosts: 0.0.0.0

    #transport.tcp.port: 9300
    http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when new node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.zen.ping.unicast.hosts: ["host1", "host2"]
    #
    # Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
    #
    #discovery.zen.minimum_master_nodes:
    #
    # For more information, consult the zen discovery module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    #action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*
    cluster.initial_master_nodes: node-1

This URL returns a success message:
10.21.8.146

> {
>   "name" : "zamanbaz-1",
>   "cluster_name" : "zamanbaz",
>   "cluster_uuid" : "mvGdjQNCQsKnRFUyqQtAvw",
>   "version" : {
>     "number" : "7.12.0",
>     "build_flavor" : "default",
>     "build_type" : "rpm",
>     "build_hash" : "78722783c38caa25a70982b5b042074cde5d3b3a",
>     "build_date" : "2021-03-18T06:17:15.410153305Z",
>     "build_snapshot" : false,
>     "lucene_version" : "8.8.0",
>     "minimum_wire_compatibility_version" : "6.8.0",
>     "minimum_index_compatibility_version" : "6.0.0-beta1"
>   },
>   "tagline" : "You Know, for Search"
> }

The CLI command works, e.g.:

    curl -X GET "http://localhost:9200"

I have tried almost everything, but I cannot get local access with the PHP client.
How can I fix this issue?

Does anybody have any idea about this issue?

Maybe PHP does not know how to resolve localhost to something like 127.0.0.1?
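One way to check that from PHP, independent of Elasticsearch (a minimal sketch):

```php
<?php
// Check how PHP resolves each host we have been trying.
// gethostbyname() returns its argument unchanged when resolution fails.
foreach (['localhost', '127.0.0.1'] as $host) {
    $ip = gethostbyname($host);
    echo "$host -> $ip\n";
}
```

If localhost does not come back as an IP address here, the problem is name resolution inside PHP rather than Elasticsearch.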

Hi @dadoonet,
Sorry for my late answer.
I have tried many different ways, but locally it always returns "no alive nodes found in your cluster". Remote access works correctly.
I also tried curl from PHP, with the same result.

I tried 127.0.0.1, localhost, 0.0.0.0, and the machine's IP address; all of them give the same result, shown below.

This setup runs on CentOS 7.
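Since curl works locally but the PHP client does not, a raw TCP check from PHP, without the Elasticsearch library, can help isolate whether the PHP process itself can reach the port. A minimal diagnostic sketch, using the host and port from this thread:

```php
<?php
// Open a plain TCP connection the same way the client transport would,
// without involving the Elasticsearch library at all.
function can_connect(string $host, int $port, float $timeout = 2.0): bool
{
    $fp = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if ($fp === false) {
        echo "$host:$port -> FAILED ($errno: $errstr)\n";
        return false;
    }
    fclose($fp);
    echo "$host:$port -> OK\n";
    return true;
}

can_connect('127.0.0.1', 9200);
can_connect('10.21.8.146', 9200);
```

If these fail when run through the web server while the same curl commands succeed from a shell, an OS-level policy may be blocking outbound connections from the web server process (on CentOS 7, for instance, SELinux's `httpd_can_network_connect` boolean commonly causes exactly this symptom).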

But you said that it worked with the exact IP, didn't you?

If that's not the case, please explain again from start what is working and what is not.

Yes, it works with the IP, but only from the remote machine, not locally.
This code is on the remote machine (IP address 10.21.8.164):

    <?php

    ini_set('display_errors', 1);
    ini_set('display_startup_errors', 1);
    error_reporting(E_ALL);
    require 'vendor/autoload.php';

    use Elasticsearch\ClientBuilder;

    $hosts = [
        '10.21.8.146:9200'  // IP + Port
    ];

    $client = ClientBuilder::create()->setHosts($hosts)->build();
    $indices = $client->cat()->indices(array('index' => '*'));
    var_dump($indices);
    print_r($indices[0]);

This code runs locally (machine address 10.21.8.146):

    <?php

    ini_set('display_errors', 1);
    ini_set('display_startup_errors', 1);
    error_reporting(E_ALL);
    require 'vendor/autoload.php';

    use Elasticsearch\ClientBuilder;

    $hosts = [
        'localhost:9200'  // IP + Port
    ];

    $client = ClientBuilder::create()->setHosts($hosts)->build();
    $indices = $client->cat()->indices(array('index' => '*'));
    var_dump($indices);
    print_r($indices[0]);

The error returned when running locally is as follows.

`**Fatal error**: Uncaught Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster in /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php:51 Stack trace: #0 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(72): Elasticsearch\ConnectionPool\StaticNoPingConnectionPool->nextConnection() #1 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(90): Elasticsearch\Transport->getConnection() #2 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/Connection.php(256): Elasticsearch\Transport->performRequest('GET', '/_cat/indices/%...', Array, NULL, Array) #3 /var/www/vendor/react/promise/src/FulfilledPromise.php(28): Elasticsearch\Connections\Connection->Elasticsearch\Connections\{closure}(Array) #4 /var/www/vendor/guzzlehttp/ringphp/src/Future/CompletedFutureValue.php(55): React\Promise\FulfilledPromise->then(Object(Closure), NULL, NULL) #5 /var/www/vendor/guzzlehttp/rin in **/var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php** on line **51**`

This is the working remote machine

This is the local machine

In the 2 screen captures, I can see 10.21.8.146 and 10.21.8.164.

So I guess that elasticsearch is running on machine 10.21.8.146. When you try to connect to elasticsearch from 10.21.8.164 using 10.21.8.146:9200, it works well.
But when you try to connect from 10.21.8.146 using 10.21.8.146:9200 it does not work. Am I correct? Or is it not working only when you try to connect from 10.21.8.146 using localhost:9200?

Could you share the full elasticsearch logs (from the start)?

It works when I connect from 10.21.8.164, that's right.

If I try to connect from 10.21.8.146, it does not work.
I tried localhost, 127.0.0.1, and 10.21.8.146; the result is the same.
With network.host: 0.0.0.0 in elasticsearch.yml,
network access works correctly.

This is the new log file:

[2021-04-07T00:00:00,064][INFO ][o.e.c.m.MetadataCreateIndexService] [zamanbaz-1] [zamanbaz-2021.04.07] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2021-04-07T00:00:00,467][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] create_mapping [Loglar]
[2021-04-07T00:00:01,323][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:03,897][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:09,084][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:10,993][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:25,781][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:33,640][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:42,135][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:52,720][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:52,724][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:52,768][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:52,814][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:52,882][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:00:56,853][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:16:36,917][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:21:32,006][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:36:53,618][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T00:42:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [zamanbaz-1] triggering scheduled [ML] maintenance tasks
[2021-04-07T00:42:00,003][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [zamanbaz-1] Deleting expired data
[2021-04-07T00:42:00,009][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [zamanbaz-1] Successfully deleted [0] unused stats documents
[2021-04-07T00:42:00,010][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [zamanbaz-1] Completed deletion of expired ML data
[2021-04-07T00:42:00,010][INFO ][o.e.x.m.MlDailyMaintenanceService] [zamanbaz-1] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2021-04-07T00:46:52,390][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T02:40:27,213][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T04:30:00,001][INFO ][o.e.x.s.SnapshotRetentionTask] [zamanbaz-1] starting SLM retention snapshot cleanup task
[2021-04-07T04:30:00,002][INFO ][o.e.x.s.SnapshotRetentionTask] [zamanbaz-1] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2021-04-07T08:46:43,729][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T08:50:20,983][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T11:02:11,687][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T11:02:11,792][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]
[2021-04-07T11:09:05,823][INFO ][o.e.c.m.MetadataMappingService] [zamanbaz-1] [zamanbaz-2021.04.07/i4SJyWkVRpeJ97s8diKKIw] update_mapping [Loglar]

This is the old file:
http://46.101.186.8/elasticsearch.log
I could not paste the file here; it exceeds the character limit.

I'm missing the last restart if any.

Could you stop the node, clean the logs and start the node again?

You can share your full logs on gist.github.com as well.

All the log files are here -> gc.log · GitHub

I'm surprised by the logs. Did you change the settings since you shared them in the first post?

I just changed network.host and http.port.
Apart from these, I added transport.host and transport.tcp.port and removed the rest.

Could you share the current file?

Yes sure,

cluster.name: zamanbaz
node.name: zamanbaz-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
transport.host: localhost
transport.tcp.port: 9300
http.port: 9200

Thanks. I edited your post to remove all the comments so it's easier to see the changes.

So this line:

transport.host: localhost

produces this log:

[2021-04-07T22:08:29,801][INFO ][o.e.t.TransportService   ] [zamanbaz-1] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}

And this line:

network.host: 0.0.0.0

produces this log:

[2021-04-07T22:08:31,445][INFO ][o.e.h.AbstractHttpServerTransport] [zamanbaz-1] publish_address {10.21.8.146:9200}, bound_addresses {[::]:9200}

To me that means that any request to the local network card, 10.21.8.146:9200 or 127.0.0.1:9200 should work.

Could you ssh into machine 10.21.8.146 and run:

    curl 10.21.8.146:9200
    curl 127.0.0.1:9200

And share the output?

Then, something you could try, though I don't think it will change anything, is to add this to elasticsearch.yml:

http.bind_host: ["_site_", "_local_"]

Or

http.bind_host: ["10.21.8.146", "127.0.0.1"]

I ran this and here is the output.


I ran this and here is the output.

And I added this to the .yml file.

I restarted the service after those outputs.

And here are the browser responses.


Could you share the output of your web browser (not images please, just code-formatted text) when you run your application on machine 10.21.8.146 with the following code:

$hosts = [ '10.21.8.146:9200' ];

and then with:

$hosts = [ '127.0.0.1:9200' ];

Could you also double check that you did not put spaces as you wrote with:

$ hosts = [ '10 .21.8.146: 9200 ' ];

If I use this address:

<?php

    ini_set('display_errors', 1);
    ini_set('display_startup_errors', 1);
    error_reporting(E_ALL);
    require 'vendor/autoload.php';

    use Elasticsearch\ClientBuilder;

    $hosts = ['10.21.8.146:9200'];
    $client = ClientBuilder::create()->setHosts($hosts)->build();
    $indices = $client->cat()->indices(array('index' => '*'));
    var_dump($indices);
    print_r($indices[0]);

gives this error

Fatal error: Uncaught Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster in /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php:64 Stack trace: #0 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(82): Elasticsearch\ConnectionPool\StaticNoPingConnectionPool->nextConnection() #1 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(99): Elasticsearch\Transport->getConnection() #2 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/Connection.php(297): Elasticsearch\Transport->performRequest('GET', '/_cat/indices/%...', Array, NULL, Array) #3 /var/www/vendor/react/promise/src/FulfilledPromise.php(28): Elasticsearch\Connections\Connection->Elasticsearch\Connections\{closure}(Array) #4 /var/www/vendor/ezimuel/ringphp/src/Future/CompletedFutureValue.php(55): React\Promise\FulfilledPromise->then(Object(Closure), NULL, NULL) #5 /var/www/vendor/ezimuel/ringphp/s in /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php on line 64

If I use this address:

<?php

    ini_set('display_errors', 1);
    ini_set('display_startup_errors', 1);
    error_reporting(E_ALL);
    require 'vendor/autoload.php';

    use Elasticsearch\ClientBuilder;

    $hosts = ['127.0.0.1:9200'];
    $client = ClientBuilder::create()->setHosts($hosts)->build();
    $indices = $client->cat()->indices(array('index' => '*'));
    var_dump($indices);
    print_r($indices[0]);

gives this error

Fatal error: Uncaught Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster in /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php:64 Stack trace: #0 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(82): Elasticsearch\ConnectionPool\StaticNoPingConnectionPool->nextConnection() #1 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Transport.php(99): Elasticsearch\Transport->getConnection() #2 /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/Connection.php(297): Elasticsearch\Transport->performRequest('GET', '/_cat/indices/%...', Array, NULL, Array) #3 /var/www/vendor/react/promise/src/FulfilledPromise.php(28): Elasticsearch\Connections\Connection->Elasticsearch\Connections\{closure}(Array) #4 /var/www/vendor/ezimuel/ringphp/src/Future/CompletedFutureValue.php(55): React\Promise\FulfilledPromise->then(Object(Closure), NULL, NULL) #5 /var/www/vendor/ezimuel/ringphp/s in /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php on line 64

And now, if you run this code from machine 10.21.8.164:

<?php

    ini_set('display_errors', 1);
    ini_set('display_startup_errors', 1);
    error_reporting(E_ALL);
    require 'vendor/autoload.php';

    use Elasticsearch\ClientBuilder;

    $hosts = ['10.21.8.146:9200'];
    $client = ClientBuilder::create()->setHosts($hosts)->build();
    $indices = $client->cat()->indices(array('index' => '*'));
    var_dump($indices);
    print_r($indices[0]);

What is happening?

If this exact same code works from machine 10.21.8.164 but not from machine 10.21.8.146 although curl commands are working well from 10.21.8.146, the only thing that we can tell from that is that:

  • Elasticsearch is working well, as it is accessible from both machines using the exact IP or localhost.
  • Calling Elasticsearch from PHP works when the code is run remotely.
  • Calling Elasticsearch from PHP does not work with the exact same code when it is run locally.

From that we can say that there is a problem with the PHP server, code, or environment.
I'm not a PHP expert, so I can't help much more, but maybe you could try something that does not use the Elasticsearch client at all: a simple HTTP GET call to 10.21.8.146:9200/ to see if that works. Something like:

<?php

  $json = file_get_contents("http://10.21.8.146:9200/");
  print_r($json);
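If file_get_contents() also fails, the underlying reason can be surfaced instead of a silent warning (a sketch; the timeout value is arbitrary, and allow_url_fopen is assumed to be enabled):

```php
<?php
// Same root-endpoint request, but report why it failed if it does.
$ctx  = stream_context_create(['http' => ['timeout' => 3]]);
$json = @file_get_contents('http://10.21.8.146:9200/', false, $ctx);

if ($json === false) {
    $err = error_get_last();
    echo 'Request failed: ' . ($err['message'] ?? 'unknown error') . PHP_EOL;
} else {
    print_r($json);
}
```

The error message here would distinguish a connection refusal, a timeout, or a permission denial, which the bare NoNodesAvailableException from the client hides.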