Hi,
When I run the command:
root@elasticsearch:~# curl --noproxy "*" http://127.0.0.1:9200
I get the following error:
curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused
Same thing with:
root@elasticsearch:~# curl --noproxy "*" http://localhost:9200
et
root@elasticsearch:~# curl --noproxy "*" http://10.50.40.184:9200
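If it helps, I can also post the state of the service itself. The machine runs systemd (standard package install), so I assume the unit is simply called elasticsearch:

systemctl status elasticsearch
journalctl -u elasticsearch -n 50 --no-pager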
My elasticsearch.yml:
> # ======================== Elasticsearch Configuration =========================
> #
> # NOTE: Elasticsearch comes with reasonable defaults for most settings.
> # Before you set out to tweak and tune the configuration, make sure you
> # understand what are you trying to accomplish and the consequences.
> #
> # The primary way of configuring a node is via this file. This template lists
> # the most important settings you may want to configure for a production cluster.
> #
> # Please consult the documentation for further information on configuration options:
> # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
> #
> # ---------------------------------- Cluster -----------------------------------
> #
> # Use a descriptive name for your cluster:
> #
> cluster.name: mon_cluster
> #
> # ------------------------------------ Node ------------------------------------
> #
> # Use a descriptive name for the node:
> #
> node.name: mon_node-1
> #
> # Add custom attributes to the node:
> #
> #node.attr.rack: r1
> #
> # ----------------------------------- Paths ------------------------------------
> #
> # Path to directory where to store the data (separate multiple locations by comma):
> #
> path.data: /var/lib/elasticsearch
> #
> # Path to log files:
> #
> path.logs: /var/log/elasticsearch
> #
> # ----------------------------------- Memory -----------------------------------
> #
> # Lock the memory on startup:
> #
> #bootstrap.memory_lock: true
> #
> # Make sure that the heap size is set to about half the memory available
> # on the system and that the owner of the process is allowed to use this
> # limit.
> #
> # Elasticsearch performs poorly when the system is swapping the memory.
> #
> # ---------------------------------- Network -----------------------------------
> #
> # Set the bind address to a specific IP (IPv4 or IPv6):
> #
> network.host: 0.0.0.0
> #
> # Set a custom port for HTTP:
> #
> http.port: 9200
> #
> # For more information, consult the network module documentation.
> #
> # --------------------------------- Discovery ----------------------------------
> #
> # Pass an initial list of hosts to perform discovery when this node is started:
> # The default list of hosts is ["127.0.0.1", "[::1]"]
> #
> #discovery.seed_hosts: ["host1", "host2"]
> #
> # Bootstrap the cluster using an initial set of master-eligible nodes:
> #
> #cluster.initial_master_nodes: ["node-1", "node-2"]
> #
> # For more information, consult the discovery and cluster formation module documentation.
> #
> # ---------------------------------- Gateway -----------------------------------
> #
> # Block initial recovery after a full cluster restart until N nodes are started:
> #
> #gateway.recover_after_nodes: 3
> #
> # For more information, consult the gateway module documentation.
> #
> # ---------------------------------- Various -----------------------------------
> #
> # Require explicit names when deleting indices:
> #
> #action.destructive_requires_name: true
>
>
> network:
> host: 0.0.0.0
> http:
> port: 9200
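I also notice that the file ends with a network:/http: block that repeats network.host and http.port already set higher up; I don't know whether that duplication can cause a problem.
The logs go to /var/log/elasticsearch (path.logs above). If needed I can paste the end of the log file, which I assume is named after cluster.name:

tail -n 100 /var/log/elasticsearch/mon_cluster.log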
The command
netstat -natp
returns:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 673/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1287/sshd
tcp 0 0 10.50.40.184:5601 0.0.0.0:* LISTEN 1748/node
tcp 0 64 10.50.40.184:22 10.50.40.194:53837 ESTABLISHED 1539/sshd: elastics
tcp6 0 0 :::22 :::* LISTEN 1287/sshd
As you can see, 10.50.40.184:5601 (which corresponds to Kibana) is being listened on, but there is nothing listening on 10.50.40.184:9200.
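To rule out the obvious, I can also check whether an Elasticsearch (java) process is running at all and post the result; I assume something like this would do it:

pgrep -af elasticsearch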
Note that I have already looked through many forums without finding a suitable answer.
In short, if you have a solution, I'd be glad to hear it.
Busan