Can't access the UI (ES/Kibana on AWS in a VPC)

Hi! I'm setting up the stack on an Amazon Linux 2 AWS instance (RHEL-based). The instance is in a VPC with a public subnet, an EIP, an internet gateway, and a NAT gateway (the Sensu stack runs on the same host and works perfectly):

-ES installed (elasticsearch.yml):

# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
http.port: 9200

-Kibana installed (kibana.yml):

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
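
To confirm the Kibana server itself is healthy, I can also hit its status API directly on port 5601 (this bypasses nginx, so no basic auth is involved):

    # Kibana's status endpoint returns JSON with the overall and per-plugin state
    curl -s http://localhost:5601/api/status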

-Nginx (reverse proxy config):

server {
        listen 80;
        server_name mydns;
        access_log  /var/log/kibana/proxy.access-kbn.log  main;
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;

        location / {
            proxy_pass http://localhost:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
}
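
To sanity-check the proxy from the host itself, something like this should work (standard nginx and curl commands; the username is whatever is defined in htpasswd.users):

    # validate the nginx config and reload it if the syntax is OK
    sudo nginx -t && sudo systemctl reload nginx
    # request Kibana through the proxy on port 80, authenticating with the basic-auth user;
    # expect a 200/302 from Kibana, or 401 if the credentials are wrong
    curl -s -o /dev/null -w '%{http_code}\n' -u myuser http://localhost/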

But I can't access the UI, and I'm having some other issues with ES. Here are the logs and checks I did:

[myuser@myhost]#curl localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

[myuser@myhost]#curl localhost:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;

I'm not using a firewall. I had that error about the Kibana port being in use, killed the process and started it again, but got the same message; ES and Kibana are up and running, yet I still can't access the UI.
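
For reference, this is roughly how I checked what was holding the port (standard commands; the service name assumes the packaged systemd unit):

    # show which process is listening on 5601 (the PID appears at the end of the line)
    sudo ss -lntp | grep 5601        # or: sudo lsof -i :5601
    # stop the service, kill any leftover process by that PID, then start it again
    sudo systemctl stop kibana
    sudo kill <pid-from-above>       # only needed if something is still listening
    sudo systemctl start kibana
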
Before this, I had an error where Kibana couldn't connect to ES. I found port 9200 listening only on tcp6, so I added:
-Djava.net.preferIPv4Stack=true
to
/etc/elasticsearch/jvm.options

Logstash is installed but stopped for now.

Any advice? I'll keep trying to make it work and will post here if I solve it.
Thanks!

Adding more info:

[myuser@myhost]# tail kibana.stdout
    {"type":"log","@timestamp":"2018-05-02T15:49:40Z","tags":["status","plugin:timelion@6.2.4","info"],"pid":20109,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T15:49:40Z","tags":["status","plugin:console@6.2.4","info"],"pid":20109,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T15:49:40Z","tags":["status","plugin:metrics@6.2.4","info"],"pid":20109,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T15:49:40Z","tags":["fatal"],"pid":20109,"message":"Port 5601 is already in use. Another instance of Kibana may be running!"}


    [myuser@myhost elasticsearch]# cat elasticsearch.log
    [2018-05-02T15:45:21,429][INFO ][o.e.n.Node               ] initialized
    [2018-05-02T15:45:21,430][INFO ][o.e.n.Node               ] [yPx8RNG] starting ...
    [2018-05-02T15:45:24,417][INFO ][o.e.t.TransportService   ] [yPx8RNG] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
    [2018-05-02T15:45:27,906][INFO ][o.e.c.s.MasterService    ] [yPx8RNG] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {yPx8RNG}{yPx8RNGpTnOzN3jW3RKpnA}{Oo0JA0CeR5mmAndyT1F7tw}{localhost}{127.0.0.1:9300}
    [2018-05-02T15:45:27,999][INFO ][o.e.c.s.ClusterApplierService] [yPx8RNG] new_master {yPx8RNG}{yPx8RNGpTnOzN3jW3RKpnA}{Oo0JA0CeR5mmAndyT1F7tw}{localhost}{127.0.0.1:9300}, reason: apply cluster state (from master [master {yPx8RNG}{yPx8RNGpTnOzN3jW3RKpnA}{Oo0JA0CeR5mmAndyT1F7tw}{localhost}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
    [2018-05-02T15:45:28,544][INFO ][o.e.h.n.Netty4HttpServerTransport] [yPx8RNG] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
    [2018-05-02T15:45:28,544][INFO ][o.e.n.Node               ] [yPx8RNG] started
    [2018-05-02T15:45:28,605][INFO ][o.e.g.GatewayService     ] [yPx8RNG] recovered [0] indices into cluster_state
     [myuser@myhost elasticsearch]#

    [myuser@myhost]# netstat -tulpn | grep LISTEN
    tcp        0      0 127.0.0.1:3031          0.0.0.0:*               LISTEN      3521/ruby
    tcp        0      0 0.0.0.0:4567            0.0.0.0:*               LISTEN      3520/ruby
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3417/master
    tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      20628/node
    tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      3158/beam.smp
    tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      3157/redis-server 1
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      21403/nginx: master
    tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      19339/java
    tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      3420/epmd
    tcp        0      0 127.0.0.1:9300          0.0.0.0:*               LISTEN      19339/java
    tcp        0      0 127.0.0.1:3030          0.0.0.0:*               LISTEN      3521/ruby
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3527/sshd
    tcp6       0      0 :::3000                 :::*                    LISTEN      3577/uchiwa
    tcp6       0      0 :::5672                 :::*                    LISTEN      3158/beam.smp
    tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
    tcp6       0      0 :::80                   :::*                    LISTEN      21403/nginx: master
    tcp6       0      0 :::4369                 :::*                    LISTEN      3420/epmd
    tcp6       0      0 :::22                   :::*                    LISTEN      3527/sshd

I removed the Kibana bundles folder (optimize cache) and started it again:
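
Roughly what I ran (assuming the default RPM install path for the optimize cache):

    sudo systemctl stop kibana
    sudo rm -rf /usr/share/kibana/optimize/bundles
    sudo systemctl start kibana      # Kibana rebuilds the bundles on startup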

    {"type":"log","@timestamp":"2018-05-02T16:18:59Z","tags":["info","optimize"],"pid":23042,"message":"Optimizing and caching bundles for stateSessionStorageRedirect, status_page, timelion and kibana. This may take a few minutes"}
    {"type":"log","@timestamp":"2018-05-02T16:21:40Z","tags":["info","optimize"],"pid":23042,"message":"Optimization of bundles for stateSessionStorageRedirect, status_page, timelion and kibana complete in 160.31 seconds"}
    {"type":"log","@timestamp":"2018-05-02T16:21:40Z","tags":["status","plugin:kibana@6.2.4","info"],"pid":23042,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T16:21:40Z","tags":["status","plugin:elasticsearch@6.2.4","info"],"pid":23042,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T16:21:40Z","tags":["status","plugin:timelion@6.2.4","info"],"pid":23042,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T16:21:40Z","tags":["status","plugin:console@6.2.4","info"],"pid":23042,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T16:21:40Z","tags":["status","plugin:metrics@6.2.4","info"],"pid":23042,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2018-05-02T16:21:40Z","tags":["fatal"],"pid":23042,"message":"Port 5601 is already in use. Another instance of Kibana may be running!"}

    ==> kibana.stderr <==
    FATAL Port 5601 is already in use. Another instance of Kibana may be running!

This setting for Elasticsearch (network.host: localhost) means that it is only listening on the loopback interface, and thus no other hosts can connect to it.

The same goes for Kibana (server.host: "localhost").

You can change this to 0.0.0.0 to listen on all interfaces, or set it to a specific IP to limit it to a single interface.
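
For example, something along these lines in the two config files (adjust the address to your setup; note that binding Elasticsearch to a non-loopback address makes it run the production bootstrap checks):

    # /etc/kibana/kibana.yml
    server.host: "0.0.0.0"

    # /etc/elasticsearch/elasticsearch.yml
    network.host: 0.0.0.0

Since your nginx proxies to http://localhost:5601 on the same machine, the other option is to leave Kibana (and Elasticsearch) on localhost and only expose them through the proxy on port 80.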

Thanks, Bill! It's working now. I had misunderstood this; I got mixed up between localhost (since everything is on the same machine) and the nginx rules proxying to Kibana's port.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.