Can't see logs in Kibana

I set up Elasticsearch and Kibana on Rocky Linux a few months ago for a workshop. Everything worked fine then. I shut down the server after the workshop and restarted it last week.
Now I don't see any logs appearing in the Kibana dashboard.

I have not made any changes to any config files. The public IP address of the server has not changed. I can see that elasticsearch, kibana and filebeat are all running fine.
What could have gone wrong? Where do I start looking?

Contents of /var/log/filebeat/

filebeat-20230418-1.ndjson

{"log.level":"info","@timestamp":"2023-04-18T19:56:43.124Z","log.origin":{"file.name":"instance/beat.go","file.line":724},"message":"Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-04-18T19:56:43.124Z","log.origin":{"file.name":"instance/beat.go","file.line":732},"message":"Beat ID: c06fa241-4682-4624-bc78-45475e7cbd3c","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2023-04-18T19:56:43.607Z","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":97},"message":"error when check request status for getting IMDSv2 token: http request status 404. No token in the metadata request will be used.","service.name":"filebeat","ecs.version":"1.6.0"}

filebeat-20230418.ndjson

{"log.level":"info","@timestamp":"2023-04-18T19:58:00.360Z","log.origin":{"file.name":"instance/beat.go","file.line":724},"message":"Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-04-18T19:58:00.361Z","log.origin":{"file.name":"instance/beat.go","file.line":732},"message":"Beat ID: c06fa241-4682-4624-bc78-45475e7cbd3c","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2023-04-18T19:58:00.768Z","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":97},"message":"error when check request status for getting IMDSv2 token: http request status 404. No token in the metadata request will be used.","service.name":"filebeat","ecs.version":"1.6.0"}

In Kibana Dev Tools, run the following and share the output:

GET _cat/health?v 
GET _cluster/health?v
GET _cat/nodes/?v=true&h=name,du,dt,dup,hp,rp,r

I get this error when running GET _cat/health?v

Can't connect to _cat:80 (Name or service not known)

Name or service not known at /usr/share/perl5/vendor_perl/LWP/Protocol/http.pm line 50.

Here's the content of /usr/share/perl5/vendor_perl/LWP/Protocol/http.pm

      1 package LWP::Protocol::http;
      2
      3 use strict;
      4
      5 our $VERSION = '6.34';
      6
      7 require HTTP::Response;
      8 require HTTP::Status;
      9 require Net::HTTP;
     10
     11 use base qw(LWP::Protocol);
     12
     13 our @EXTRA_SOCK_OPTS;
     14 my $CRLF = "\015\012";
     15
     16 sub _new_socket
     17 {
     18     my($self, $host, $port, $timeout) = @_;
     19
     20     # IPv6 literal IP address should be [bracketed] to remove
     21     # ambiguity between ip address and port number.
     22     if ( ($host =~ /:/) && ($host !~ /^\[/) ) {
     23       $host = "[$host]";
     24     }
     25
     26     local($^W) = 0;  # IO::Socket::INET can be noisy
     27     my $sock = $self->socket_class->new(PeerAddr => $host,
     28                                         PeerPort => $port,
     29                                         LocalAddr => $self->{ua}{local_address},
     30                                         Proto    => 'tcp',
     31                                         Timeout  => $timeout,
     32                                         KeepAlive => !!$self->{ua}{conn_cache},
     33                                         SendTE    => $self->{ua}{send_te},
     34                                         $self->_extra_sock_opts($host, $port),
     35                                        );
     36
     37     unless ($sock) {
     38         # IO::Socket::INET leaves additional error messages in $@
     39         my $status = "Can't connect to $host:$port";
     40         if ($@ =~ /\bconnect: (.*)/ ||
     41             $@ =~ /\b(Bad hostname)\b/ ||
     42             $@ =~ /\b(nodename nor servname provided, or not known)\b/ ||
     43             $@ =~ /\b(certificate verify failed)\b/ ||
     44             $@ =~ /\b(Crypt-SSLeay can't verify hostnames)\b/
     45         ) {
     46             $status .= " ($1)";
     47         } elsif ($@) {
     48             $status .= " ($@)";
     49         }
     50         die "$status\n\n$@";
     51     }
     52
     53     # perl 5.005's IO::Socket does not have the blocking method.
     54     eval { $sock->blocking(0); };
     55
     56     $sock;
     57 }
     58
     59 sub socket_type
     60 {
     61     return "http";
     62 }
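
For context on the error above: on most Linux systems, GET at a shell prompt is Perl's lwp-request tool (which is what loads LWP::Protocol::http), so it parsed _cat as a hostname on port 80, which is where "Can't connect to _cat:80" comes from. The Perl module itself is fine; the commands were meant for the Kibana Dev Tools Console, and from a shell they have to be sent to Elasticsearch with something like curl. A minimal sketch, assuming Elasticsearch is listening on http://localhost:9200 without TLS, as in the rest of this thread:

# "GET" on the shell resolves to lwp-request, not an Elasticsearch client
type GET

# Kibana Console "GET _cat/health?v" expressed as curl
curl -X GET 'http://localhost:9200/_cat/health?v'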

curl -X GET 'http://localhost:9200/_cat/indices?v' gives me

health status index                                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .ds-filebeat-8.4.1-2022.10.30-000004 bj_KwtDfS_K-_PlDJPTh7g   1   1          0            0       225b           225b
yellow open   read_me                              UoEdxnguQlWRbN9pmU_IcA   1   1          1            0      5.2kb          5.2kb
yellow open   .ds-filebeat-8.4.1-2022.12.29-000007 i58yOxlIRd2TpKYZyT6gLw   1   1          0            0       225b           225b
yellow open   .ds-filebeat-8.4.1-2023.04.12-000008 t_C_n1adQciJSOBfVILo5w   1   1          0            0       225b           225b
yellow open   .ds-filebeat-8.7.0-2023.04.18-000001 EQkC3WdzTr6wCWhzTplCLw   1   1          0            0       225b           225b
yellow open   .ds-filebeat-8.4.1-2022.11.29-000006 8B4qYkszSmiptQ_X-bwx7Q   1   1          0            0       225b           225b

Let me know if that helps.

Sorry, those were shorthand and intended to be run in the Kibana Dev Tools.

curl -X GET 'http://localhost:9200/_cat/health?v'
curl -X GET 'http://localhost:9200/_cluster/health?v'
curl -X GET 'http://localhost:9200/_cat/nodes/?v=true&h=name,du,dt,dup,hp,rp,r'

curl -X GET 'http://localhost:9200/_cat/health?v'

epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1681920178 16:02:58  elasticsearch yellow          1         1     27  27    0    0        6             0                  -                 81.8%
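
A yellow status with unassigned shards is expected here: this is a single-node cluster, the indices listed earlier all have one replica configured, and a replica can never be allocated on the same node as its primary. To confirm which shards are unassigned and why, a minimal sketch against the same http://localhost:9200 endpoint:

# List shards with their state and, for unassigned ones, the reason
curl -X GET 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'

# Ask Elasticsearch to explain the first unassigned shard it finds
curl -X GET 'http://localhost:9200/_cluster/allocation/explain?pretty'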

curl -X GET 'http://localhost:9200/_cluster/health?v'
Can you please check if the above command is correct?

Running this in the Kibana Dev Tools:
GET _cluster/health

{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 27,
  "active_shards": 27,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 6,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 81.81818181818183
}

curl -X GET 'http://localhost:9200/_cat/nodes/?v=true&h=name,du,dt,dup,hp,rp,r'

name                            du     dt   dup hp rp r
loganalysisclass1.novalocal 16.4gb 19.9gb 82.29 10 83 cdfhilmrstw
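
The du, dt and dup columns above are disk.used, disk.total and disk.used_percent for the whole filesystem the node's data path sits on, not just Elasticsearch data. To see how much of that space is actually taken by shards, a small sketch using the allocation cat API against the same endpoint:

# disk.indices = space used by shards on this node; disk.used = total used on the filesystem
curl -X GET 'http://localhost:9200/_cat/allocation?v&h=node,shards,disk.indices,disk.used,disk.avail,disk.total,disk.percent'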

Your disk is over 80% full, and there is nowhere to put new shards...
You are in read-only mode.
You need to clean up some indices to free space, or increase the disk space.
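
For reference, "read-only mode" here refers to the disk-based allocation protections: once disk usage crosses the flood-stage watermark (95% by default), Elasticsearch puts the index.blocks.read_only_allow_delete block on indices and writes start failing. A minimal sketch for checking and clearing that block, assuming the same http://localhost:9200 endpoint; on 7.4 and later the block is also removed automatically once usage drops back below the high watermark, so free the disk space first:

# Show any index-level blocks currently set
curl -X GET 'http://localhost:9200/_all/_settings?filter_path=*.settings.index.blocks&pretty'

# After freeing disk space, remove the block from all indices
curl -X PUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'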

What version are you on?

This will list the indices in descending order of size.

curl -X GET "http://localhost:9200/_cat/indices/?v&s=pri.store.size:desc"

I am on Elasticsearch version 8.7. OS is Rocky 8.7.

How do I change the read-only mode?

I removed a couple of indices. That has increased the disk quota from 81% to 86%. How can I clear that? Or where does it have to be removed from?

{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 25,
  "active_shards": 25,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 4,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 86.20689655172413
}
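
Worth noting: the 81% and 86% figures in these responses are active_shards_percent_as_number, the share of shards that are assigned, not disk usage. Deleting two indices removed two primaries and two unassigned replicas from the totals, which is why the percentage went up. Actual disk usage is still best read from the _cat/nodes columns used earlier, e.g.:

# du/dt/dup are disk.used, disk.total and disk.used_percent for the node
curl -X GET 'http://localhost:9200/_cat/nodes?v&h=name,du,dt,dup'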

So if that is your entire set of indices ...

You only have ~3KB or so of indices; something else is taking up all the space on your disk / OS.
You need to figure out what it is and clean up some other space.

This says you have a 19.9GB disk which is 82% full...

name                            du     dt   dup hp rp r
loganalysisclass1.novalocal 16.4gb 19.9gb 82.29 10 83 cdfhilmrstw
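
Since the indices themselves are tiny, the next step is to look at the OS level for what is filling the 19.9GB filesystem. A minimal sketch with standard tools on Rocky Linux (the paths are examples; adjust to whichever mount is full):

# Which filesystem is full
df -h

# Largest top-level directories on the root filesystem (-x stays on one filesystem)
sudo du -xh --max-depth=1 / | sort -h | tail -n 15

# Common culprits worth checking individually (ignore paths that don't exist)
sudo du -sh /var/log /var/lib/elasticsearch /var/lib/docker 2>/dev/null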
