No Elasticsearch Data in Kibana and I don't know why

I am running the whole ELK stack plus Filebeat to evaluate the logs a program is producing. The program runs on my PC; Filebeat, Logstash, Elasticsearch, and Kibana run in Docker containers. Filebeat seems to be working perfectly: I can see the data arriving in the Logstash container's console output (some data removed for privacy).

Sadly, from here on out I don't really understand what is going wrong. I'm managing the Logstash, Elasticsearch, and Kibana containers with Docker Compose. Logstash should be indexing the data into Elasticsearch for Kibana to display, but I can't see anything in Kibana.
I tried creating the index with the fields from the Kibana console, but it didn't work. When I go to Index Management in Kibana, I get redirected to this: http://prntscr.com/kjtzle
I put the docker-compose.yml and my Logstash config in another gist:
https://gist.github.com/dklenke/94cdb639847276023e343c9d88330861
(I removed the filters in hopes that something would come through)
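With the filters removed, the pipeline boils down to something like this (typed from memory; the exact file is in the gist):

input {
	beats {
		port => 5044
	}
}

output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		index => "new"
	}
	stdout {
		codec => rubydebug
	}
}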

Feel free to ask for more details. I will be away from the system this is running on over the weekend, but I will still try to answer as best I can.

I'm asking my colleague, who knows Docker containers better, to help with this: @jarpy

Cheers
Rashmi

Thanks for sharing the configs; that helps a lot.

Where are you getting the images? It looks like you are building them locally. Are the builds based on our official images?

I see that you are setting the ELASTICSEARCH_URL environment variable for Kibana. Configuring Kibana through environment variables is not natively supported by Kibana itself; that support is added by our official image. If the image you are using does not contain the logic needed for environment-based configuration, then setting the variable would have no effect. Perhaps that is what is happening?
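With our official image, that variable is translated into the matching kibana.yml setting at startup, so the two forms below should be roughly equivalent (a sketch, not your exact files):

# docker-compose.yml, relying on the official image's startup logic:
  kibana:
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200

# kibana.yml, which works with any image:
elasticsearch.url: "http://elasticsearch:9200"

If adding the setting to kibana.yml directly changes the behaviour, that would point to the image lacking the environment-handling logic.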

Stepping back a bit, is data arriving in Elasticsearch? Do you get results from a simple search? Something like:

curl http://elasticsearch:9200/new/_search

Thank you for the reply.

The builds are, to my knowledge, based on the official images. In the respective Dockerfiles for Logstash, Elasticsearch, and Kibana, the images are pulled with lines such as this one:
FROM docker.elastic.co/logstash/logstash-oss:${ELK_VERSION}
ELK_VERSION is defined as 6.3.2. In case it's needed, I put all the relevant files in a gist: https://gist.github.com/dklenke/80dc87e61d1f7b1ec2663218ef565f35
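For what it's worth, the version substitution works because each Dockerfile declares the build argument before the FROM line, roughly:

ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash-oss:${ELK_VERSION}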

When I try to run your recommended search in my shell, I get:

curl: (6) Could not resolve host: elasticsearch

Edit, regarding your recommended search:
I called it incorrectly. The actual output I get when I execute it is:

{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"new","index_uuid":"_na_","index":"new"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"new","index_uuid":"_na_","index":"new"},"status":404}

The builds are, to my knowledge, based on the official images.

Oh good! I just had a hunch, but I'm glad to know that it was incorrect in this case.

When I try to run your recommended search in my shell[...]

Fair enough. The elasticsearch hostname is only resolvable through Docker's DNS, so other containers on the same network can resolve it, but the host machine can't. However, since you have mapped port 9200, you can replace elasticsearch with localhost, like this:

curl http://localhost:9200/new/_search
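That relies on Elasticsearch's ports entry in your compose file, which will be something along these lines:

  elasticsearch:
    ports:
      - "9200:9200"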

You could also test resolution and connectivity in the Kibana container with something like:

$ docker-compose exec kibana curl http://elasticsearch:9200
{
  "name" : "GuB6So-",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "EJ273DPqTh22xVpCBzDRIw",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Incidentally, that output is from my test system where I was trying to reproduce the problem that you see. Unfortunately, I couldn't reproduce it. Logstash indexes data into the new index and Kibana can see it:

$ docker-compose exec kibana curl 'http://elasticsearch:9200/new/_search?size=1&pretty'
{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 817,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "new",
        "_type" : "doc",
        "_id" : "MtymXmUBC7X9GEWV-PZU",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "host" : "b4998635f655",
          "message" : "ok",
          "@timestamp" : "2018-08-21T22:42:05.414Z"
        }
      }
    ]
  }
}

Here is my test configuration, based on yours.

I only just saw your edit, sorry. I read the original comment as an email, so the edit wasn't there.

Edit, regarding your recommended search:
I called it incorrectly. The actual output I get when I execute it is:
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index"[...]

So it looks like Logstash is not indexing, or not indexing to the correct index.

It might be interesting to check the Logstash logs thoroughly, and to have a look at any indices that may exist.
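For the logs, something like this will dump the recent container output:

$ docker-compose logs --tail=100 logstash

And to list the indices: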

$ docker-compose exec elasticsearch curl http://localhost:9200/_cat/indices
green  open .kibana PtreipniQ9u3h4yIivEZxA 1 0    2 0 9.9kb 9.9kb
yellow open new     XLWKe3VSQEqjYZH7KbKmlA 5 1 1362 0 178kb 178kb
$ docker-compose exec elasticsearch curl http://localhost:9200/_cat/indices

returns absolutely nothing, unfortunately: no error, but also no indices. Not even the .kibana one, which (to my knowledge) should always be there.
While testing, I also noticed that when I intentionally misspell elasticsearch in my logstash.conf, like this:

output {
	eeelasticsearch {
		hosts => "elasticsearch:9200"
		index => "new"
	}
}

I get no errors whatsoever. I am sure that shouldn't be the case. On the other hand, I know not all of my output is being ignored, since the stdout output to the console works flawlessly.
Whatever it is, this seems to be a problem with Logstash's output, with the Elasticsearch output plugin for Logstash, or maybe with Elasticsearch itself; it's definitely not a problem with Kibana. Sorry for creating this thread in the wrong place.
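If my logstash.conf were really being parsed, I would expect Logstash's own config test to reject that typo. Something like this should flag it (assuming the image's default paths, and pointing at a separate data path so it doesn't clash with the running instance):

$ docker-compose exec logstash logstash --config.test_and_exit --path.data /tmp/logstash-test -f /usr/share/logstash/pipeline/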

I also did what you recommended before you saw my edit:

$ docker-compose exec kibana curl http://elasticsearch:9200
{
  "name" : "oxz6m3A",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "7iWVbf0GQxy950fN9yqvSQ",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

as well as:

$ docker-compose exec kibana curl 'http://elasticsearch:9200/new/_search?size=1&pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index",
        "resource.type" : "index_or_alias",
        "resource.id" : "new",
        "index_uuid" : "_na_",
        "index" : "new"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "new",
    "index_uuid" : "_na_",
    "index" : "new"
  },
  "status" : 404
}

I think I have found the solution. Like all errors you try to solve for days on end, it was... a simple syntax error :grin:
My logstash.conf was not being used, because I hadn't defined the volumes properly in docker-compose.yml. With only a single path per entry, Docker Compose creates an anonymous volume at that path inside the container rather than bind-mounting my host files, so my configs were never actually mounted.
Where it used to say:

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - /home/user/docker-elk/logstash/config/
      - /home/user/docker-elk/logstash/pipeline/

it should have instead said:

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - /home/user/docker-elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - /home/user/docker-elk/logstash/pipeline:/usr/share/logstash/pipeline:ro
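A quick way to confirm that the mounts actually took effect is to list the pipeline directory inside the container:

$ docker-compose exec logstash ls /usr/share/logstash/pipeline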

Thank you @jarpy for helping me.

Excellent!

I was just coming here to say, "I think your Logstash configuration is not being used at all." That's quite counterintuitive, because you were getting pretty much two-thirds of the results you were looking for. The trick is that the image ships with a default configuration that looks a lot like what you had defined, and if you don't make sure to overwrite that configuration, it may still run after being merged with your own.
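For reference, the default pipeline that ships in the image looks roughly like this (quoting from memory, so double-check against the image you are running):

input {
	beats {
		port => 5044
	}
}

output {
	stdout {
		codec => rubydebug
	}
}

That would explain why events from Filebeat kept showing up on stdout even while your own elasticsearch output was never loaded.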

I'm really glad you found the answer and hope I was at least some help.

This.

This is our life.

:rofl:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.