Can't get Filebeat log data into Elasticsearch

I'm trying out Filebeat but can't seem to get the data into Elasticsearch. I don't see any errors on the Filebeat side, but no data shows up in Elasticsearch. My demo repo is at https://github.com/thnk2wn/elastic-logging-test

docker-compose.yml

version: '3.7'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana-oss:7.10.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

  filebeat:
    container_name: filebeat
    image: docker.elastic.co/beats/filebeat-oss:7.10.2
    user: root
    volumes:
        - ${CONFIG_PATH}/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
        - ${CONFIG_PATH}/inputs.d:/usr/share/filebeat/inputs.d:ro
        - ${APP_PATH}:/app:ro
        - /var/lib/docker:/var/lib/docker:ro
        - /var/run/docker.sock:/var/run/docker.sock

volumes:
  elasticsearch-data:
    driver: local

run-elk.sh

#!/bin/bash
set -a

root_dir="$(dirname "$(dirname "$(realpath "$0")")")"
export APP_PATH="${root_dir}/app/bin"
export CONFIG_PATH="${root_dir}/filebeat/config/local"

echo "APP_PATH = $APP_PATH, CONFIG_PATH = $CONFIG_PATH"
pushd "$(dirname "$(realpath "$0")")"

rm -f "$APP_PATH/app.log"

echo "Starting up..."
docker compose up

filebeat.yml

filebeat.config.inputs:
  enabled: true
  path: inputs.d/*.yml

output.elasticsearch:
  enabled: true
  hosts: ["http://elasticsearch:9200"]

inputs.d/app.yml

- type: log
  enabled: true
  paths:
    - /app/app.log
  fields:
    app: my-app
    env: local
    host: localhost

When I check logs for filebeat, it does seem like it's harvesting the log file (full logs https://pastebin.com/PL3zCVhB):

...
2021-01-22T16:51:48.259Z	INFO	log/input.go:157	Configured paths: [/app/app.log]
2021-01-22T16:51:48.259Z	INFO	cfgfile/reload.go:224	Loading of config files completed.
2021-01-22T16:51:58.229Z	INFO	log/harvester.go:302	Harvester started for file: /app/app.log
2021-01-22T16:51:59.231Z	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2021-01-22T16:51:59.231Z	INFO	[publisher]	pipeline/retry.go:223	  done
2021-01-22T16:51:59.231Z	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(elasticsearch(http://elasticsearch:9200))
2021-01-22T16:51:59.269Z	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.10.2
2021-01-22T16:51:59.271Z	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.10.2
2021-01-22T16:51:59.284Z	INFO	template/load.go:97	Template filebeat-7.10.2 already exists and will not be overwritten.
2021-01-22T16:51:59.284Z	INFO	[index-management]	idxmgmt/std.go:298	Loaded index template.
2021-01-22T16:51:59.286Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
...
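To confirm whether any events actually made it into Elasticsearch, the published port can be queried directly. This assumes the default filebeat-* index naming (no custom index was configured above):

```shell
# List any Filebeat indices and count the documents in them;
# a missing index or a count of 0 means nothing has been shipped yet.
curl -s 'http://localhost:9200/_cat/indices/filebeat-*?v'
curl -s 'http://localhost:9200/filebeat-*/_count?pretty'
```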

Maybe the problem is more on the Elasticsearch side? Those logs (https://pastebin.com/ngntSPwW) do contain the following warning, but I didn't think it was critical since it's configured as a single node.

{"type": "server", "timestamp": "2021-01-22T16:51:57,111Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "docker-cluster", "node.name": "4786cf48ef06", "message": "path: /.kibana/_count, params: {index=.kibana}", "cluster.uuid": "MN-0ptJQR8GVs_PSFGZ3lw", "node.id": "w7t3IxGOTgaSsN5HnWBRig" ,
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:568) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:324) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:603) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:400) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:236) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:303) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:743) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }

I'd appreciate any input on what I might be missing. Thanks!

A quick look at the Filebeat logs suggests no logs are being written by the app.

I think this is lines 41-42 in your pastebin.

2021-01-22T16:56:47.925Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":180,"time":{"ms":9}},"total":{"ticks":290,"time":{"ms":14},"value":290},"user":{"ticks":110,"time":{"ms":5}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":10},"info":{"ephemeral_id":"0fd762f8-59f4-4bd2-90cc-86bc2fe9a10c","uptime":{"ms":300062}},"memstats":{"gc_next":7573920,"memory_alloc":4172336,"memory_total":15439632},"runtime":{"goroutines":28}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":1}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0,"15":0,"5":0.03,"norm":{"1":0,"15":0,"5":0.0038}}}}}}

2021-01-22T16:57:02.961Z INFO log/harvester.go:333 File is inactive: /app/app.log. Closing because close_inactive of 5m0s reached.
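One quick way to rule out an idle file is to append a fresh line on the host and watch the harvester pick it up. A minimal sketch, assuming the APP_PATH layout from run-elk.sh (adjust the path to your checkout):

```shell
# Append a line to the log file Filebeat is watching; the default
# here mirrors the APP_PATH exported in run-elk.sh.
APP_PATH="${APP_PATH:-./app/bin}"
mkdir -p "$APP_PATH"
echo "$(date '+%H:%M:%S') main INFO test line" >> "$APP_PATH/app.log"
tail -n 1 "$APP_PATH/app.log"
```

If the harvester is healthy, the Filebeat monitoring metrics should show a non-zero event count within a few seconds of the append.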

You can also check the health of Elasticsearch by running the command below, which I think will be fine. The cluster will be yellow because there's only 1 node and Filebeat will try to write a replica, which requires 2 nodes (1 for the primary shard + 1 for the replica), but that's fine for testing.

curl http://localhost:9200
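For a more direct check than the root endpoint, the cluster health API shows the status explicitly (assuming the 9200 port mapping from the compose file):

```shell
# "yellow" is expected on a single node: replica shards can't be
# allocated anywhere, but primaries are usable for local testing.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```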

@stephenb Thanks. If I execute into the container I see the log file:

❯ docker exec -it dev_filebeat_1 bash
[root@946d2322deda filebeat]# cd /app
[root@946d2322deda app]# ls
app  app.log
[root@946d2322deda app]# cat app.log
localhost12:55:42.529 main ▶ INFO 001 The time is now Fri, 22 Jan 2021 12:55:42 EST
[root@946d2322deda app]#

It's strange that it only has one line, though. The file at that path mounted on the host has much more...

Never mind, it was a Docker volume problem on my end. It looks like I had been running docker compose up; I changed to docker-compose up -V and now it's working. Thanks!
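For anyone hitting the same thing: -V is short for --renew-anon-volumes, which recreates anonymous volumes instead of reusing data from previous containers. A sketch of a full reset under that assumption, using the compose file above:

```shell
# Remove containers plus declared volumes (e.g. elasticsearch-data),
# then start fresh, recreating anonymous volumes as well.
docker-compose down -v
docker-compose up -V
```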

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.