I'm trying out Filebeat but can't seem to get the data into Elasticsearch. I don't see any issues on the Filebeat side, but the data never shows up in Elasticsearch. My demo repo is at https://github.com/thnk2wn/elastic-logging-test
docker-compose.yml
version: '3.7'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana-oss:7.10.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

  filebeat:
    container_name: filebeat
    image: docker.elastic.co/beats/filebeat-oss:7.10.2
    user: root
    volumes:
      - ${CONFIG_PATH}/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ${CONFIG_PATH}/inputs.d:/usr/share/filebeat/inputs.d:ro
      - ${APP_PATH}:/app:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  elasticsearch-data:
    driver: local
run-elk.sh
#!/bin/bash
set -a
root_dir="$(dirname "$(dirname "$(realpath "$0")")")"
export APP_PATH="${root_dir}/app/bin"
export CONFIG_PATH="${root_dir}/filebeat/config/local"
echo "APP_PATH = $APP_PATH, CONFIG_PATH = $CONFIG_PATH"
pushd "$(dirname "$(realpath "$0")")"
rm -f "$APP_PATH/app.log"
echo "Starting up..."
docker compose up
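In case the path resolution is suspect, here's a quick self-contained check of how the nested dirname/realpath calls in run-elk.sh resolve the repo root (the throwaway temp directory stands in for the real repo layout):

```shell
# Recreate run-elk.sh's root_dir logic against a throwaway layout:
#   <repo-root>/scripts/run-elk.sh  ->  root_dir should be <repo-root>
demo="$(mktemp -d)"
mkdir -p "$demo/scripts"
touch "$demo/scripts/run-elk.sh"

script_path="$demo/scripts/run-elk.sh"
# Same expression as in run-elk.sh, with "$0" swapped for $script_path.
root_dir="$(dirname "$(dirname "$(realpath "$script_path")")")"

echo "$root_dir"
# Compare against the canonicalized temp dir (realpath resolves symlinks).
[ "$root_dir" = "$(realpath "$demo")" ] && echo "root resolution OK"
```

So the script only resolves the right root if it actually lives one directory below the repo root, which matches the layout the volume mounts expect.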
filebeat.yml
filebeat.config.inputs:
  enabled: true
  path: inputs.d/*.yml

output.elasticsearch:
  enabled: true
  hosts: ["http://elasticsearch:9200"]
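For what it's worth, Filebeat has `test` subcommands that validate the config file and the output connection. Running them inside the container should look roughly like this (assuming the `filebeat` container name from the compose file; the `|| echo` fallbacks are just so the commands degrade gracefully if the stack isn't up):

```shell
# Validate that the mounted filebeat.yml parses cleanly.
docker exec filebeat filebeat test config || echo "config test failed (or stack not running)"
# Check that Filebeat can actually reach and talk to the Elasticsearch output.
docker exec filebeat filebeat test output || echo "output test failed (or stack not running)"
```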
inputs.d/app.yml
- type: log
  enabled: true
  paths:
    - /app/app.log
  fields:
    app: my-app
    env: local
    host: localhost
When I check the Filebeat logs, it does seem to be harvesting the log file (full logs: https://pastebin.com/PL3zCVhB):
...
2021-01-22T16:51:48.259Z INFO log/input.go:157 Configured paths: [/app/app.log]
2021-01-22T16:51:48.259Z INFO cfgfile/reload.go:224 Loading of config files completed.
2021-01-22T16:51:58.229Z INFO log/harvester.go:302 Harvester started for file: /app/app.log
2021-01-22T16:51:59.231Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2021-01-22T16:51:59.231Z INFO [publisher] pipeline/retry.go:223 done
2021-01-22T16:51:59.231Z INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(http://elasticsearch:9200))
2021-01-22T16:51:59.269Z INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 7.10.2
2021-01-22T16:51:59.271Z INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 7.10.2
2021-01-22T16:51:59.284Z INFO template/load.go:97 Template filebeat-7.10.2 already exists and will not be overwritten.
2021-01-22T16:51:59.284Z INFO [index-management] idxmgmt/std.go:298 Loaded index template.
2021-01-22T16:51:59.286Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
...
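In case it helps anyone reproduce, this is roughly how I've been checking whether anything landed on the Elasticsearch side (ES_URL is just the host port mapping from the compose file; the `|| echo` fallbacks keep it harmless if the stack is down):

```shell
ES_URL="${ES_URL:-http://localhost:9200}"
# Is Elasticsearch reachable at all?
curl -sf "$ES_URL" >/dev/null && echo "elasticsearch reachable" || echo "elasticsearch unreachable"
# Were any filebeat-* indices created? (Empty output means no matches.)
curl -sf "$ES_URL/_cat/indices/filebeat*?v" || echo "no filebeat indices listed"
# Peek at a single document, if any made it in.
curl -sf "$ES_URL/filebeat-*/_search?size=1&pretty" || echo "search failed"
```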
Maybe the problem is more on the Elasticsearch side? Those logs (https://pastebin.com/ngntSPwW) do contain the following warning, but I didn't think it was critical since the cluster is configured as a single node.
{"type": "server", "timestamp": "2021-01-22T16:51:57,111Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "docker-cluster", "node.name": "4786cf48ef06", "message": "path: /.kibana/_count, params: {index=.kibana}", "cluster.uuid": "MN-0ptJQR8GVs_PSFGZ3lw", "node.id": "w7t3IxGOTgaSsN5HnWBRig" ,
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:568) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:324) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:603) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:400) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:236) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:303) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:743) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
I'd appreciate any input on what I might be missing. Thanks!