Trouble sending logs from Filebeat to Elasticsearch

Hello, everyone!
I have Elasticsearch and Kibana started in Docker, and I also have Filebeat running on the server and sending logs to Elasticsearch. The problem: there are no logs in Kibana, although all the dashboards were created automatically.

It seems to me that the trouble is in Filebeat sending logs to Elasticsearch. Could you please tell me what's wrong?

So, here is my docker-compose file with Elasticsearch & Kibana:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      #- discovery.type=single-node
      #- discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4096m -Xmx4096m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic

  kib01:
    image: docker.elastic.co/kibana/kibana:7.15.2
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200"]'
    networks:
      - elastic

volumes:
  data01:
    driver: local

networks:
  elastic:
    driver: bridge

And here is my filebeat.yml:

filebeat.modules:
- module: auditd
  log:
    enabled: true

setup.template.settings:
  index.number_of_shards: 1

setup.dashboards.enabled: true

setup.kibana:
  host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

logging.level: debug

Welcome to our community! :smiley:

What do your Filebeat logs show?

Hi Mark, thanks for your reply!

Sorry for the late answer, it was night here :sweat_smile:

I don't really know which logs you are waiting for, so I will put here everything that Filebeat outputs :smiley:
In /var/log/filebeat/filebeat and /var/log/filebeat/filebeat.1 there are the same two lines:

2021-11-19T12:44:50.158Z        INFO    instance/beat.go:665    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-11-19T12:44:50.160Z        INFO    instance/beat.go:673    Beat ID: 4559f684-2e7f-4ae2-b28c-0c2884bb4218

systemctl status filebeat outputs:

Nov 22 09:10:02 mon-log filebeat[321304]: 2021-11-22T09:10:02.401Z        DEBUG        [input]        input/input.go:139        Run input
Nov 22 09:10:02 mon-log filebeat[321304]: 2021-11-22T09:10:02.401Z        DEBUG        [input]        log/input.go:215        Start next scan        {"input_id": "265eeb58-0274-4cd7-a9c3-7d46669d2adb"}
Nov 22 09:10:02 mon-log filebeat[321304]: 2021-11-22T09:10:02.401Z        DEBUG        [input]        log/input.go:279        input states cleaned up. Before: 0, After: 0, Pending: 0        {"input_id": "265eeb58-0274-4cd7-a9c3-7d46669d2adb"}
Nov 22 09:10:12 mon-log filebeat[321304]: 2021-11-22T09:10:12.402Z        DEBUG        [input]        input/input.go:139        Run input
Nov 22 09:10:12 mon-log filebeat[321304]: 2021-11-22T09:10:12.402Z        DEBUG        [input]        log/input.go:215        Start next scan        {"input_id": "265eeb58-0274-4cd7-a9c3-7d46669d2adb"}
Nov 22 09:10:12 mon-log filebeat[321304]: 2021-11-22T09:10:12.402Z        DEBUG        [input]        log/input.go:279        input states cleaned up. Before: 0, After: 0, Pending: 0        {"input_id": "265eeb58-0274-4cd7-a9c3-7d46669d2adb"}
Nov 22 09:10:22 mon-log filebeat[321304]: 2021-11-22T09:10:22.403Z        DEBUG        [input]        input/input.go:139        Run input
Nov 22 09:10:22 mon-log filebeat[321304]: 2021-11-22T09:10:22.403Z        DEBUG        [input]        log/input.go:215        Start next scan        {"input_id": "265eeb58-0274-4cd7-a9c3-7d46669d2adb"}
Nov 22 09:10:22 mon-log filebeat[321304]: 2021-11-22T09:10:22.403Z        DEBUG        [input]        log/input.go:279        input states cleaned up. Before: 0, After: 0, Pending: 0        {"input_id": "265eeb58-0274-4cd7-a9c3-7d46669d2adb"}
Nov 22 09:10:23 mon-log filebeat[321304]: 2021-11-22T09:10:23.847Z        INFO        [monitoring]        log/log.go:184        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":8028099}}},"cpu":{"system":{"ticks":11120,"time":{"ms":5}},"total":{"ticks":30100,"time":{"ms":10},"value":30100},"user":{"ticks":18980,"time":{"ms":5}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":11},"info":{"ephemeral_id":"ef987479-1c1e-4215-aa17-c47256968419","uptime":{"ms":77223078},"version":"7.15.2"},"memstats":{"gc_next":19876992,"memory_alloc":12002136,"memory_total":3150127944,"rss":112156672},"runtime":{"goroutines":29}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.02,"15":0.08,"5":0.08,"norm":{"1":0.01,"15":0.04,"5":0.04}}}}}}

When I ran filebeat setup -e there were a lot of messages about dashboards being set up in Kibana, but I couldn't see anything about logs.
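Also, looking back at those systemctl lines, "input states cleaned up. Before: 0, After: 0" together with the harvester showing open_files: 0 suggests the auditd input simply found no files to harvest. As far as I understand, the module reads /var/log/audit/audit.log by default, so one variation worth trying is pointing it at the audit log explicitly (the path below is the usual default and an assumption for this server):

```yaml
filebeat.modules:
- module: auditd
  log:
    enabled: true
    # Explicit path to the audit log; adjust if auditd writes elsewhere.
    # (Assumes the default auditd location -- verify the file exists and
    # that the user running Filebeat can read it.)
    var.paths: ["/var/log/audit/audit.log"]
```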

I have found the solution to my problem myself.
Changing output.elasticsearch is what helped me:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "filebeat-%{+yyyy.MM.dd}"
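As I understand it, on Filebeat 7.x a custom output.elasticsearch.index is only honored when ILM is disabled, and a matching template name and pattern are then expected as well; the fuller form of the change would look something like this (a sketch based on my reading of the docs, not a verified recipe):

```yaml
# Custom index names are ignored while ILM is enabled, so turn it off first.
setup.ilm.enabled: false

# Template name/pattern must cover the custom index name.
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "filebeat-%{+yyyy.MM.dd}"
```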

But I also have one more question. @warkolm, could you please tell me how I can create Elasticsearch indices automatically without using the Elasticsearch API?

I have set up:

setup.template:
  name: "myapp"
  pattern: "myapp-*"
  settings:
    index.number_of_shards: 1
    index.number_of_replicas: 1
    index.codec: best_compression
    _source.enabled: false

But that didn't help me.
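My understanding so far: setup.template only installs an index template in Elasticsearch; the index itself is created when the first document matching the pattern is written. So presumably the output also has to route events to a name the pattern covers, something like this (the daily date suffix is just an assumption):

```yaml
setup.ilm.enabled: false

setup.template.name: "myapp"
setup.template.pattern: "myapp-*"
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 1

# Without this, events keep going to the default filebeat-* index,
# so no myapp-* index is ever created.
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "myapp-%{+yyyy.MM.dd}"
```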

By default, Elasticsearch will create any index that is requested by an API call.

So how can I trigger the Elasticsearch API from filebeat.yml?

It does that automatically.

It might be better if you start a new topic with the problem you are trying to solve :slight_smile:

Okay, thanks a lot, Mark!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.