Filebeat and Elasticsearch combination not showing realtime logs in Kibana

I am using Filebeat, Elasticsearch, and Kibana to collect all the logs in our Docker Swarm. We used to ship logs with logspout into ELK, but it consumed too many resources, so we switched to Filebeat. The problem now is that Kibana does not show the thousands of logs my containers produce, unlike when I used logspout; only a few make it in. I monitored a fixed time range, 9:37:01 a.m. to 9:37:30 a.m.: at first it showed only about 15 hits, and 20 minutes later it had grown to 140. Am I missing something in my configuration?

filebeat.yml

filebeat.inputs:
- type: docker
  combine_partial: true
  close_inactive: 1m
  close_timeout: 5m
  # scan_frequency: 1s  # take note that this can hurt CPU usage via dockerd
  json.message_key: log
  json.ignore_decoding_error: true
  multiline.pattern: '^\s'
  multiline.match: after
  processors:
  - add_docker_metadata: ~
  containers:
    ids:
      - "*"
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["elasticsearch:9200"]

elasticsearch.yml on cluster node 1

cluster.name: "docker-cluster"
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1

# Write (bulk/indexing) thread pool: pool size and how many requests may queue before being rejected.
thread_pool.write.size: 5
thread_pool.write.queue_size: 1000

elasticsearch.yml on cluster node 2

cluster.name: "docker-cluster"
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1

# Write (bulk/indexing) thread pool: pool size and how many requests may queue before being rejected.
thread_pool.write.size: 3
thread_pool.write.queue_size: 500

TAKE NOTE: I have 50 nodes in my swarm, so I have 50 Filebeat containers sending to my Elasticsearch master node.

My Elasticsearch node 1 has 4 cores and 28 GB of memory.
My Elasticsearch node 2 has 2 cores and 16 GB of memory.

This blog post about tuning Filebeat performance settings might interest you: https://www.elastic.co/blog/how-to-tune-elastic-beats-performance-a-practical-example-with-batch-size-worker-count-and-more
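
Roughly, the knobs that post walks through (batch size, worker count, and the internal queue) map to these filebeat.yml options; the numbers here are only illustrative starting points, not recommendations:

output.elasticsearch:
  # How many events a single bulk request to Elasticsearch may carry.
  bulk_max_size: 1600
  # Concurrent workers per host in the hosts list.
  worker: 2

# Internal memory queue that buffers events before they are shipped.
queue.mem:
  events: 8192
  flush.min_events: 1600
  flush.timeout: 5s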

So it's a Filebeat issue and not something on Elasticsearch's side?
