Filebeat is not sending data to Elasticsearch

I'm running Filebeat version 7.10.0 as a container to ship Docker container logs as well as system logs directly to Elasticsearch. Docker logs are shipped fine, but system logs are not. When I redeploy, sometimes they are shipped, but most of the time they are not. Can anyone please help me with this weird behaviour of Filebeat?

Also, is this related to a folder permission issue? What permissions should the folder have so that logs can be shipped? Any help would be appreciated!

filebeat.inputs:

- type: log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  enabled: true
  paths:
    - /var/log/auth.log
  fields_under_root: true
  fields:
    data_type: "auth"

- type: log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  enabled: true
  paths:
    - /var/log/syslog
  fields_under_root: true
  fields:
    data_type: "syslog"

- type: log
  enabled: true
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  paths:
    - /var/log/kern.log
  fields_under_root: true
  fields:
   data_type: "kern"

- type: docker
  enabled: true
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  combine_partial: true
  exclude_files: ['container-cached.log$']
  containers:
    path: "/var/lib/docker/containers"
    stream: "stdout"
    ids:
      - "*"


setup.template.enabled: true
setup.template.name: "system-1"
setup.template.pattern: "system-1-*"
setup.template.fields: "fields.yml"
setup.template.overwrite: false

  #fields:
  #  level: debug
  #  review: 1

setup.kibana:
  host: 


output.elasticsearch:
  hosts:  

  indices:
    - index: "system-1-%{[data_type]}"

    
    - index: "container-1-%{+yyyy.MM.dd}"
      when.or:
        - equals:
            container.image.name:  grafana

    - index: "container-2-%{+yyyy.MM.dd}"
      when.or:
        - equals:
            container.image.name: nginx

   
    - index: "misc-container.logs-%{+yyyy.MM.dd}"


  username: 
  password: 
  #logging.json: true
  #logging.metrics.enabled: false


processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

setup.ilm.enabled: false

@stephenb: sorry to tag you directly here, but please help me!

Please don't ping people that aren't already part of your topic.

7.0 is EOL and no longer supported, you need to upgrade please.

Sorry, it was a typo - I am using Filebeat version 7.10.0.

anyone please help!!

Hi @Akanksha_Pandey

As a reminder, please do not tag me directly. This will be the last time I respond to that. I think you need to continue to learn more about Docker...

Remember, all paths in the filebeat.yml are inside / relative to the Docker container, not the host it is running on... to read from the host you must mount the volume...

I suspect your problem is that you did not mount the host syslog directory as a volume into your Docker container, so you were just trying to read syslog from within the Docker container...

Perhaps you should read up on that... here

I basically followed the instructions here and just added the syslog volume.

My test of containers and syslog, which worked:

My filebeat.docker.yml:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/syslog   # This is the path INSIDE the Docker container, not the host!
    fields_under_root: true
    fields:
      data_type: "syslog"


processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:host.docker.internal:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'

And my docker command. Note the /var/log volume mount, which exposes the host syslog directory to the container:

docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/log/:/var/log" \    <!--- Volume Mount the Host syslog directory
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:7.16.0 filebeat -e -strict.perms=false \
  -E output.elasticsearch.hosts=["host.docker.internal:9200"]

This collected the container logs and syslogs

You can put those mounts in a docker-compose as well.
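For reference, a minimal docker-compose sketch of the same setup could look like this (service name, image tag, and environment values are illustrative, not taken from this thread):

version: "3"
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.16.0
    user: root
    command: ["filebeat", "-e", "-strict.perms=false"]
    volumes:
      # filebeat config and the Docker container logs, read-only
      - ./filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      # host syslog directory mounted into the container (illustrative, mirrors the --volume flag above)
      - /var/log:/var/log:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - ELASTICSEARCH_HOSTS=host.docker.internal:9200

The /var/log mount is what makes the host syslog files visible inside the container, exactly like the --volume flag in the docker run command above.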


@stephenb - Thanks a ton! I had mounted the syslog path in my docker-compose file, but in the filebeat.yml I gave the host path instead of the container path. I updated it and now I'm getting the logs. You saved me - I had been trying to figure out this issue for a long time but couldn't find the answer.

Also, I'm sending logs directly to Elasticsearch. Earlier we were using Logstash for log parsing and aggregation, but since it was consuming space and resources on our production systems, we removed the Logstash container. Is there any drawback to sending logs directly to Elasticsearch via Filebeat? Any delay/buffering in sending the logs? My understanding is that the Logstash container is for advanced filtering, dynamic index naming, and log aggregation. (Please correct me if I'm wrong.)

One last thing: in my Kibana console I'm always getting the unassigned shards issue. I applied these settings in the Kibana console, but nothing is working. At the moment I apply the settings I get zero unassigned shards, but after a while the problem comes back. Can you please take a look at what I'm missing?

And thanks again! You literally saved me again, and apologies for tagging you directly. :slightly_smiling_face:

PUT /_cluster/settings
{"transient":{"cluster.max_shards_per_node":2000}}


PUT /*/_settings
{
  "index": {
    "number_of_replicas": 0,
    "auto_expand_replicas": false
  }
}


PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}

If you want 0 replicas... you will need to change the templates. I suspect every new index is getting assigned 1 replica because that is how it is defined in the Filebeat template.
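For example (a sketch, assuming you let Filebeat manage and push its own template), you could set the replica count in filebeat.yml so new Filebeat indices are created with 0 replicas:

setup.template.overwrite: true
setup.template.settings:
  index.number_of_replicas: 0

That only affects indices created from the Filebeat template going forward; existing indices keep whatever replica count they were created with.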

Of course with 1 node and 0 replicas you are at risk of data loss.

Setting max shards per node to 2000 is a bad, bad idea; your node will not work well. If you have a 30 GB heap, 600 shards is the max... roughly 20 shards max per 1 GB of heap.

Filebeat vs Logstash: there are pros and cons to each, and it depends on many things.
Filebeat to Elasticsearch is perfectly fine... it should not introduce delay. I have no idea of your volume, etc.

Elasticsearch heap size is 4 GB.

And since I was suffering from unassigned shards and the error ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: validation_exception [reason]: 'Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [2000]/[2000] maximum shards open", the solution on various forums was to set the number of replicas to 0.

Can you please suggest what to do, as I face these two Elasticsearch errors very frequently? This is impacting the production servers.

Thanks in advance.

In short, I think you need to learn a little bit more about how Elasticsearch works. There is free training on our site.

It seems that you're just trying whatever settings you read in a topic or on Stack Overflow. That's probably not going to get you where you want to go.

A 4 GB heap means you should not have more than 80 shards, maybe 100 if you're pushing it, on your single 4 GB heap node. 2000 is massively wrong.

So in short, your Elasticsearch may be undersized, and I suspect you are also creating very many tiny indexes and shards, which is very inefficient. The proper sizing is a combination of how much it ingests, how much it is queried, how many shards, etc. 4 GB will work if you have proper shard sizing and reasonable CPU for the ingest and query.

Setting the max shard count to 2000 does not mean the node can actually handle it; in fact, you're making it worse.

What you need to do is read the docs about how to size your shards and nodes. You are way off with respect to the size of node you need, the number of shards, and how to manage the indexes.

In short, you need to manage your shard count. You are what's known as extremely oversharded.

The number of shards a node can handle is not infinite, and with 4 GB I'm surprised it's even working for you.
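As a quick sanity check (standard cluster APIs, not something from this thread), you can see from the Kibana console how many shards the node is actually holding and how big they are:

GET _cluster/health?filter_path=active_shards,unassigned_shards

GET _cat/shards?v&h=index,shard,prirep,state,store&s=store

If that count is anywhere near 2000 on a 4 GB heap node, that is the first thing to fix.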

Please read a little bit more and take some training; I can't solve all of this through Discuss.

sure, thanks!
