Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch

First, it is not really best practice / community protocol to directly '@' people asking for answers. This is a community forum where there are many questions to answer from many people with many needs.

If you need timely help, perhaps you should consider purchasing training, consulting, or a support contract, OR even taking some of the numerous free trainings and webinars.

You have both the elasticsearch output and the logstash output enabled; that is not allowed, only one output can be enabled at a time.

Also, none of the log sources are enabled.

And as the documentation states, you can test your config with:
filebeat test config
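
For reference, a minimal filebeat.yml sketch with one log source enabled and exactly one output (the paths and hosts are placeholders, adjust to your environment):

filebeat.inputs:
  - type: log
    enabled: true                     # inputs ship disabled in the reference config
    paths:
      - /var/log/myapp/*.log          # placeholder path

# Exactly ONE output may be enabled; leave the other commented out.
output.logstash:
  hosts: ["my-logstash-host:5044"]    # placeholder host

# output.elasticsearch:
#   hosts: ["http://my-es-host:9200"]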

@Marius_Iversen thanks! It worked. Can you help me with how to see these logs in the Kibana UI? I'm not able to see them. I mean, how will it create an index in Kibana? I haven't specified any index name.

filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/lib/systemd/system/filebeat.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-06-16 12:50:25 PDT; 24ms ago
     Docs: https://www.elastic.co/beats/filebeat
 Main PID: 19032 (filebeat)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/filebeat.service
           └─19032 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.log

Jun 16 12:50:25 picktrack-1b systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..

My apologies, will keep that in mind. Can you please tell me how to see the logs in the Kibana UI? I'm not able to see them. I mean, how will it create an index in Kibana? I haven't specified any index name.

Thanks

Did you follow the quick start guide in the Filebeat docs?

Did you run filebeat setup?

If so, the index pattern, the mappings, everything will be created for you.
The data will be in the filebeat-* index pattern.
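
That is, with the output pointed at Elasticsearch, something like:

filebeat setup -e

The -e flag just logs to stderr so you can watch the index template, ILM policy, and dashboards being loaded.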

When I run filebeat setup -e, I get the following error:

2021-06-17T01:04:54.187-0700	INFO	instance/beat.go:665	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-06-17T01:04:54.188-0700	DEBUG	[beat]	instance/beat.go:723	Beat metadata path: /var/lib/filebeat/meta.json
2021-06-17T01:04:54.188-0700	INFO	instance/beat.go:673	Beat ID: 39605256-578b-442b-b5f1-18a3498f9dac
2021-06-17T01:04:54.192-0700	DEBUG	[conditions]	conditions/conditions.go:98	New condition contains: map[]
2021-06-17T01:04:54.193-0700	DEBUG	[conditions]	conditions/conditions.go:98	New condition !contains: map[]
2021-06-17T01:04:54.193-0700	DEBUG	[docker]	docker/client.go:48	Docker client will negotiate the API version on the first request.
2021-06-17T01:04:54.193-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:128	add_cloud_metadata: starting to fetch metadata, timeout=3s
2021-06-17T01:04:54.226-0700	DEBUG	[add_docker_metadata]	add_docker_metadata/add_docker_metadata.go:90	add_docker_metadata: docker environment detected
2021-06-17T01:04:54.226-0700	DEBUG	[add_docker_metadata]	docker/watcher.go:211	Start docker containers scanner
2021-06-17T01:04:54.226-0700	DEBUG	[add_docker_metadata]	docker/watcher.go:374	List containers
2021-06-17T01:04:54.228-0700	DEBUG	[add_docker_metadata]	docker/watcher.go:264	Fetching events since 2021-06-17 01:04:54.227904356 -0700 PDT m=+0.196088347
2021-06-17T01:04:54.228-0700	DEBUG	[kubernetes]	add_kubernetes_metadata/kubernetes.go:138	Could not create kubernetes client using in_cluster config: unable to build kube config due to error: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable	{"libbeat.processor": "add_kubernetes_metadata"}
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for aws after 1.381970874s. result=[provider:aws, error=failed requesting aws metadata: Get "http://169.254.169.254/2014-02-25/dynamic/instance-identity/document": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for openstack after 1.382505959s. result=[provider:openstack, error=failed requesting openstack metadata: Get "http://169.254.169.254/2009-04-04/meta-data/instance-id": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for openstack after 1.382616906s. result=[provider:openstack, error=failed requesting openstack metadata: Get "https://169.254.169.254/2009-04-04/meta-data/placement/availability-zone": dial tcp 169.254.169.254:443: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for azure after 1.382720013s. result=[provider:azure, error=failed requesting azure metadata: Get "http://169.254.169.254/metadata/instance/compute?api-version=2017-04-02": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for digitalocean after 1.382807215s. result=[provider:digitalocean, error=failed requesting digitalocean metadata: Get "http://169.254.169.254/metadata/v1.json": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for gcp after 1.382884721s. result=[provider:gcp, error=failed requesting gcp metadata: Get "http://169.254.169.254/computeMetadata/v1/?recursive=true&alt=json": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:131	add_cloud_metadata: fetchMetadata ran for 1.383037397s
2021-06-17T01:04:55.577-0700	INFO	[add_cloud_metadata]	add_cloud_metadata/add_cloud_metadata.go:101	add_cloud_metadata: hosting provider type not detected.
2021-06-17T01:04:55.577-0700	DEBUG	[processors]	processors/processor.go:120	Generated new processors: add_host_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]], condition=!contains: map[], add_cloud_metadata={}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_kubernetes_metadata
2021-06-17T01:04:55.577-0700	INFO	[beat]	instance/beat.go:1014	Beat info	{"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "39605256-578b-442b-b5f1-18a3498f9dac"}}}
2021-06-17T01:04:55.577-0700	INFO	[beat]	instance/beat.go:1023	Build info	{"system_info": {"build": {"commit": "686ba416a74193f2e69dcfa2eb142f4364a79307", "libbeat": "7.13.2", "time": "2021-06-10T21:04:13.000Z", "version": "7.13.2"}}}
2021-06-17T01:04:55.577-0700	INFO	[beat]	instance/beat.go:1026	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"arm64","max_procs":6,"version":"go1.15.13"}}}
2021-06-17T01:04:55.580-0700	INFO	[beat]	instance/beat.go:1030	Host info	{"system_info": {"host": {"architecture":"aarch64","boot_time":"2021-06-15T11:27:07-07:00","containerized":false,"name":"picktrack-1b","ip":["127.0.0.1/8","::1/128","192.0.2.3/24","fe80::4ab0:2dff:fe3a:f3ea/64","10.1.10.47/24","2603:3024:1810:d00:167:97ba:bea5:dbe4/64","2603:3024:1810:d00::6a7f/128","2603:3024:1810:d00:accb:af47:d455:f16f/64","2603:3024:1810:d00:c7a5:8dda:279e:bc80/64","fe80::9456:54f7:a987:d92/64","172.17.0.1/16"],"kernel_version":"4.9.140+","mac":["3a:df:65:e3:d8:c6","48:b0:2d:3a:f3:ea","1a:4c:a9:26:f2:05","1a:4c:a9:26:f2:05","1a:4c:a9:26:f2:07","08:36:c9:7c:93:a3","02:42:1d:88:cd:54"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.04.4 LTS (Bionic Beaver)","major":18,"minor":4,"patch":4,"codename":"bionic"},"timezone":"PDT","timezone_offset_sec":-25200,"id":"a3d9197b765643568af09eb2bd3e5ce7"}}}
2021-06-17T01:04:55.582-0700	INFO	[beat]	instance/beat.go:1059	Process info	{"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/etc/filebeat", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 12945, "ppid": 12944, "seccomp": {"mode":"disabled"}, "start_time": "2021-06-17T01:04:53.120-0700"}}}
2021-06-17T01:04:55.582-0700	INFO	instance/beat.go:309	Setup Beat: filebeat; Version: 7.13.2
2021-06-17T01:04:55.582-0700	DEBUG	[beat]	instance/beat.go:335	Initializing output plugins
2021-06-17T01:04:55.583-0700	DEBUG	[publisher]	pipeline/consumer.go:148	start pipeline event consumer
2021-06-17T01:04:55.583-0700	INFO	[publisher]	pipeline/module.go:113	Beat name: picktrack-1b
2021-06-17T01:04:55.585-0700	WARN	beater/filebeat.go:178	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-06-17T01:04:55.585-0700	ERROR	instance/beat.go:989	Exiting: Index management requested but the Elasticsearch output is not configured/enabled
Exiting: Index management requested but the Elasticsearch output is not configured/enabled

When you run setup, the Filebeat output needs to point to Elasticsearch, not Logstash. Once setup is complete, you can point it back to Logstash.
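
A sketch of that sequence (the host values are placeholders):

# 1. In filebeat.yml, temporarily enable output.elasticsearch
#    and comment out output.logstash:
#      output.elasticsearch:
#        hosts: ["http://my-es-host:9200"]
# 2. Load the templates, ILM policy, and dashboards:
sudo filebeat setup -e
# 3. Swap the outputs back (re-enable output.logstash,
#    comment out output.elasticsearch).
# 4. Restart shipping:
sudo systemctl restart filebeat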

Here are the exact steps / process I would recommend if you want to run this architecture.

Filebeat -> Logstash -> Elasticsearch

Follow the same steps, just use Filebeat instead of Metricbeat, and use the Filebeat quick start guide instead of the Metricbeat quick start guide.
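
For orientation, the two ends of that pipeline look roughly like this (a sketch; hosts are placeholders):

# filebeat.yml, on the shipper side:
output.logstash:
  hosts: ["my-logstash-host:5044"]

# Logstash pipeline, e.g. /etc/logstash/conf.d/beats.conf, on the aggregator side:
input { beats { port => 5044 } }
output { elasticsearch { hosts => ["http://my-es-host:9200"] } }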

I enabled logstash by running the command filebeat modules enable logstash, after which I created the folder structure /etc/logstash/conf.d and created a logstash.yml file there. I disabled the Kibana setup and the output.logstash configuration as well, and enabled output.elasticsearch. Now, when I run sudo filebeat setup -e, I'm getting the errors mentioned below.

Do I need to enable the elasticsearch module as well?

I'm just confused; please help me figure out what I'm missing.

2021-06-17T07:34:16.296-0700	INFO	instance/beat.go:309	Setup Beat: filebeat; Version: 7.13.2
2021-06-17T07:34:16.296-0700	DEBUG	[beat]	instance/beat.go:335	Initializing output plugins
2021-06-17T07:34:16.296-0700	INFO	[index-management]	idxmgmt/std.go:184	Set output.elasticsearch.index to 'filebeat-7.13.2' as ILM is enabled.
2021-06-17T07:34:16.297-0700	INFO	eslegclient/connection.go:99	elasticsearch url: http://3.143.72.87:9200
2021-06-17T07:34:16.297-0700	DEBUG	[publisher]	pipeline/consumer.go:148	start pipeline event consumer
2021-06-17T07:34:16.297-0700	INFO	[publisher]	pipeline/module.go:113	Beat name: picktrack-1b
2021-06-17T07:34:16.300-0700	INFO	eslegclient/connection.go:99	elasticsearch url: http://3.143.72.87:9200
2021-06-17T07:34:16.301-0700	DEBUG	[esclientleg]	eslegclient/connection.go:290	ES Ping(url=http://3.143.72.87:9200)
2021-06-17T07:35:46.302-0700	DEBUG	[esclientleg]	eslegclient/connection.go:294	Ping request failed with: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2021-06-17T07:35:46.302-0700	ERROR	[esclientleg]	eslegclient/connection.go:261	error connecting to Elasticsearch at http://3.143.72.87:9200: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2021-06-17T07:35:46.302-0700	ERROR	instance/beat.go:989	Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://3.143.72.87:9200: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)]
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://3.143.72.87:9200: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)]

And below is my logstash.yml file:

input {
  beats {
    port => 5044
  }
}

output {
  # note: 7.x Beats events carry [host][name] / [agent][hostname] rather than [beat][hostname]
  if [beat][hostname] == "myhost-172-31-30-178" {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "polar-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"    # note: deprecated as of Elasticsearch 7.x
    }
  }
  else {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"    # note: deprecated as of Elasticsearch 7.x
    }
  }

  stdout {
    codec => rubydebug
  }
}

Yup I think you're confused. :slight_smile:

You do not enable the Filebeat logstash module if you want to send logs from Filebeat through Logstash to Elasticsearch. Please look at the post I mentioned; it gives you the exact steps in detail.

The Filebeat logstash module is for collecting Logstash's own logs, which is not what you want to do. Yes, perhaps a little confusing, but that is not what I think you want to do.

Please look at the post I referenced; it gives you the step-by-step directions for Filebeat to Logstash to Elasticsearch.

Come back after you've repeated the steps from that post, just using Filebeat instead. You will note that nowhere in those steps did I say to enable the logstash module.
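
For completeness, undoing the module you enabled earlier would look like this on the Filebeat host:

sudo filebeat modules disable logstash
sudo filebeat modules list     # confirm only the modules you intend are enabled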

I followed all the steps mentioned here in the quick start guide for the Filebeat docs, except step 2. How do I find the cloud.id and cloud.auth of my Elasticsearch service?

Also, should I disable the logstash module (and remove the logstash.conf file) and the elasticsearch module?

Hi, I have lost track of what you are trying to do. You do not really explain what you are trying to accomplish, just snippets.

A good post would be like:

I am trying to collect logs with Filebeat and send them through Logstash to Elasticsearch. I am doing this because I want Logstash to act as an aggregator and forwarder. Here are the problems I am having, and here are my configs.

So I don't know what you are trying to do, and so I cannot tell you what to remove or not.

Are you trying to do:

A) Filebeat -> Elasticsearch

or

B) Filebeat -> Logstash -> Elasticsearch

And if B), which is fine: why? What are you trying to accomplish?

Sorry, my aim is to monitor the logs in Kibana (the Kibana server is present on another server). The log file is present on the client machine, where I have set up Filebeat.

Can you please tell me which option (A or B) is applicable here?

I would use architecture A; you appear to have no need for Logstash as far as I can tell.

Comment out

# output.logstash:
  # The Logstash hosts
  # hosts: ["myhost:5043"]
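
And make sure output.elasticsearch is the one left enabled. A minimal sketch of the relevant filebeat.yml sections (the host and credentials are placeholders for your setup):

output.elasticsearch:
  hosts: ["http://my-es-host:9200"]
  # username: "elastic"               # only if security is enabled
  # password: "changeme"

setup.kibana:
  host: "http://my-kibana-host:5601"  # used by filebeat setup to load dashboards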

If you followed the quick start, there were no steps that involved Logstash, so I'm not sure how you thought you needed Logstash.

Thanks for your patience and support. You have really helped me a lot!

I followed all the steps mentioned in the quick start document. After that I'm getting this error:

2021-06-17T11:19:36.574-0700	INFO	template/load.go:123	template with name 'filebeat-7.13.2' loaded.
2021-06-17T11:19:36.574-0700	INFO	[index-management]	idxmgmt/std.go:297	Loaded index template.
2021-06-17T11:19:36.574-0700	DEBUG	[esclientleg]	eslegclient/connection.go:364	GET http://3.143.72.87:9200/_alias/filebeat-7.13.2  <nil>
2021-06-17T11:19:36.659-0700	INFO	[index-management.ilm]	ilm/std.go:121	Index Alias filebeat-7.13.2 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2021-06-17T11:19:36.659-0700	INFO	kibana/client.go:119	Kibana url: http://3.143.72.87:5601
2021-06-17T11:19:40.243-0700	INFO	kibana/client.go:119	Kibana url: http://3.143.72.87:5601
2021-06-17T11:19:40.986-0700	DEBUG	[dashboards]	dashboards/kibana_loader.go:156	Initialize the Kibana 7.9.2 loader
2021-06-17T11:19:40.986-0700	DEBUG	[dashboards]	dashboards/kibana_loader.go:156	Kibana URL http://3.143.72.87:5601
2021-06-17T11:19:41.919-0700	ERROR	instance/beat.go:989	Exiting: 1 error: error loading index pattern: returned 413 to import file: <nil>. Response: {"statusCode":413,"error":"Request Entity Too Large","message":"Payload content length greater than maximum allowed: 1048576"}
Exiting: 1 error: error loading index pattern: returned 413 to import file: <nil>. Response: {"statusCode":413,"error":"Request Entity Too Large","message":"Payload content length greater than maximum allowed: 1048576"}

Also, in my Kibana UI, under Index Management, an index named filebeat-7.13.2-2021.06.18-000001 is created. When I create an index pattern for this manually as filebeat-*, under Discover I see this index, but it shows me the logs of the previously loaded file and not the actual logs. And my Kibana dashboard hangs/freezes for some time as well.

Please help me. Thanks

I searched for the solution to this error, and they say to configure the kibana.yml file by setting the parameter server.maxPayloadBytes to some number > 1048576.
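
In kibana.yml that would look something like this (the exact value is arbitrary, anything above the 1 MB default):

server.maxPayloadBytes: 5242880    # 5 MB; the default is 1048576 (1 MB)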

I also noticed that your Kibana and Elasticsearch are, I think, 7.9.2, and you are trying to use Filebeat 7.13.2.

I'm not completely sure whether that will work well or not. It would probably be better to match the versions.

Yes, you are right. The version is different. How do I match the versions then? Should I uninstall Filebeat 7.13.2 and install Filebeat 7.9.2?

I would ...

I'm not saying that it absolutely could not work, but it seems like you're struggling, and simply matching the versions should help.

That also means you really should be looking at that version of the documentation. Software evolves and documents change.
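
If Filebeat came from the Elastic apt repository (the host above is Ubuntu 18.04), matching versions could look like this sketch:

sudo apt-get remove filebeat
sudo apt-get install filebeat=7.9.2    # match your 7.9.2 Elasticsearch / Kibana
sudo apt-mark hold filebeat            # optional: prevent accidental upgrades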

Oh okay. Let me try that. I'll let you know about the results.

You were right! This worked; the version mismatch was the actual cause, which made me struggle until now!

Thanks a lot, I would never have been able to do this without your help! I do not have enough words to thank you. You really helped me a lot and saved me :slight_smile:

In my docker-compose.yaml file, when I change the source path from conf to config, my Kibana alerts are disabled and it says to enable TLS:

P.S. I am changing the source path from conf to config in order to enable user login in Kibana.

Here is my docker-compose.yaml file:

services: 
  elasticsearch: 
    build:
      context: elasticsearch/
    container_name: elasticsearch
    volumes:
      - type: bind
        source: ./elasticsearch/conf/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xmx2g -Xms2g"
      ELASTIC_PASSWORD:
      ELASTIC_USERNAME:
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elastic
    restart: always
  
  logstash:
    container_name: logstash
    build: 
      context: logstash/
    # command: logstash -f /conf/logstash.conf
    volumes:
      - type: bind
        source: ./logstash/conf/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
     - "5043:5043"
     - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx1g -Xms1g"
    networks:
      - elastic
    depends_on:
     - elasticsearch
    restart: always

  kibana:
    build:
      context: kibana/
    container_name: 
    environment:
      XPACK_APM_SERVICEMAPENABLED: "true"
      XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: " "
    volumes:
      - type: bind
        source: ./kibana/conf/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elastic
    depends_on:
      - elasticsearch
    restart: always

# Top-level definitions for the named network and volume referenced above;
# without these, Compose reports them as undefined. (Assuming a default
# bridge network and a local named volume.)
networks:
  elastic:
    driver: bridge

volumes:
  elasticsearch:

Can you please help me figure out what I'm missing?