Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch

Hi,

I have installed Filebeat natively and configured filebeat.yml accordingly, but when I start the service it gives the error below:

Jun 16 10:16:03 picktrack-1b systemd[1]: filebeat.service: Service hold-off time over, scheduling restart.
Jun 16 10:16:03 picktrack-1b systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 5.
Jun 16 10:16:03 picktrack-1b systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Jun 16 10:16:03 picktrack-1b systemd[1]: filebeat.service: Start request repeated too quickly.
Jun 16 10:16:03 picktrack-1b systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jun 16 10:16:03 picktrack-1b systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..

And this is my filebeat.yml file. Please help me!

@stephenb @Marius_Iversen


# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

# filestream is an experimental input. It is going to replace log input in the future.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "myhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
    hosts: ["myhost:9200"]

  # Protocol - either `http` (default) or `https`.
  # protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["myhost:5043"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

And when I tested the output using sudo /usr/share/filebeat/bin/filebeat test output --path.config /etc/filebeat, I got the error below:

Error initializing beat: error unpacking config data: more than one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')

Hello @Akanksha_Pandey.

It's because you have both output.elasticsearch and output.logstash; you need to comment one of them out in filebeat.yml.

First, it is not really best practice / community protocol to directly '@' people when asking for answers. This is a community forum with many questions to answer from many people with many needs.

If you need timely help, perhaps you should consider purchasing some training, consulting, or a support contract, or even taking some of the numerous free trainings and webinars.

You have both the Elasticsearch output and the Logstash output enabled; that is not allowed. Only one output can be enabled at a time.

Also, none of the log inputs are enabled.

And as the documentation states, you can test your config with:
filebeat test config
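
As a rough sketch of the fix (placeholder hosts, assuming you want to ship directly to Elasticsearch for now), filebeat.yml should keep only one output active and have at least one input enabled:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

output.elasticsearch:
  hosts: ["myhost:9200"]

# output.logstash:
#   hosts: ["myhost:5043"]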


@Marius_Iversen thanks! It worked. Can you help me with how to see these logs in the Kibana UI? I'm not able to see them. I mean, how will it create an index in Kibana? I haven't specified any index name.

filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/lib/systemd/system/filebeat.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-06-16 12:50:25 PDT; 24ms ago
     Docs: https://www.elastic.co/beats/filebeat
 Main PID: 19032 (filebeat)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/filebeat.service
           └─19032 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.log

Jun 16 12:50:25 picktrack-1b systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..

My apologies. Will keep that in mind. Can you please tell me how to see the logs in the Kibana UI? I'm not able to see them. I mean, how will it create an index in Kibana? I haven't specified any index name.


Thanks

Did you follow the quick start guide in the Filebeat docs?

Did you run filebeat setup?

If so, the index pattern, the mappings, everything will be created for you.
They will be in the filebeat-* index pattern.
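
For reference, a minimal sketch of that step (run it once while the Elasticsearch output is enabled):

sudo filebeat setup -e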

When I run filebeat setup -e, I get the following error:

2021-06-17T01:04:54.187-0700	INFO	instance/beat.go:665	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-06-17T01:04:54.188-0700	DEBUG	[beat]	instance/beat.go:723	Beat metadata path: /var/lib/filebeat/meta.json
2021-06-17T01:04:54.188-0700	INFO	instance/beat.go:673	Beat ID: 39605256-578b-442b-b5f1-18a3498f9dac
2021-06-17T01:04:54.192-0700	DEBUG	[conditions]	conditions/conditions.go:98	New condition contains: map[]
2021-06-17T01:04:54.193-0700	DEBUG	[conditions]	conditions/conditions.go:98	New condition !contains: map[]
2021-06-17T01:04:54.193-0700	DEBUG	[docker]	docker/client.go:48	Docker client will negotiate the API version on the first request.
2021-06-17T01:04:54.193-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:128	add_cloud_metadata: starting to fetch metadata, timeout=3s
2021-06-17T01:04:54.226-0700	DEBUG	[add_docker_metadata]	add_docker_metadata/add_docker_metadata.go:90	add_docker_metadata: docker environment detected
2021-06-17T01:04:54.226-0700	DEBUG	[add_docker_metadata]	docker/watcher.go:211	Start docker containers scanner
2021-06-17T01:04:54.226-0700	DEBUG	[add_docker_metadata]	docker/watcher.go:374	List containers
2021-06-17T01:04:54.228-0700	DEBUG	[add_docker_metadata]	docker/watcher.go:264	Fetching events since 2021-06-17 01:04:54.227904356 -0700 PDT m=+0.196088347
2021-06-17T01:04:54.228-0700	DEBUG	[kubernetes]	add_kubernetes_metadata/kubernetes.go:138	Could not create kubernetes client using in_cluster config: unable to build kube config due to error: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable	{"libbeat.processor": "add_kubernetes_metadata"}
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for aws after 1.381970874s. result=[provider:aws, error=failed requesting aws metadata: Get "http://169.254.169.254/2014-02-25/dynamic/instance-identity/document": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for openstack after 1.382505959s. result=[provider:openstack, error=failed requesting openstack metadata: Get "http://169.254.169.254/2009-04-04/meta-data/instance-id": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for openstack after 1.382616906s. result=[provider:openstack, error=failed requesting openstack metadata: Get "https://169.254.169.254/2009-04-04/meta-data/placement/availability-zone": dial tcp 169.254.169.254:443: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for azure after 1.382720013s. result=[provider:azure, error=failed requesting azure metadata: Get "http://169.254.169.254/metadata/instance/compute?api-version=2017-04-02": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for digitalocean after 1.382807215s. result=[provider:digitalocean, error=failed requesting digitalocean metadata: Get "http://169.254.169.254/metadata/v1.json": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:165	add_cloud_metadata: received disposition for gcp after 1.382884721s. result=[provider:gcp, error=failed requesting gcp metadata: Get "http://169.254.169.254/computeMetadata/v1/?recursive=true&alt=json": dial tcp 169.254.169.254:80: connect: no route to host, metadata={}]
2021-06-17T01:04:55.576-0700	DEBUG	[add_cloud_metadata]	add_cloud_metadata/providers.go:131	add_cloud_metadata: fetchMetadata ran for 1.383037397s
2021-06-17T01:04:55.577-0700	INFO	[add_cloud_metadata]	add_cloud_metadata/add_cloud_metadata.go:101	add_cloud_metadata: hosting provider type not detected.
2021-06-17T01:04:55.577-0700	DEBUG	[processors]	processors/processor.go:120	Generated new processors: add_host_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]], condition=!contains: map[], add_cloud_metadata={}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_kubernetes_metadata
2021-06-17T01:04:55.577-0700	INFO	[beat]	instance/beat.go:1014	Beat info	{"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "39605256-578b-442b-b5f1-18a3498f9dac"}}}
2021-06-17T01:04:55.577-0700	INFO	[beat]	instance/beat.go:1023	Build info	{"system_info": {"build": {"commit": "686ba416a74193f2e69dcfa2eb142f4364a79307", "libbeat": "7.13.2", "time": "2021-06-10T21:04:13.000Z", "version": "7.13.2"}}}
2021-06-17T01:04:55.577-0700	INFO	[beat]	instance/beat.go:1026	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"arm64","max_procs":6,"version":"go1.15.13"}}}
2021-06-17T01:04:55.580-0700	INFO	[beat]	instance/beat.go:1030	Host info	{"system_info": {"host": {"architecture":"aarch64","boot_time":"2021-06-15T11:27:07-07:00","containerized":false,"name":"picktrack-1b","ip":["127.0.0.1/8","::1/128","192.0.2.3/24","fe80::4ab0:2dff:fe3a:f3ea/64","10.1.10.47/24","2603:3024:1810:d00:167:97ba:bea5:dbe4/64","2603:3024:1810:d00::6a7f/128","2603:3024:1810:d00:accb:af47:d455:f16f/64","2603:3024:1810:d00:c7a5:8dda:279e:bc80/64","fe80::9456:54f7:a987:d92/64","172.17.0.1/16"],"kernel_version":"4.9.140+","mac":["3a:df:65:e3:d8:c6","48:b0:2d:3a:f3:ea","1a:4c:a9:26:f2:05","1a:4c:a9:26:f2:05","1a:4c:a9:26:f2:07","08:36:c9:7c:93:a3","02:42:1d:88:cd:54"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.04.4 LTS (Bionic Beaver)","major":18,"minor":4,"patch":4,"codename":"bionic"},"timezone":"PDT","timezone_offset_sec":-25200,"id":"a3d9197b765643568af09eb2bd3e5ce7"}}}
2021-06-17T01:04:55.582-0700	INFO	[beat]	instance/beat.go:1059	Process info	{"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/etc/filebeat", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 12945, "ppid": 12944, "seccomp": {"mode":"disabled"}, "start_time": "2021-06-17T01:04:53.120-0700"}}}
2021-06-17T01:04:55.582-0700	INFO	instance/beat.go:309	Setup Beat: filebeat; Version: 7.13.2
2021-06-17T01:04:55.582-0700	DEBUG	[beat]	instance/beat.go:335	Initializing output plugins
2021-06-17T01:04:55.583-0700	DEBUG	[publisher]	pipeline/consumer.go:148	start pipeline event consumer
2021-06-17T01:04:55.583-0700	INFO	[publisher]	pipeline/module.go:113	Beat name: picktrack-1b
2021-06-17T01:04:55.585-0700	WARN	beater/filebeat.go:178	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-06-17T01:04:55.585-0700	ERROR	instance/beat.go:989	Exiting: Index management requested but the Elasticsearch output is not configured/enabled
Exiting: Index management requested but the Elasticsearch output is not configured/enabled

When you run setup, the Filebeat output needs to point to Elasticsearch, not Logstash. Once setup is complete, you can point it back to Logstash.

Here are the exact steps / process I would recommend if you want to run this architecture.

Filebeat -> Logstash -> Elasticsearch

Follow the same steps, just use Filebeat instead of Metricbeat, and use the Filebeat quick start guide instead of the Metricbeat quick start guide.
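
In outline (a sketch with placeholder steps; adjust hosts to your environment), the sequence looks like this:

# 1. In filebeat.yml, temporarily enable output.elasticsearch and comment out output.logstash
# 2. Load the index template, ILM policy and dashboards
sudo filebeat setup -e
# 3. Switch filebeat.yml back: comment out output.elasticsearch, enable output.logstash
# 4. Start shipping
sudo systemctl start filebeat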

I enabled the logstash module by running the command filebeat modules enable logstash, after which I created the folder structure /etc/logstash/conf.d and created a logstash.yml file there. I disabled the Kibana setup and the output.logstash configuration, and enabled output.elasticsearch. Now, when I run sudo filebeat setup -e, I'm getting the errors mentioned below.

Do I need to enable the elasticsearch module (elasticsearch.yml) as well?

I'm just confused; please help me figure out what I'm missing.

2021-06-17T07:34:16.296-0700	INFO	instance/beat.go:309	Setup Beat: filebeat; Version: 7.13.2
2021-06-17T07:34:16.296-0700	DEBUG	[beat]	instance/beat.go:335	Initializing output plugins
2021-06-17T07:34:16.296-0700	INFO	[index-management]	idxmgmt/std.go:184	Set output.elasticsearch.index to 'filebeat-7.13.2' as ILM is enabled.
2021-06-17T07:34:16.297-0700	INFO	eslegclient/connection.go:99	elasticsearch url: http://3.143.72.87:9200
2021-06-17T07:34:16.297-0700	DEBUG	[publisher]	pipeline/consumer.go:148	start pipeline event consumer
2021-06-17T07:34:16.297-0700	INFO	[publisher]	pipeline/module.go:113	Beat name: picktrack-1b
2021-06-17T07:34:16.300-0700	INFO	eslegclient/connection.go:99	elasticsearch url: http://3.143.72.87:9200
2021-06-17T07:34:16.301-0700	DEBUG	[esclientleg]	eslegclient/connection.go:290	ES Ping(url=http://3.143.72.87:9200)
2021-06-17T07:35:46.302-0700	DEBUG	[esclientleg]	eslegclient/connection.go:294	Ping request failed with: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2021-06-17T07:35:46.302-0700	ERROR	[esclientleg]	eslegclient/connection.go:261	error connecting to Elasticsearch at http://3.143.72.87:9200: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2021-06-17T07:35:46.302-0700	ERROR	instance/beat.go:989	Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://3.143.72.87:9200: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)]
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://3.143.72.87:9200: Get "http://3.143.72.87:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)]

And below is my logstash.yml file:

input {
    beats {
        port => 5044
    }
}

output {
        if [beat][hostname] == "myhost-172-31-30-178" {
                elasticsearch {
                        hosts => "localhost:9200"
                        manage_template => false
                        index => "polar-%{+YYYY.MM.dd}"
                        document_type => "%{[@metadata][type]}"
                }
        }
        else {
                elasticsearch {
                        hosts => "localhost:9200"
                        manage_template => false
                        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
                        document_type => "%{[@metadata][type]}"
                }
        }

        stdout {
                codec => rubydebug
        }
}

Yup I think you're confused. :slight_smile:

You do not enable the Filebeat logstash module if you want to send logs from Filebeat through Logstash to Elasticsearch. Please look at the post I referenced; it gives you the exact steps in detail.

The Filebeat logstash module is for collecting Logstash's own logs, which is not what you want to do. Yes, perhaps a little confusing, but that is not what I think you want to do.

Please look at the post I referenced; it gives you the step-by-step directions for Filebeat to Logstash to Elasticsearch.

Come back after you've repeated the steps from that post, just using Filebeat instead. You will note that nowhere in those steps did I say to enable the logstash module.
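
If you already enabled it, a quick way to undo that step (a sketch; the module only collects Logstash's own log files):

sudo filebeat modules disable logstash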

I followed all the steps mentioned in the quick start guide for the Filebeat docs, except step 2. How do I find the cloud.id and cloud.auth of my Elasticsearch service?

Also, should I disable the logstash module (and remove the logstash.conf file) and the elasticsearch module?

Hi, I have lost track of what you are trying to do. You do not really explain what you are trying to accomplish, just snippets.

A good post would be something like:

I am trying to collect logs with Filebeat and send them through Logstash to Elasticsearch. I am doing this because I want Logstash to act as an aggregator and forwarder. Here are the problems I am having, and here are my configs.

So I don't know what you are trying to do, and therefore I cannot tell you what to remove or not.

Are you trying to do:

A) Filebeat -> Elasticsearch

or

B) Filebeat -> Logstash -> Elasticsearch

And if B), which is fine, why? What are you trying to accomplish?

Sorry. My aim is to monitor the logs in Kibana (the Kibana server is on another machine). The log file is present on the client machine, where I have set up Filebeat.

Can you please tell me which option (A or B) is applicable here?

I would use architecture A; you appear to have no need for Logstash as far as I can tell.

Comment out

# output.logstash:
  # The Logstash hosts
  # hosts: ["myhost:5043"]
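
and keep the Elasticsearch output as the only active output (placeholder host, as in your config):

output.elasticsearch:
  hosts: ["myhost:9200"]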

If you followed the quick start, there were no steps that involved Logstash, so I'm not sure how you thought you needed Logstash.

Thanks for your patience and support. You have really helped me a lot!

I followed all the steps mentioned in the quick start document. After that, I'm getting this error:

2021-06-17T11:19:36.574-0700	INFO	template/load.go:123	template with name 'filebeat-7.13.2' loaded.
2021-06-17T11:19:36.574-0700	INFO	[index-management]	idxmgmt/std.go:297	Loaded index template.
2021-06-17T11:19:36.574-0700	DEBUG	[esclientleg]	eslegclient/connection.go:364	GET http://3.143.72.87:9200/_alias/filebeat-7.13.2  <nil>
2021-06-17T11:19:36.659-0700	INFO	[index-management.ilm]	ilm/std.go:121	Index Alias filebeat-7.13.2 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2021-06-17T11:19:36.659-0700	INFO	kibana/client.go:119	Kibana url: http://3.143.72.87:5601
2021-06-17T11:19:40.243-0700	INFO	kibana/client.go:119	Kibana url: http://3.143.72.87:5601
2021-06-17T11:19:40.986-0700	DEBUG	[dashboards]	dashboards/kibana_loader.go:156	Initialize the Kibana 7.9.2 loader
2021-06-17T11:19:40.986-0700	DEBUG	[dashboards]	dashboards/kibana_loader.go:156	Kibana URL http://3.143.72.87:5601
2021-06-17T11:19:41.919-0700	ERROR	instance/beat.go:989	Exiting: 1 error: error loading index pattern: returned 413 to import file: <nil>. Response: {"statusCode":413,"error":"Request Entity Too Large","message":"Payload content length greater than maximum allowed: 1048576"}
Exiting: 1 error: error loading index pattern: returned 413 to import file: <nil>. Response: {"statusCode":413,"error":"Request Entity Too Large","message":"Payload content length greater than maximum allowed: 1048576"}

Also, in my Kibana UI, under Index Management, an index with the name filebeat-7.13.2-2021.06.18-000001 is created. When I manually create an index pattern for this as filebeat-*, and look at the index under Discover, it shows me the logs of a previously loaded file and not the actual logs. And my Kibana dashboard hangs/freezes for some time as well.

Please help me. Thanks

I searched for the solution to this error, and they say to configure the kibana.yml file by setting the parameter server.maxPayloadBytes to some number greater than 1048576.
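
For example (a sketch only; the exact value is arbitrary as long as it is larger than the 1048576-byte default):

# kibana.yml
server.maxPayloadBytes: 4194304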

I also noticed that, I think, your Kibana and Elasticsearch are 7.9.2 and you are trying to use Filebeat 7.13.2.

I'm not completely sure whether that will work well or not; it would most likely be better to match the versions.
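
A quick way to check what you actually have installed (the host below is the one from your config; adjust as needed):

filebeat version
curl -s http://3.143.72.87:9200 | grep number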

Yes, you are right. The versions are different. How do I match the versions then? Should I uninstall Filebeat 7.13.2 and install Filebeat 7.9.2?

I would ...

I'm not saying that it absolutely could not work, but it seems like you're struggling, and simply matching the versions should help.

That also means you really should be looking at that version of the documentation. Software evolves and documents change.
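
As a rough sketch on Ubuntu (assuming the Elastic APT repository is configured; the exact package version string may differ):

sudo systemctl stop filebeat
sudo apt-get remove filebeat
sudo apt-get install filebeat=7.9.2

With other install methods, download the matching 7.9.2 package from the Elastic downloads page instead.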

oh okay. Let me try that. I'll let you know about the results