Hello,
I'm trying to get some log files that are in .txt format into my Elasticsearch instance in AWS. I'm using Filebeat to send the output to my ES cluster. However, when I start Filebeat on the Windows host, I get the error below in the Filebeat logs.
I've included below how my filebeat.yml file is configured for this communication. Is there something I'm missing that is causing this error?
> 2018-09-10T14:44:02.158-0500    INFO    elasticsearch/client.go:690    Connected to Elasticsearch version 6.2.3
> 2018-09-10T14:44:02.205-0500    INFO    template/load.go:73            Template already exists and will not be overwritten.
> 2018-09-10T14:44:03.347-0500    ERROR   pipeline/output.go:92          Failed to publish events: temporary bulk send failure
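To see the per-event errors hiding behind that "temporary bulk send failure" message, I'm planning to turn on debug logging for the Elasticsearch output. As I understand it from the docs, the standard logging options for that (not something I currently have set) are:

> logging.level: debug
> logging.selectors: ["elasticsearch"]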
Below is how I have the prospector set up:
>
> #=========================== Filebeat prospectors =============================
>
> filebeat.prospectors:
>
> # Each - is a prospector. Most options can be set at the prospector level, so
> # you can use different prospectors for various configurations.
> # Below are the prospector-specific configurations.
>
> - type: log
>
>   # Change to true to enable this prospector configuration.
>   enabled: true
>
>   # Paths that should be crawled and fetched. Glob based paths.
>   paths:
>     #- /var/log/*.log
>     - C:\Log\File\Path\LogFile.txt
>   exclude_lines: ['#']
>   #document_type: iis
>
>   # Exclude lines. A list of regular expressions to match. It drops the lines that
>   # match any regular expression from the list.
>   #exclude_lines: ['^DBG']
>
>   # Include lines. A list of regular expressions to match. It exports the lines that
>   # match any regular expression from the list.
>   #include_lines: ['^ERR', '^WARN']
>
>   # Exclude files. A list of regular expressions to match. Filebeat drops the files that
>   # match any regular expression from the list. By default, no files are dropped.
>   #exclude_files: ['.gz$']
>
>   # Optional additional fields. These fields can be freely picked
>   # to add additional information to the crawled log files for filtering.
>   #fields:
>   #  level: debug
>   #  review: 1
>
>   ### Multiline options
>
>   # Multiline can be used for log messages spanning multiple lines. This is common
>   # for Java stack traces or C line continuations.
>
>   # The regexp pattern that has to be matched. The example pattern matches all lines starting with [
>   #multiline.pattern: ^\[
>
>   # Defines whether the pattern set under pattern should be negated. Default is false.
>   #multiline.negate: false
>
>   # Match can be set to "after" or "before". It is used to define whether lines should be appended to a pattern
>   # that was (not) matched before or after, or as long as a pattern is not matched, based on negate.
>   # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
>   #multiline.match: after
>
>
> #============================= Filebeat modules ===============================
>
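One thing I'm second-guessing in there: the exclude_lines entries are regular expressions, so `exclude_lines: ['#']` drops any line that contains a # anywhere, not just comment lines. If the intent is to skip only lines that start with #, I believe it would need to be anchored:

> exclude_lines: ['^#']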
Below are the template settings I have set up:
>
> #==================== Elasticsearch template setting ==========================
>
> setup.template.name: "cvp_activitylogs-%{+yyyy.MM.dd}"
> setup.template.pattern: "cvp_activitylogs"
>
> setup.template.settings:
>   index.number_of_shards: 2
>   #index.codec: best_compression
>   #_source.enabled: false
>
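One thing I'm unsure about here: the docs suggest setup.template.pattern has to match the index names that actually get created, so with daily indices I'm wondering if the template section should instead look something like this (just my guess, not what I'm currently running):

> setup.template.name: "cvp_activitylogs"
> setup.template.pattern: "cvp_activitylogs-*"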
Below is the output section of my filebeat.yml file.
>
> #-------------------------- Elasticsearch output ------------------------------
> output.elasticsearch:
>   # Array of hosts to connect to.
>   hosts: ["ElasticSearch-AWS-Link:443"]
>   index: "cvp_activitylogs-%{+yyyy.MM.dd}"
>   pipeline: cvp_logs
>   template.enabled: false
>   template.name: "cvp_activitylogs"
>
>   # Optional protocol and basic auth credentials.
>   protocol: "https"
>   #username: "elastic"
>   #password: "changeme"
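To narrow down whether the ingest pipeline or the custom index is what the bulk requests are tripping on, I was thinking of temporarily commenting both out and seeing if plain events make it through, i.e.:

> output.elasticsearch:
>   hosts: ["ElasticSearch-AWS-Link:443"]
>   protocol: "https"
>   #index: "cvp_activitylogs-%{+yyyy.MM.dd}"
>   #pipeline: cvp_logs

If events index fine with the defaults, I assume the problem is in the cvp_logs pipeline or the template/index settings rather than connectivity. Does that sound like a reasonable way to isolate this?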