Connection between logstash and filebeat

Architecture setup:
logstash: logstash-6.4.0-1.noarch (installed on RHEL 7.5)
filebeat: filebeat 6.4.0 (installed on Windows Server 2016, 64-bit)
elasticsearch: elasticsearch-oss-6.4.0-1.noarch (installed on RHEL 7.5)

The issue is that Filebeat does not connect to Logstash or Elasticsearch, even though telnet to port 5044 from the remote host to the server succeeds.

Please find below the configurations of Filebeat and Logstash.

filebeat.yml

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - E:\vrautu_logs\sample_logs\logs

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the -setup CLI flag or the setup command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["https://10.33.X.X:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ['E:\logstash\logstash.cer']

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
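One thing worth double-checking in the output section above: the Beats-to-Logstash connection speaks the lumberjack protocol, not HTTP, so `output.logstash.hosts` entries are normally plain `host:port` with no `https://` scheme; TLS is turned on by the `ssl.*` settings instead. A sketch of that section without the scheme, keeping the same certificate path (`ssl.verification_mode: none` is shown commented out as a debugging aid only):

```yaml
#----------------------------- Logstash output --------------------------------
output.logstash:
  # host:port only -- no URL scheme; TLS is enabled by the ssl.* options below
  hosts: ["10.33.X.X:5044"]
  ssl.certificate_authorities: ['E:\logstash\logstash.cer']
  # If the certificate's CN/SAN does not cover this host/IP, hostname
  # verification will fail; for a quick test only, you can relax it:
  #ssl.verification_mode: none
```

Note that `E:\logstash\logstash.cer` must be the certificate (or the CA that issued) the `logstash.crt` configured on the Logstash side, or the TLS handshake will fail even when the port is reachable.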
#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

Do you see any errors in the logs? If so, what are they?

Please share your Filebeat and Logstash configs; it is hard to tell why it is not working otherwise.

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/logstash/logstash.crt"
    ssl_key => "/etc/pki/logstash/logstash.key"
  }
}

filter {
  if "ERROR" in [message] and "writeError" in [message] {
    if "Policy record data" in [message] {
      grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:time_stamp}) - ERROR - (%{GREEDYDATA:error_url}) - writeError - Policy record data - (%{GREEDYDATA:record_details})" }
        remove_field => [ "message" ]
      }
    }
    else {
      grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:time_stamp}) - ERROR - (%{GREEDYDATA:error_url}) - writeError - (%{GREEDYDATA:error_message}) Caused By - (%{GREEDYDATA:caused_by}): (%{GREEDYDATA:exception_cause1})" }
        remove_field => [ "message" ]
      }
    }
  }
  else if "ERROR" in [message] and "<init>" in [message] {
    grok {
      match => { "message" => "(%{TIMESTAMP_ISO8601:time_stamp}) - ERROR - (%{GREEDYDATA:error_url}) - <init> - Severity: (%{WORD:severity_level}); Error Code: (%{GREEDYDATA:error_code}); Message: (%{GREEDYDATA:error_message}); Caused By:(%{GREEDYDATA:caused_by}): (%{GREEDYDATA:exception_cause1})" }
      remove_field => [ "message" ]
    }
  }
  else {
    drop {}
  }
}

output {
  elasticsearch {
    hosts => [ "https://x.x.x.x:9200" ]
    index => "record-%{+YYYY.MM.dd}"
    user => "admin"
    password => "xxxx@123"
    cacert => "/etc/logstash/root-ca.pem"
    ssl => true
    ssl_certificate_verification => false
  }
}
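It can also help to sanity-check the grok logic outside Logstash. The sketch below re-implements the first `writeError` pattern with Python's `re` module; the simplified `TIMESTAMP_ISO8601` regex and the sample log line are assumptions for illustration only, not taken from the original logs:

```python
import re

# Rough Python stand-ins for the grok patterns used above (simplified):
TIMESTAMP_ISO8601 = r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?"
PATTERN = re.compile(
    rf"(?P<time_stamp>{TIMESTAMP_ISO8601}) - ERROR - "
    r"(?P<error_url>.*) - writeError - Policy record data - "
    r"(?P<record_details>.*)"
)

# Hypothetical sample line, for illustration only:
line = ("2019-01-29 13:23:47,869 - ERROR - /policy/create - writeError - "
        "Policy record data - {id: 42}")
m = PATTERN.match(line)
assert m is not None
print(m.group("time_stamp"))      # -> 2019-01-29 13:23:47,869
print(m.group("record_details"))  # -> {id: 42}
```

If a real log line fails to match here, it will also fall through to the `drop {}` branch in the pipeline above and silently disappear, which is worth keeping in mind while debugging.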

Please find the Filebeat logs:

2019-01-29T13:23:47.869+0530 INFO instance/beat.go:544 Home path: [C:\Program Files\filebeat] Config path: [C:\Program Files\filebeat] Data path: [C:\ProgramData\filebeat] Logs path: [C:\ProgramData\filebeat\logs]
2019-01-29T13:23:47.878+0530 DEBUG [beat] instance/beat.go:571 Beat metadata path: C:\ProgramData\filebeat\meta.json
2019-01-29T13:23:47.878+0530 INFO instance/beat.go:551 Beat UUID: 6dfb9cab-2bb1-4f65-8a2b-d42d55988fbf
2019-01-29T13:23:47.878+0530 DEBUG [seccomp] seccomp/seccomp.go:88 Syscall filtering is only supported on Linux
2019-01-29T13:23:47.878+0530 INFO [beat] instance/beat.go:768 Beat info {"system_info": {"beat": {"path": {"config": "C:\Program Files\filebeat", "data": "C:\ProgramData\filebeat", "home": "C:\Program Files\filebeat", "logs": "C:\ProgramData\filebeat\logs"}, "type": "filebeat", "uuid": "6dfb9cab-2bb1-4f65-8a2b-d42d55988fbf"}}}
2019-01-29T13:23:47.880+0530 INFO [beat] instance/beat.go:777 Build info {"system_info": {"build": {"commit": "34b4e2cc75fbbee5e7149f3916de72fb8892d070", "libbeat": "6.4.0", "time": "2018-08-17T22:19:27.000Z", "version": "6.4.0"}}}
2019-01-29T13:23:47.880+0530 INFO [beat] instance/beat.go:780 Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":32,"version":"go1.10.3"}}}
2019-01-29T13:23:47.887+0530 INFO [beat] instance/beat.go:784 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-01-24T01:52:38.06+05:30","hostname":"OTTP16TANTGCAP1","ips":["192.168.8.65/24","::1/128","127.0.0.1/8","fe80::5efe:c0a8:841/128"],"kernel_version":"10.0.14393.2665 (rs1_release.181203-1755)","mac_addresses":["50:6b:8d:f9:5a:9f","00:00:00:00:00:00:00:e0"],"os":{"family":"windows","platform":"windows","name":"Windows Server 2016 Standard","version":"10.0","major":10,"minor":0,"patch":0,"build":"14393.2670"},"timezone":"IST","timezone_offset_sec":19800,"id":"b9acbc49-b973-454b-a9b0-e8209ded29d5"}}}
2019-01-29T13:23:47.887+0530 INFO instance/beat.go:273 Setup Beat: filebeat; Version: 6.4.0
2019-01-29T13:23:47.888+0530 DEBUG [beat] instance/beat.go:290 Initializing output plugins
2019-01-29T13:23:47.888+0530 DEBUG [processors] processors/processor.go:66 Processors:
2019-01-29T13:23:47.888+0530 DEBUG [tls] tlscommon/tls.go:155 successfully loaded CA certificate: E:\logstash\logstash.cer
2019-01-29T13:23:47.888+0530 DEBUG [publish] pipeline/consumer.go:137 start pipeline event consumer
2019-01-29T13:23:47.889+0530 INFO pipeline/module.go:98 Beat name: OTTPXTANTGCAP7
2019-01-29T13:23:47.890+0530 INFO instance/beat.go:367 filebeat start running.
2019-01-29T13:23:47.890+0530 INFO [monitoring] log/log.go:114 Starting metrics logging every 30s
2019-01-29T13:23:47.890+0530 INFO registrar/registrar.go:97 No registry file found under: C:\ProgramData\filebeat\registry. Creating a new registry file.
2019-01-29T13:23:47.890+0530 DEBUG [registrar] registrar/registrar.go:400 Write registry file: C:\ProgramData\filebeat\registry
2019-01-29T13:23:47.890+0530 DEBUG [service] service/service_windows.go:68 Windows is interactive: false
2019-01-29T13:23:47.896+0530 DEBUG [registrar] registrar/registrar.go:393 Registry file updated. 0 states written.
2019-01-29T13:23:47.896+0530 INFO registrar/registrar.go:134 Loading registrar data from C:\ProgramData\filebeat\registry
2019-01-29T13:23:47.896+0530 INFO registrar/registrar.go:141 States Loaded from registrar: 0
2019-01-29T13:23:47.896+0530 WARN beater/filebeat.go:371 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-01-29T13:23:47.896+0530 DEBUG [registrar] registrar/registrar.go:267 Starting Registrar
2019-01-29T13:23:47.896+0530 INFO crawler/crawler.go:72 Loading Inputs: 1
2019-01-29T13:23:47.896+0530 DEBUG [processors] processors/processor.go:66 Processors:
2019-01-29T13:23:47.896+0530 DEBUG [input] log/config.go:201 recursive glob enabled
2019-01-29T13:23:47.896+0530 DEBUG [input] log/input.go:147 exclude_files: . Number of stats: 0
2019-01-29T13:23:47.897+0530 DEBUG [input] log/input.go:168 input with previous states loaded: 0
2019-01-29T13:23:47.897+0530 INFO log/input.go:138 Configured paths: [E:\vrautu_logs\sample_logs\logs]
2019-01-29T13:23:47.897+0530 INFO input/input.go:114 Starting input of type: log; ID: 7269183810873742127
2019-01-29T13:23:47.897+0530 DEBUG [input] log/input.go:174 Start next scan
2019-01-29T13:23:47.897+0530 DEBUG [cfgfile] cfgfile/reload.go:108 Checking module configs from: C:\Program Files\filebeat/modules.d/*.yml
2019-01-29T13:23:47.897+0530 DEBUG [input] log/input.go:195 input states cleaned up. Before: 0, After: 0, Pending: 0
2019-01-29T13:23:47.897+0530 DEBUG [cfgfile] cfgfile/reload.go:122 Number of module configs found: 0
2019-01-29T13:23:47.897+0530 INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-01-29T13:23:47.897+0530 INFO cfgfile/reload.go:140 Config reloader started
2019-01-29T13:23:47.897+0530 DEBUG [cfgfile] cfgfile/reload.go:166 Scan for new config files
2019-01-29T13:23:47.897+0530 DEBUG [cfgfile] cfgfile/reload.go:185 Number of module configs found: 0
2019-01-29T13:23:47.897+0530 DEBUG [reload] cfgfile/list.go:70 Starting reload procedure, current runners: 0
2019-01-29T13:23:47.897+0530 DEBUG [reload] cfgfile/list.go:88 Start list: 0, Stop list: 0
2019-01-29T13:23:47.897+0530 INFO cfgfile/reload.go:195 Loading of config files completed.
2019-01-29T13:23:57.900+0530 DEBUG [input] input/input.go:152 Run input
2019-01-29T13:23:57.900+0530 DEBUG [input] log/input.go:174 Start next scan
2019-01-29T13:23:57.900+0530 DEBUG [input] log/input.go:195 input states cleaned up. Before: 0, After: 0, Pending: 0
2019-01-29T13:24:07.916+0530 DEBUG [input] input/input.go:152 Run input
2019-01-29T13:24:07.916+0530 DEBUG [input] log/input.go:174 Start next scan
2019-01-29T13:24:07.916+0530 DEBUG [input] log/input.go:195 input states cleaned up. Before: 0, After: 0, Pending: 0

Never mind, I was not looking carefully enough :)

I think you need to change this:

E:\vrautu_logs\sample_logs\logs

to

E:\vrautu_logs\sample_logs\logs\*.log (or whatever the extension is)
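That matches what the registry in the logs shows (`States Loaded from registrar: 0`): each `paths` entry is a glob over files, so a bare directory matches nothing. A sketch of the corrected input, assuming the files end in `.log`:

```yaml
- type: log
  enabled: true
  paths:
    - E:\vrautu_logs\sample_logs\logs\*.log
```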

Yes, sure pjanzen. Here it is:

filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - E:\vrautu_logs\sample_logs\logs

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the -setup CLI flag or the setup command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["https://x.x.x.x:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ['E:\logstash\logstash.cer']

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.