Creating a pipeline using Beats on Windows and sending logs to Logstash on Linux


I am creating a pipeline that uses Beats on Windows to collect logs (txt files in multiple directories) and send them to Logstash on a Linux server, and from there to Elasticsearch/Kibana.

While running the Logstash conf pipeline I am getting the error "Cannot assign requested address".

Below are the configs. Please let me know if any modification is required, or whether the error is coming from somewhere else.

filebeat.yml on the Windows system:

# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\Test_Data\*
    - D:\Hi\*
    #- c:\programdata\elasticsearch\logs\*

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: [""] #Linux Host ip

logstash.conf file on Linux:

input {
  beats {
    host => ""
    port => 5044
  }
}


Error: Cannot assign requested address

[INFO ] 2021-11-15 16:45:47.684 [[main]<beats] Server - Starting server on port: 5044
[ERROR] 2021-11-15 16:45:53.729 [[main]<beats] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Beats host=>"", port=>5044, id=>"<>", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"<>", enable_metric=>true, charset=>"UTF-8">, ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>4>
  Error: Cannot assign requested address
  Exception: Java::JavaNet::BindException
  Stack: Method)
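For context on what this BindException means: "Cannot assign requested address" is the operating system refusing to bind a listener to an IP address that is not assigned to any local interface, and with host => "" the beats input has no valid address to bind to. The Python sketch below is only an illustration (not part of the Elastic stack) that reproduces the same OS-level error:

```python
import errno
import socket

def can_bind(host: str, port: int) -> bool:
    """Try to bind a TCP listening socket to (host, port).

    Returns False on EADDRNOTAVAIL -- the OS-level cause of the
    "Cannot assign requested address" error Logstash reports when
    its beats input is told to bind to an address the machine
    does not own.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
        return True
    except OSError as exc:
        if exc.errno == errno.EADDRNOTAVAIL:
            return False
        raise
    finally:
        sock.close()

print(can_bind("0.0.0.0", 0))       # wildcard address: always bindable
print(can_bind("203.0.113.1", 0))   # TEST-NET address the host does not own
```

In Logstash terms: either omit the host option (the beats input defaults to 0.0.0.0, i.e. all interfaces) or set it to an IP address that actually exists on the Linux server.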

Ref link: [Configure the Logstash output | Filebeat Reference [7.15] | Elastic]
Thank you!!!

Make sure the port is open between these two machines.
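A quick way to verify that from the Windows side is a plain TCP connect test against the beats port; the sketch below is a generic helper (the IP in the comment is a placeholder), equivalent in spirit to PowerShell's Test-NetConnection or nc -zv on Linux:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address -- substitute the Linux server's IP:
# port_open("192.0.2.10", 5044)
```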

Correct, @ahmed_charafouddine, it worked.

A little query: how can I keep the Filebeat and Logstash pipelines running 24*7 so that real-time logs are fetched into Elasticsearch? Currently I start them manually with:

.\filebeat.exe -c filebeat.yml

sudo bin/logstash -f /etc/logstash/conf.d/<...>.conf

I run Filebeat in PowerShell and found start and stop commands for it, but I didn't find a command to check Filebeat's status the way I can for Logstash:

sudo systemctl status logstash.service

PS> Start-Service filebeat

PS> Stop-Service filebeat
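Regarding the status question: when Filebeat is installed as a Windows service, PowerShell's built-in Get-Service cmdlet reports its state (this assumes the service is registered under the name filebeat, matching the Start/Stop commands above):

```powershell
PS> Get-Service filebeat
```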

When I run the Logstash conf file manually, the results show up in Kibana.

When I schedule the pipeline with schedule => { every => "15s" } or with a cron scheduler, it does not work:

input {
  beats {
    port => <....>
    schedule => { cron => "1 * * * * UTC" }
  }
}

Currently I am running the Filebeat and Logstash pipelines manually for testing, and they work fine.


There is no schedule option in the beats input; remove that line.

Should be:

input {
  beats {
    port => "port-number"
  }
}
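For completeness, a minimal full pipeline built around that input might look like the following; the Elasticsearch host and index name here are placeholder assumptions, not values from this thread:

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # assumption: Elasticsearch on the same host
    index => "filebeat-logs"             # hypothetical index name
  }
}
```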

Hi @leandrojmp ,

Got it. But currently, to get data into Kibana, I am running the Logstash conf file manually, and it works fine:

sudo bin/logstash -f /etc/logstash/conf.d/<...>.conf

Is there anything I can do to schedule it, or will it run automatically and ingest data into Elasticsearch as soon as a new log lands in the directory?

Or is there a way to keep the Logstash pipeline running on Linux 24*7 to resolve this?

Filebeat is running on the Windows server using this command:

PS> Start-Service filebeat

Logstash is installed and running on the Linux server.


> Is there anything I can do to schedule it, or will it run automatically and ingest data into Elasticsearch as soon as a new log lands in the directory?

After stopping and starting Logstash, the beats input plugin starts sending data through Logstash to Elasticsearch automatically, so there is no need to schedule it; it sends data as soon as new data is present in the directory.

Stopping and starting Logstash does the job:

sudo systemctl stop logstash.service
sudo systemctl start logstash.service
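To keep Logstash running 24*7 across reboots, enabling the systemd unit (rather than just starting it) is the usual approach; this assumes Logstash was installed from a package that ships a systemd unit:

```
sudo systemctl enable logstash   # start automatically at boot
sudo systemctl start logstash    # start it now
sudo systemctl status logstash   # verify that it is running
```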

Thanks once again

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.