Sending logs to Logstash using Filebeat

I am trying to send log files to Logstash using Filebeat. Filebeat and Logstash are on different servers: the Filebeat machine's IP is 100.100.100.100 and the Logstash machine's is 200.200.200.200.

My filebeat conf:

output.logstash:
  # The Logstash hosts
  hosts: ["200.200.200.200:5044"]

My Logstash conf:

input {
	beats {
		host => "100.100.100.100"
		port => "5044"
	 }
}

My error is this:

[ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats host=>"100.100.100.100", port=>5044, type=>"test-xml", id=>"d747b023b96621fddcc34e8c7cf0e82f1e4c1f3dc01af6aa2af877d245b83b55", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_5018c0e0-0843-4cf8-b826-8d5803548e1d", enable_metric=>true, charset=>"UTF-8">, ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>32>
  Error: Cannot assign requested address: bind
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:223)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:128)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1283)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:989)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:364)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:403)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:463)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:858)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)

The host option in the beats input is the local address Logstash binds to. 100.100.100.100 belongs to the Filebeat machine, so Logstash cannot bind to it, which is why you get "Cannot assign requested address". Remove the host setting from the beats input:

input {
	beats {
		port => "5044"
	 }
}
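
The host option is only needed if you want to bind Logstash to one specific local interface; in that case it must be an address that exists on the Logstash server itself (the default is 0.0.0.0, i.e. all interfaces). A sketch, assuming 200.200.200.200 is an address configured on the Logstash box:

input {
	beats {
		host => "200.200.200.200"
		port => "5044"
	 }
}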

I have the same problem and will watch this thread for any workaround.
I have the same setup.

On the client side:
2019-02-15T00:07:12.184+0800 INFO log/harvester.go:255 Harvester started for file: /tmp/sensors.log

2019-02-15T00:07:38.528+0800 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":17020,"time":{"ms":6}},"total":{"ticks":35260,"time":{"ms":12},"value":35260},"user":{"ticks":18240,"time":{"ms":6}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"d70b72e6-c2de-4f3a-aff4-33b5e9b11b98","uptime":{"ms":158250031}},"memstats":{"gc_next":4293824,"memory_alloc":2154176,"memory_total":1610054080}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":1,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":5.54,"15":9.64,"5":4.98,"norm":{"1":0.2308,"15":0.4017,"5":0.2075}}}}}}

but nothing shows up on the Logstash server.

Do I have to start Filebeat on the Logstash server?

You do not need to tell Logstash the IP of the client. It will accept input from Filebeat running on any machine.

Then something is not set up correctly for me and the original poster.

I have the following on the system the logs are being shipped from:
cat filebeat.yml |grep -v '#' |sed '/^$/d'
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/sensors*.log
  scan_frequency: 10s
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["elktst01:5044"]
console:
  pretty: true
logging.level: debug

cat beats.conf |grep -v '#' | sed '/^$/d'
input {
  beats {
    port => "5044"
  }
}
filter {
}
output {
  stdout { codec => rubydebug }
}

But nothing is showing up in the Logstash output, no output at all.

Filebeat says it is harvesting the file, but nothing arrives on the other side.

So Filebeat is on a Windows machine and Logstash is running on CentOS. Is there no need to give any IP?

Not quite. In your Logstash beats input you do not have to specify an IP address; it will listen on all the interfaces it finds. Filebeat, however, still needs to know where to send the data. In your Filebeat logstash output you need to configure an IP that the Logstash server is listening on and that is reachable from wherever you run Filebeat.
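
In other words, the two sides look like this (a minimal sketch using the addresses from this thread):

# Logstash pipeline, running on 200.200.200.200
input {
	beats {
		port => "5044"
	 }
}

# filebeat.yml, on the machine producing the logs
output.logstash:
  hosts: ["200.200.200.200:5044"]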

Now I am able to harvest files with Filebeat; however, Logstash is not receiving anything from Filebeat. After every harvest I see something like this:

2019-02-15T09:41:18.796+0200    ERROR   pipeline/output.go:74   Failed to connect: dial tcp 200.200.200.200:5044: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

My logstash config is:

input {
	beats {
		port => "5044"
	 }
}
filter {
		mutate {
			gsub => ["message", "xsi:\w+=(?<grp1>\"|')\w+(:\w+)?\k<grp1>\s*", ""]
	}
}
filter {
	if( [message] =~ "^.*- Request -.*<"){
		mutate { gsub => [ "message", "^[^<]+<", "<" ] }
			xml {
				remove_namespaces => true
				store_xml => true
				source => "message"
				target => "Request"
				force_array => false
			}
			#prune
			mutate {
			  remove_field => ["message",
							   "[Request][Header]",
							   "[Request][xmlns:xsi]",
							   "[Request][xmlns:soap]",
							   "[Request][xmlns:tns]",
							   "[Request][xmlns:types]",
							   "[Request][xmlns:xsd]",
							   "[Request][xmlns:env]",
							   "[Request][xmlns:soapenc]",
							   "[Request][Body][soap:encodingStyle]",
							   "[Request][Body][env:encodingStyle]"
							  ]
			}
	}
	else if ( [message] =~ "^.*- Response -.*<"){
		mutate { gsub => [ "message", "^[^<]+<", "<" ] }
		xml {
			remove_namespaces => true
			store_xml => true
			source => "message"
			target => "Response"
			force_array => false
		}
		#prune
		mutate {
		  remove_field => ["message",
						   "[Response][Header]",
						   "[Response][xmlns:xsi]",
						   "[Response][xmlns:soap]",
						   "[Response][xmlns:tns]",
						   "[Response][xmlns:types]",
						   "[Response][xmlns:xsd]",
						   "[Response][xmlns:env]",
						   "[Response][xmlns:soapenc]",
						   "[Response][Body][soap:encodingStyle]"
						  ]
		}
	}
}

output {
	elasticsearch {
  		hosts => ["******.***********.com:9200"]
  		index => "testloger-%{+yyyy.MM.dd}"
  	}
}

Is 200.200.200.200 the IP that Logstash is listening on? From the output you show I highly doubt that...

Not sure what type of machine you are using (Linux/Unix or Windows), but you need to get the actual IP or hostname of the box Logstash is running on and put that in the filebeat.yml file under the logstash output.
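
For example, if 200.200.200.200 really is an address of the Logstash box and it is reachable from the Windows machine, the relevant section of filebeat.yml would be (a sketch):

output.logstash:
  # IP or hostname of the Logstash server, reachable from the Filebeat machine
  hosts: ["200.200.200.200:5044"]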

That's the filebeat.yml running on Windows:

###################### Filebeat Configuration Example #########################

    #=========================== Filebeat prospectors =============================

    filebeat.prospectors:

    # Each - is a prospector. Most options can be set at the prospector level, so
    # you can use different prospectors for various configurations.
    # Below are the prospector specific configurations.

    - type: log

      # Change to true to enable this prospector configuration.
      enabled: true

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - C:\Logs\Kurumlar\*.log
        #- c:\programdata\elasticsearch\logs\*

      ### Multiline options

     
      multiline.pattern: '^\['
      multiline.negate: true
      multiline.match: after

    #============================= Filebeat modules ===============================

    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml

      # Set to true to enable config reloading
      reload.enabled: true

      # Period on which files under path should be checked for changes
      reload.period: 10s

    #==================== Elasticsearch template setting ==========================

    setup.template.settings:
      index.number_of_shards: 3
      #index.codec: best_compression
      #_source.enabled: false

    #================================ General =====================================

    #============================== Dashboards =====================================
    
    #============================== Kibana =====================================

    #============================= Elastic Cloud ==================================

    #================================ Outputs =====================================

    #-------------------------- Elasticsearch output ------------------------------

    #----------------------------- Logstash output --------------------------------
    output.logstash:
      # The Logstash hosts
      hosts: ["200.200.200.200:5044"]
    #setup.template: 
    #  name: "testLog"
    #  pattern: "testLog-*"  
      # Optional SSL. By default is off.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"

Logstash .conf file which runs on CentOS:

input {
	beats {
		port => "5044"
	 }
}
filter {
		mutate {
			gsub => ["message", "xsi:\w+=(?<grp1>\"|')\w+(:\w+)?\k<grp1>\s*", ""]
	}
}
filter {
	if( [message] =~ "^.*- Request -.*<"){
		mutate { gsub => [ "message", "^[^<]+<", "<" ] }
			xml {
				remove_namespaces => true
				store_xml => true
				source => "message"
				target => "Request"
				force_array => false
			}
			#prune
			mutate {
			  remove_field => ["message",
							   "[Request][Header]",
							   "[Request][xmlns:xsi]",
							   "[Request][xmlns:soap]",
							   "[Request][xmlns:tns]",
							   "[Request][xmlns:types]",
							   "[Request][xmlns:xsd]",
							   "[Request][xmlns:env]",
							   "[Request][xmlns:soapenc]",
							   "[Request][Body][soap:encodingStyle]",
							   "[Request][Body][env:encodingStyle]"
							  ]
			}
	}
	else if ( [message] =~ "^.*- Response -.*<"){
		mutate { gsub => [ "message", "^[^<]+<", "<" ] }
		xml {
			remove_namespaces => true
			store_xml => true
			source => "message"
			target => "Response"
			force_array => false
		}
		#prune
		mutate {
		  remove_field => ["message",
						   "[Response][Header]",
						   "[Response][xmlns:xsi]",
						   "[Response][xmlns:soap]",
						   "[Response][xmlns:tns]",
						   "[Response][xmlns:types]",
						   "[Response][xmlns:xsd]",
						   "[Response][xmlns:env]",
						   "[Response][xmlns:soapenc]",
						   "[Response][Body][soap:encodingStyle]"
						  ]
		}
	}
}

output {
	elasticsearch {
  		hosts => ["*********.**********.com:9200"]
  		index => "testloger-%{+yyyy.MM.dd}"
  	}
}

If you run

ifconfig -a

do you see the IP 200.200.200.200 anywhere?

You can check whether the CentOS box is listening on port 5044 with

sudo netstat -tlpnu
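
From the Windows machine running Filebeat you can also test whether the port is reachable at all, for example with PowerShell (a sketch; assumes a recent Windows with the Test-NetConnection cmdlet available):

Test-NetConnection 200.200.200.200 -Port 5044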

With ifconfig -a I am able to see 200.200.200.200:

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 200.200.200.200  netmask 255.255.255.0  broadcast 192.168.100.100
        inet6 fe80::e551:6b1e:d3f1:5a36  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:8f:ae:99  txqueuelen 1000  (Ethernet)
        RX packets 847254  bytes 57322607 (54.6 MiB)
        RX errors 0  dropped 974  overruns 0  frame 0
        TX packets 5559  bytes 811732 (792.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 64  bytes 5792 (5.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 64  bytes 5792 (5.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

with sudo netstat -tlpnu

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2              0.0.0.0:*               LISTEN      6080/sshd
tcp        0      0 17.0.0.1:25            0.0.0.0:*               LISTEN      6313/master
tcp6       0      0 :::5044                 :::*                    LISTEN      9162/java
tcp6       0      0 :::22                   :::*                    LISTEN      6080/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      6313/master
tcp6       0      0 17.0.0.1:9600          :::*                    LISTEN      9162/java
udp        0      0 17.0.0.1:323           0.0.0.0:*                           5549/chronyd
udp6       0      0 ::1:323                 :::*                                5549/chronyd

I wonder if you have a problem with your network setup on the box.

If you're using the IP block 200.200.200.0/24 you cannot have a broadcast address of 192.168.100.100.

If I do an ipcalc on the displayed network I get the following.

pjanzen@Pauls-MacBook-Pro:~$ ipcalc 200.200.200.200 255.255.255.0
Address:   200.200.200.200      11001000.11001000.11001000. 11001000
Netmask:   255.255.255.0 = 24   11111111.11111111.11111111. 00000000
Wildcard:  0.0.0.255            00000000.00000000.00000000. 11111111
=>
Network:   200.200.200.0/24     11001000.11001000.11001000. 00000000
HostMin:   200.200.200.1        11001000.11001000.11001000. 00000001
HostMax:   200.200.200.254      11001000.11001000.11001000. 11111110
Broadcast: 200.200.200.255      11001000.11001000.11001000. 11111111
Hosts/Net: 254                   Class C

So I am unsure how I can help you solve this issue...

Apparently the problem was the firewall on CentOS. I turned it off and it just worked.
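
For anyone finding this later: rather than disabling the firewall entirely, the Beats port can be opened on its own, a sketch assuming firewalld on CentOS 7:

sudo firewall-cmd --permanent --add-port=5044/tcp
sudo firewall-cmd --reload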

Cool, happy testing then :)

I guess I'll open another thread as my problem is still not fixed.
