Logstash with beats issue

Installed the Logstash Beats input plugin:

./logstash-plugin install logstash-input-beats
Validating logstash-input-beats
Installing logstash-input-beats
Installation successful

When I run Logstash with a configuration file that reads input from Filebeat, I get the error below.

"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-06-27T15:54:05,768][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://dhrg:9200/"]}
[2018-06-27T15:54:06,668][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"10.78.102.73:5044"}
[2018-06-27T15:54:06,708][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6adadd10 run>"}
[2018-06-27T15:54:06,879][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-06-27T15:54:06,909][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-06-27T15:54:07,344][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-06-27T15:54:13,237][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
[2018-06-27T15:54:21,485][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-06-27T15:54:27,705][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>5044, host=>"10.78.102.73", id=>"6f44a9e3029c51148a1228b248e138fa8e73491a8b31870ddd1246d8f8f1d26c", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0a34a7a5-b2eb-4e25-9a67-f16008df7496", enable_metric=>true, charset=>"UTF-8">, ssl=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>16>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:223)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:128)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1283)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:989)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:364)

When I installed everything on Windows, I was able to read data from Filebeat using Logstash.

Can you please help me figure out what is missing?

Is 10.78.102.73 the address of the Logstash host?

It is the Filebeat host address.

Then you have misunderstood what the host option means. It's the network interface on the Logstash host on which to listen. Just remove that option; you don't need it.
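As a minimal sketch of what the beats input can look like once the host option is removed (the port number is taken from this thread; everything else is left at its default):

```
input {
  beats {
    # With no "host" set, the input listens on 0.0.0.0, i.e. on all
    # network interfaces of the Logstash machine.
    port => 5044
  }
}
```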


Thank you Mag. I thought this was the IP address of the Filebeat host. I have removed it. The error is gone now, but Logstash is not creating any index from the log files. Is it normal for the Beats input to show "0.0.0.0" as its address?

[2018-06-28T09:22:10,384][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-06-28T09:22:10,421][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6939d4df run>"}
[2018-06-28T09:22:10,545][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-06-28T09:22:10,562][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-06-28T09:22:10,973][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Do you think the Filebeat registry is causing the issue? Please advise.

I also tried to display events using rubydebug, but nothing appears on the screen.

logstash.conf entry:

stdout {
  codec => rubydebug
}

Logstash output while running:

[2018-06-28T11:59:48,090][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-06-28T11:59:48,123][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7fed6c70 run>"}
[2018-06-28T11:59:48,244][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-06-28T11:59:48,244][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-06-28T11:59:48,673][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Filebeat

I deleted the Filebeat registry before running Logstash. When I run Logstash, I can see data in the registry.

Mag, furthermore, I have also observed the following in the Filebeat log file:

2018-06-28T14:44:32.802-0400 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp 10.78.102.73:48056->10.78.102.106:5044: write: connection reset by peer
2018-06-28T14:44:33.802-0400 ERROR pipeline/output.go:92 Failed to publish events: write tcp 10.78.102.73:48056->10.78.102.106:5044: write: connection reset by peer

Is that normal to show "0.0.0.0" as a address of beats?

Yes. That address means "listen on all network interfaces".

2018-06-28T14:44:32.802-0400 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp 10.78.102.73:48056->10.78.102.106:5044: write: connection reset by peer
2018-06-28T14:44:33.802-0400 ERROR pipeline/output.go:92 Failed to publish events: write tcp 10.78.102.73:48056->10.78.102.106:5044: write: connection reset by peer

It looks like Logstash is closing the connection. Are you sure there aren't any log entries related to this? Make sure your SSL configuration is symmetrical in Logstash and Filebeat, i.e. it should either be on in both places or off in both places.
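To illustrate what a symmetrical setup looks like (assuming SSL is meant to be off on both sides), here is a sketch; the host and port values are taken from the logs in this thread:

```
# logstash.conf — beats input with SSL explicitly disabled (also the default)
input {
  beats {
    port => 5044
    ssl  => false
  }
}
```

```yaml
# filebeat.yml — plain TCP to Logstash, no "ssl" section configured
output.logstash:
  hosts: ["10.78.102.106:5044"]
```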

Thank you Mag. It finally worked after replacing the Logstash server name with its IP address in filebeat.yml.

I observed two issues after Filebeat started working:

  1. It looks like it is reading only the last lines of the file rather than the whole file.
  2. I have two Logstash config files for different deliveries. Both use the same source files coming from Filebeat. When I run Logstash with the second config file, it says "Error: Address already in use". Can you please advise how to run two Logstash processes, with different config files, that both consume the same Filebeat data?

It looks like it is reading only the last lines of the file rather than the whole file.

If you want to reprocess an old file with Filebeat you need to delete its registry file from disk.

I have two Logstash config files for different deliveries. Both use the same source files coming from Filebeat. When I run Logstash with the second config file, it says "Error: Address already in use". Can you please advise how to run two Logstash processes, with different config files, that both consume the same Filebeat data?

Do you really need to run multiple Logstash instances? It would be much easier if you just ran everything in a single instance.
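If the goal is separate grok logic per consumer, Logstash 6.x can run several pipelines inside one instance via `config/pipelines.yml`; note that only one of them can bind the Beats port, so the other would have to receive events another way (for example, conditional routing inside a single pipeline). A sketch with hypothetical paths:

```yaml
# config/pipelines.yml — two isolated pipelines in a single Logstash process
- pipeline.id: consumer_a
  path.config: "/etc/logstash/conf.d/consumer_a.conf"   # owns the beats input on 5044
- pipeline.id: consumer_b
  path.config: "/etc/logstash/conf.d/consumer_b.conf"   # must not also bind port 5044
```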

Mag, I removed the registry, but somehow it is still not reading the full content of the file.

In my scenario, I need two Logstash instances, because each one has different grok logic in its config file.

Please let me know the best way to work around this.

I also tried having Filebeat send log data to a folder on the server, and then using Logstash to read that folder. The problem is that Filebeat stores "prospector" header info along with each log record. Is there any way to avoid storing the Filebeat "prospector" metadata when Filebeat writes data to a file?

{"prospector":{"type":"log"},"host":{"name":"dhrg"},"@version":"1","beat":{"version":"6.3.0","hostname":"dsrc","name":"dsrc"},"message":"LOG ROW data"}

From the above, I would like to put only "LOG ROW data" in the target file via the Logstash file output plugin. The input is beats.

Mag, I removed the registry, but somehow it is still not reading the full content of the file.

That's probably something you should ask the Filebeat group about.

In my scenario, I need two Logstash instances, because each one has different grok logic in its config file.

To make a good suggestion of how to deal with the situation I need to understand why you need two instances.

I also tried having Filebeat send log data to a folder on the server, and then using Logstash to read that folder. The problem is that Filebeat stores "prospector" header info along with each log record. Is there any way to avoid storing the Filebeat "prospector" metadata when Filebeat writes data to a file?

I don't know, but you can always delete unwanted fields on the Logstash side.
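A sketch of deleting fields with the mutate filter; the field names below are taken from the sample event earlier in this thread and may differ in your actual events:

```
filter {
  mutate {
    # Drop Filebeat metadata fields; keep "message".
    remove_field => ["prospector", "beat", "host", "@version"]
  }
}
```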

Sure Mag. I will post this to the Filebeat group.

The reason I have multiple Logstash instances is that there is different logic for different consumers; different people write the Logstash logic for different consumer applications. I hope this gives you enough info.

Even though I delete unwanted fields in Logstash, the Filebeat metadata still shows up in the output.

Ex: {"input":{"type":"log"},"@timestamp":"2018-06-29T19:23:13.456Z","message":

I am trying to remove the metadata fields and want to see just the actual message content in the Logstash output file, i.e. I want to move the Filebeat data to a file and use that file as a source for the Logstash input.

Even though I delete unwanted fields in Logstash, the Filebeat metadata still shows up in the output.

Ex: {"input":{"type":"log"},"@timestamp":"2018-06-29T19:23:13.456Z","message":

So how, exactly, are you deleting the fields?

I am trying to remove the metadata fields and want to see just the actual message content in the Logstash output file, i.e. I want to move the Filebeat data to a file and use that file as a source for the Logstash input.

Make sure you're using the line codec for the file output in your Logstash configuration. That codec lets you configure the output format; I believe the default format includes the timestamp.
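For example, a file output that writes only the message field, using the line codec's format option (the path below is hypothetical):

```
output {
  file {
    path  => "/tmp/raw_rows.log"                 # hypothetical path
    codec => line { format => "%{message}" }     # emit just the message field
  }
}
```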

Thank you Mag. After adding the codec line, it is at least able to send raw log rows to the output file. I observed it is sending only the first 500 rows; I am going through some posts to resolve this 500-row limitation.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.