Filebeat service doesn't send logs

Hello everyone,

I'm configuring Filebeat to read logs from an XML file.

When I do tests with the exe, I get the logs in Logstash/Elasticsearch and I can work on them in Kibana, but when I run Filebeat as a service I don't get any logs anymore.

This is my current configuration file.

- type: log
  enabled: true
  reload.enabled: true
  reload.period: 60s

  path: ${path.config}/modules.d/*.yml

  hosts: ["myhost:5044"]

  - add_host_metadata: ~

And here my Logstash configuration

    # Taking data over beats
    input {
      beats {
        port => 5044
        tags => [ "mytag" ]
      }
    }

    # Filtering data
    filter {
      if "checkmytag" in [tags] {
        xml { my parsing is made here }
      }
    }

    # Output to Elasticsearch
    output {
      if "cifs-auditing" in [tags] {
        elasticsearch {
          template_overwrite => true
          hosts => "localhost:9200"
          index => "myfilename-%{+YYYY.MM.dd}"
        }
        stdout { codec => rubydebug }
      }
    }
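For reference, a typical `xml` filter stanza looks like the sketch below. The actual parsing in this thread is elided above; the `source` and `target` values here are only illustrative.

```
filter {
  xml {
    source => "message"   # field containing the raw XML string
    target => "doc"       # field to store the parsed document in
  }
}
```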

What I would like is for Filebeat to read the file every 5 minutes and pass the additions to Logstash. I can't find the error I'm making, and I've been at it for quite a while, so I'm turning to you.

PS: I'm completely new to Elastic.

Thank you in advance.
Sincerely, Romain.

Hey @Roms, welcome to discuss :slight_smile:

I think the first thing to do would be to identify where in the data chain your logs are being lost. You can try enabling debug logging in Filebeat to see whether it is reading the files and sending events to Logstash. If it is not, you will have to review the configuration.

For example, I see that you are using backslashes in your paths. When these characters are used, it is recommended to quote the paths with single quotes, because backslashes are used for escaping other characters and can lead to unexpected results. Also, the paths should be configured as a list. I'm not sure what kind of paths you want to collect files from, but I recommend adding them like this:

    - '\\mynetworkpathing'

Also, the lines about configuration reload are not expected at the input level; they are used when including external configuration files, for example:

- type: log
  enabled: true
  paths:
    - '\\mynetworkpathing'

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 60s

But if you are not using modules, you can remove this filebeat.config.modules section entirely.
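For reference, the fragments in the original configuration (input, Logstash output, and processor) would normally live in separate top-level sections of filebeat.yml. A minimal sketch, reusing the host and path placeholders from this thread:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - '\\mynetworkpathing'

output.logstash:
  hosts: ["myhost:5044"]

processors:
  - add_host_metadata: ~
```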

How are you using logstash in this scenario? Is "the exe" your application?


Hello jsoriano,

I'm using Filebeat.exe and it sends logs to my Logstash, but only once.

How can I configure Filebeat to check my logs every 5 minutes?

I also caught this strange error in my Logstash:

[2020-03-06T10:45:00,422][INFO ][][main] Starting server on port: 5044
[2020-03-06T10:45:06,845][ERROR][logstash.javapipeline    ][main] A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Beats port=>5044, tags=>["mytags"], id=>"c2dc84a0d528db334d1485e14900eb9be9707e471815f0e2881c1a912af179c6", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_5e08c25a-bc2c-498c-b322-628057dfe54f", enable_metric=>true, charset=>"UTF-8">, host=>"", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"], client_inactivity_timeout=>60, executor_threads=>2>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: Method)$AbstractUnsafe.bind(io/netty/channel/$HeadContext.bind(io/netty/channel/

Edit 1: This is my log from Filebeat:

2020-03-06T13:55:18.607+0100    DEBUG   [input] input/input.go:152      Run input
2020-03-06T13:55:18.607+0100    DEBUG   [input] log/input.go:191        Start next scan
2020-03-06T13:55:18.611+0100    DEBUG   [input] log/input.go:212        input states cleaned up. Before: 0, After: 0, Pending: 0
2020-03-06T13:57:58.601+0100    INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":109},"total":{"ticks":124,"value":124},"user":{"ticks":15}},"handles":{"open":226},"info":{"epheme
2020-03-06T13:58:00.898+0100    DEBUG   [input] input/input.go:152      Run input
2020-03-06T13:58:02.369+0100    DEBUG   [input] log/input.go:191        Start next scan
2020-03-06T13:58:02.369+0100    DEBUG   [input] log/input.go:212        input states cleaned up. Before: 0, After: 0, Pending: 0

What I understand from these logs is that it no longer reads the file. The first time I run my Filebeat it works fine and sends data, but afterwards it doesn't send any more data, and I'm sure the file grows over time, because this XML captures everything that is done on the file server.

Filebeat continuously monitors the configured paths, so if any new log is added to the file, it should be collected.
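The interval at which the log input scans the configured paths for new or changed files is controlled by the input's scan_frequency option. A minimal sketch, using 5m to match the interval mentioned earlier (the path is a placeholder):

```yaml
- type: log
  enabled: true
  paths:
    - '\\mynetworkpathing'
  # How often Filebeat checks the paths for new or updated files
  # (the default is 10s); an already-open file keeps being harvested.
  scan_frequency: 5m
```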

Filebeat is designed for files that are written sequentially, and possibly rotated after some time (as usual log files are). How is the content of this file being written?

The error in Logstash indicates that there is already a service listening on port 5044; is it possible that you are running another Logstash instance?

My file looks like

<EventMain >

It's always being written to. It's a CIFS auditing file from NetApp.
That's why I don't understand why I only get logs on startup.
Doesn't Filebeat work for something like that?

And of course it gets rotated every 2 hours by the logging system.

Thank you for your reply.

Is the file truncated every time an XML document is written? Or does the file contain multiple XML objects one after the other?

It is a file in which XML lines are continuously written. When the service starts, it reads the data and sends it to Logstash, and Elasticsearch indexes it without any problem; I see them in Kibana.
But then the file is reported as having no modifications, even though it is continuously fed, since every action is transcribed into the file.

That's where I'm lost: how can I get Filebeat to detect the new lines? Do I have to implement some kind of tag on the last line it read? Do I need to change the Filebeat configuration? Do I have to use another product than Filebeat?

Thank you for the support.

But are the lines appended? Or do they replace the content of the file?

Filebeat keeps track of the last line read of every file it reads. If new lines are appended, they are automatically collected; no special options are needed for that, this is the normal Filebeat behaviour. If written XML lines replace the existing content in the file, that may be a problem, because Filebeat only detects changes after the last line read. If content in the file is being continuously replaced, a different strategy may be needed.
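The append-versus-replace distinction can be sketched in a few lines of Python. This is purely illustrative of offset-based tailing, not Filebeat's actual implementation; `read_new_lines` is an invented name:

```python
def read_new_lines(path, offset):
    """Return lines written after `offset`, plus the new offset.

    Mimics offset-based tailing: appended lines are picked up on the
    next call, while a file that shrank (was truncated or replaced)
    is re-read from the start.
    """
    with open(path, "r") as f:
        f.seek(0, 2)           # jump to the end to measure current size
        size = f.tell()
        if size < offset:      # file shrank: content was replaced
            offset = 0         # start over from the beginning
        f.seek(offset)
        return f.readlines(), f.tell()
```

Calling this in a loop and passing the returned offset back in picks up appended lines only, which is why in-place rewrites that keep the same length go unnoticed by an offset-based reader.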

The file is written continuously and is saved under a different name after a certain period of time.

The logs are contained in the file for one day before being saved under another name.

(There is also the problem that Filebeat only reads the logs at startup; I see in Kibana that the read time stays the same, and it no longer feeds Logstash.)

So logically I can't use Filebeat under these conditions and have to turn to another tool. Which one do you think would do the job well?

It would be good to know what is special about this file, or about this deployment; you may have similar problems with any other tool if you only replace Filebeat.

To try to isolate things and identify where the problem is, you might run Filebeat with console output and check in the logs whether it is collecting events from this file.

- type: log
  enabled: true
  paths:
    - '\\mynetworkpathing'

output.console:
  pretty: true

At the start I get my full file log, and then I get this in a loop:

Edit 1: I also performed actions that are logged in this file.

And this line pops up from time to time.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.