How to send CSV from Filebeat to Logstash

Hello guys,

Please bear with the noobness of this thread.

My objective here is to send a CSV file from Filebeat into a Logstash-Elasticsearch-Kibana stack.

Here is my filebeat.yml:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/domono/domono.csv

output.logstash:
  hosts: ["[ELK IP]:5044"]
  bulk_max_size: 16384
  path: "/tmp/filebeat"
  filename: filebeat
  rotate_every_kb: 10000
  pretty: true
  timeout: 10
  pipelining: 1
  compression_level: 9

Here is my logstash input config:

#tcp domono stream via 5044
input {
  tcp {
    type => "domono_log"
    port => 5044
  }
}

Here is the Filebeat log I am getting:

INFO Setup Beat: filebeat; Version: 5.3.0
INFO Max Retries set to: 3
INFO Activated logstash as output plugin.
INFO Publisher name: domono
INFO Flush Interval set to: 1s
INFO Max Bulk Size set to: 16384
INFO filebeat start running.
INFO Registry file set to: /var/lib/filebeat/registry
INFO Loading registrar data from /var/lib/filebeat/registry
INFO States Loaded from registrar: 0
INFO Loading Prospectors: 1
INFO Starting Registrar
INFO Start sending events to output
INFO Prospector with previous states loaded: 0
INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
INFO Starting prospector of type: log; id: 14136332072992873344
INFO Loading and starting Prospectors completed. Enabled prospectors: 1
INFO Harvester started for file: /var/log/domono/domono.csv
ERR Failed to publish events caused by: read tcp [Filebeat IP]:45240->[ELK IP]:5044: i/o timeout
INFO Error publishing events (retrying): read tcp [Filebeat IP]:45240->[ELK IP]:5044: i/o timeout
ERR Failed to publish events caused by: read tcp [Filebeat IP]:45242->[ELK IP]:5044: i/o timeout
INFO Error publishing events (retrying): read tcp [Filebeat IP]:45242->[ELK IP]:5044: i/o timeout
INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.logstash.call_count.PublishEvents=3 libbeat.logstash.publish.read_errors=2 libbeat.logstash.publish.write_bytes=1022 libbeat.logstash.published_but_not_acked_events=32 libbeat.publisher.published_events=16
ERR Failed to publish events caused by: read tcp [Filebeat IP]:45244->[ELK IP]:5044: i/o timeout
INFO Error publishing events (retrying): read tcp [Filebeat IP]:45244->[ELK IP]:5044: i/o timeout

I've tried adding the bulk_max_size setting, but I'm still getting those errors.

Please help a noob here.

Use the beats input plugin in Logstash instead of the TCP plugin.
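Filebeat's Logstash output speaks the Beats (lumberjack) protocol rather than raw TCP, which is why the plain tcp input just sits there until Filebeat hits the i/o timeout. As a rough sketch of where you are heading (the csv columns and the Elasticsearch host/index below are only placeholders, since I don't know your CSV layout or cluster address), a minimal pipeline could look like:

input {
  beats {
    port => 5044
  }
}

filter {
  # illustrative only: replace these columns with your real CSV header
  csv {
    separator => ","
    columns => ["col1", "col2", "col3"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "domono-%{+YYYY.MM.dd}"
  }
}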

Hi Chris,

I installed the logstash-input-beats plugin and changed my Logstash input config to:

input {
  beats {
    port => 5044
  }
}

I also see this line in logstash log:

[logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}

BUT, I am still getting the same error in Filebeat:

ERR Failed to publish events caused by: read tcp [Filebeat IP]:45240->[ELK IP]:5044: i/o timeout
INFO Error publishing events (retrying): read tcp [Filebeat IP]:45240->[ELK IP]:5044: i/o timeout

Does Filebeat need to be on the same version as the ELK stack?
My Filebeat is 5.3.
My ELK stack is 5.2.

Cheers!

Can you telnet to port 5044 of the Logstash server from the machine where Filebeat is running?

Yes I can:

root@ubuntu:/home/ubuntu# telnet [ELK IP] 5044
Trying [ELK IP]...
Connected to [ELK IP].
Escape character is '^]'.

Is there anything in the network, like a firewall or load balancer, between FB and LS?

Hi Ruflin,

No, there isn't any firewall or load balancer...

Can you share your current LS config again? Do you only have 1 input or 2 inputs enabled?

Ruflin,

I have a few inputs in LS:

#tcp syslog stream via 5140
input {
  tcp {
    type => "syslog"
    port => 5140
  }
}
#udp syslog stream via 5140
input {
  udp {
    type => "syslog"
    port => 5140
  }
}

#filebeat domono stream via 5044
input {
  beats {
    port => 5044
  }
}

It should not have an effect, but could you try LS with just the beats input enabled? Do you see any log messages on the LS side?
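If it helps, one way to isolate it (the file path here is just an example) is to point Logstash at a throwaway config that contains only the beats input and a stdout output:

# /tmp/beats-only.conf - hypothetical test config
input {
  beats {
    port => 5044
  }
}

output {
  stdout { codec => rubydebug }
}

Then start it with bin/logstash -f /tmp/beats-only.conf and watch whether events from Filebeat show up on stdout.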
