Unable to ingest logs into remote Elasticsearch server

Hi All,

First-time post, so please be gentle. I am pretty new to ELK but not to Linux.

I am running:

Server A - Remote Server
Filebeat 6.8.4
OS: Ubuntu 18.04

Server B - ELK Server
Elasticsearch 6.8.4
Kibana
Filebeat 6.8.4
OS: Ubuntu 18.04

The issue: I am unable to get logs from the remote server into Elasticsearch via Filebeat. I receive the following error on Server A (Remote Server), in /var/log/filebeat/filebeat:
"2019-11-19T15:26:23.362Z ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(https://my.elasticserver.mine:9200)): Get https://my.elasticserver.mine:9200: dial tcp my.elasticserver.mine:9200: connect: connection refused"

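If it is useful, the same endpoint can be tested outside of Filebeat with a plain curl from Server A (the host and credentials below are the placeholders from my config further down; -k just skips certificate verification for the test):

# Run from Server A: hit the same HTTPS endpoint Filebeat is configured to use
curl -k -u somename:some-password https://my.elasticserver.mine:9200
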
On Server B (ELK Server) I can run tcpdump and see the connection attempts arriving, so the firewalling appears to be OK (I think):

17:31:10.526984 IP xx.xx.xx.xx.57740 > my.elasticserver.mine.9200: Flags [S], seq 3340582548, win 29200, options [mss 1460,sackOK,TS val 761458590 ecr 0,nop,wscale 7], length 0
17:31:10.527018 IP my.elasticserver.mine.9200 > xx.xx.xx.xx.57740: Flags [R.], seq 0, ack 3340582549, win 0, length 0
17:31:10.531121 IP xx.xx.xx.xx.57742 > my.elasticserver.mine.9200: Flags [S], seq 62320025, win 29200, options [mss 1460,sackOK,TS val 761458600 ecr 0,nop,wscale 7], length 0
17:31:10.531140 IP my.elasticserver.mine.9200 > xx.xx.xx.xx.57742: Flags [R.], seq 0, ack 62320026, win 0, length 0
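
To rule things out on the Elasticsearch side, these are the sort of checks that can be run on Server B to confirm what is actually listening on port 9200 (assuming iproute2 and curl are installed; I have not pasted the output here):

# Run on Server B: show which local address(es) port 9200 is bound to
sudo ss -tlnp | grep 9200

# Run on Server B: query Elasticsearch locally with the same credentials
curl -k -u somename:some-password https://localhost:9200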

My filebeat.yml from /etc/filebeat is as follows:

###################### Filebeat Configuration Example #########################
filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/syslog
    - /var/log/*.log
    - /var/log/*/*.log
    #- c:\programdata\elasticsearch\logs\*
  exclude_files: ['.gz$']
  exclude_files: ['.?']

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.elasticsearch:
  hosts: ["my.elasticserver.mine:9200"]
  protocol: "https"
  username: "somename"
  password: "some-password"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
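
Filebeat's own built-in checks can also be run against this config (standard filebeat test subcommands; I have not included their output here):

# Run on Server A: validate the syntax of the config file
sudo filebeat test config -c /etc/filebeat/filebeat.yml

# Run on Server A: test the connection to the configured Elasticsearch output
sudo filebeat test output -c /etc/filebeat/filebeat.yml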

No matter what I change (passwords, etc.), I still cannot get logs from Server A (Remote Server) into Server B (ELK Server).

Can anyone spot where I am going wrong? To be clear, I am not shipping logs to Logstash on Server B (ELK Server); I am shipping, or at least want to ship, directly to Elasticsearch.

Here is the debug output from Server A (Remote Server):

2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:208 input states cleaned up. Before: 2, After: 2, Pending: 0
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:268 Exclude file: /var/log/syslog.2.gz
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:268 Exclude file: /var/log/syslog.3.gz
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:268 Exclude file: /var/log/syslog.4.gz
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:268 Exclude file: /var/log/syslog.5.gz
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:268 Exclude file: /var/log/syslog.6.gz
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:268 Exclude file: /var/log/syslog.7.gz
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:417 Check file for harvesting: /var/log/syslog
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:507 Update existing file for harvesting: /var/log/syslog, offset: 104857
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:559 Harvester for file is still running: /var/log/syslog
2019-11-19T17:16:50.036Z DEBUG [input] log/input.go:417 Check file for harvesting: /var/log/syslog.1
2019-11-19T17:16:50.037Z DEBUG [input] log/input.go:507 Update existing file for harvesting: /var/log/syslog.1, offset: 101093
2019-11-19T17:16:50.037Z DEBUG [input] log/input.go:559 Harvester for file is still running: /var/log/syslog.1
2019-11-19T17:16:50.037Z DEBUG [input] log/input.go:208 input states cleaned up. Before: 2, After: 2, Pending: 0
2019-11-19T17:16:58.201Z ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(https://my.elasticserver.mine:9200)): Get https://my.elasticserver.mine:9200: dial tcp my.elasticserver.mine:9200: connect: connection refused
2019-11-19T17:16:58.201Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(https://my.elasticserver.mine:9200)) with 15 reconnect attempt(s)
2019-11-19T17:16:58.201Z DEBUG [elasticsearch] elasticsearch/client.go:715 ES Ping(url=https://my.elasticserver.mine:9200)
2019-11-19T17:16:58.201Z INFO [publish] pipeline/retry.go:189 retryer: send unwait-signal to consumer
2019-11-19T17:16:58.201Z INFO [publish] pipeline/retry.go:191 done
2019-11-19T17:16:58.201Z INFO [publish] pipeline/retry.go:166 retryer: send wait signal to consumer
2019-11-19T17:16:58.201Z INFO [publish] pipeline/retry.go:168 done
2019-11-19T17:16:58.205Z DEBUG [elasticsearch] elasticsearch/client.go:719 Ping request failed with: Get https://77.68.95.148:9200: dial tcp my.elasticserver.mine:9200: connect: connection refused
2019-11-19T17:16:59.766Z DEBUG [input] input/input.go:152 Run input
2019-11-19T17:16:59.766Z DEBUG [input] log/input.go:187 Start next scan
2019-11-19T17:16:59.766Z DEBUG [input] log/input.go:268 Exclude file: /var/log/syslog
2019-11-19T17:16:59.767Z DEBUG [input] log/input.go:268 Exclude file: /var/log/alternatives.log
2019-11-19T17:16:59.767Z DEBUG [input] log/input.go:268 Exclude file: /var/log/auth.log
2019-11-19T17:16:59.767Z DEBUG [input] log/input.go:268 Exclude file: /var/log/bootstrap.log
2019-11-19T17:16:59.767Z DEBUG [input] log/input.go:268 Exclude file: /var/log/cloud-init-output.log
2019-11-19T17:16:59.767Z DEBUG [input] log/input.go:268 Exclude file: /var/log/cloud-init.log
2019-11-19T17:16:59.767Z DEBUG [input] log/input.go:268 Exclude file: /var/log/dpkg.log
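
For context, this level of detail comes from Filebeat's debug logging; a typical way to get it (not necessarily exactly how I captured the above) is to run Filebeat in the foreground with all debug selectors enabled:

# Run on Server A: foreground Filebeat with debug output for all selectors
sudo filebeat -e -d "*" -c /etc/filebeat/filebeat.yml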
