Filebeat unable to connect to Logstash server

Hi Team,

I have installed Filebeat on a server with IP 192.168.x.x, and Filebeat is trying to send events to two Logstash servers with IPs in the 10.20.x.x range.

Currently I see the Filebeat service failing continuously; it is unable to send logs to Logstash and is failing to connect to Kibana as well.

filebeat.yml:

  - type: log
    fields_under_root: true
    fields:
      log_type: federate_server1
      app_id: fs
    multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:|^java|^...|^-'
    multiline.negate: true
    multiline.match: after
    paths:
      - /opt/federate/log/*
 
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
 
setup.dashboards.enabled: true
setup.kibana:
  host: "http://10.20.x.1:5601"
  username: elastic
  password: ${es_pwd}
 
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
 
output.logstash:
  hosts: ['10.20.x.1:5044', '10.20.x.2:5044']
  loadbalance: true
 

Error logs:

Sep 24 19:40:48 <hostname> filebeat[16125]: 2021-09-24T19:40:48.992+0300        ERROR        instance/beat.go:989        Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .

Sep 24 19:40:48 <hostname> systemd[1]: Unit filebeat.service entered failed state.
Sep 24 19:40:48 <hostname> systemd[1]: filebeat.service failed.

Sep 24 19:42:46 <hostname> heartbeat: 2021-09-24T19:42:46.115+0300#011ERROR#011[publisher_pipeline_output]#011pipeline/output.go:154#011Failed to connect to backoff(async(tcp://10.20.x.2:5044)): dial tcp 10.20.x.2:5044: i/o timeout

Sep 24 19:43:27 <hostname> metricbeat: Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": dial tcp 10.20.x.1:5601: i/o timeout (Client.Timeout exceeded while awaiting headers). Response: .
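For what it's worth, the failing call in these errors is just an HTTP GET against Kibana's /api/status endpoint, so it can be reproduced outside the Beats with curl — a sketch, using the thread's placeholder host, with -m capping the wait to roughly mirror the client timeout in the logs:

```shell
# Reproduce Filebeat's Kibana version check by hand.
# -s: silent, -m 5: give up after 5 seconds (the Beats log shows a client timeout)
curl -s -m 5 "http://10.20.x.1:5601/api/status" \
  || echo "Kibana status endpoint not reachable"
```

If this prints the fallback message instead of a JSON status document, the problem is reachability of port 5601, not Filebeat itself.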

Complete logs:

[root@<hostname> ~]# systemctl restart filebeat; journalctl -fu filebeat
 
-- Logs begin at Wed 2021-08-18 14:14:36 +03. --
Sep 24 19:40:48 <hostname> filebeat[16125]: 2021-09-24T19:40:48.992+0300        INFO        [monitoring]        log/log.go:154        Uptime: 1m30.091989299s
Sep 24 19:40:48 <hostname> filebeat[16125]: 2021-09-24T19:40:48.992+0300        INFO        [monitoring]        log/log.go:131        Stopping metrics logging.
Sep 24 19:40:48 <hostname> filebeat[16125]: 2021-09-24T19:40:48.992+0300        INFO        instance/beat.go:470        filebeat stopped.
Sep 24 19:40:48 <hostname> filebeat[16125]: 2021-09-24T19:40:48.992+0300        ERROR        instance/beat.go:989        Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:40:48 <hostname> filebeat[16125]: Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:40:48 <hostname> systemd[1]: filebeat.service: main process exited, code=exited, status=1/FAILURE
Sep 24 19:40:48 <hostname> systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Sep 24 19:40:48 <hostname> systemd[1]: Unit filebeat.service entered failed state.
Sep 24 19:40:48 <hostname> systemd[1]: filebeat.service failed.
Sep 24 19:40:48 <hostname> systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..
 
Sep 24 19:42:19 <hostname> filebeat[16757]: 2021-09-24T19:42:19.160+0300        ERROR        instance/beat.go:989        Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:42:19 <hostname> filebeat[16757]: Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:42:19 <hostname> systemd[1]: filebeat.service: main process exited, code=exited, status=1/FAILURE
Sep 24 19:42:19 <hostname> systemd[1]: Unit filebeat.service entered failed state.
Sep 24 19:42:19 <hostname> systemd[1]: filebeat.service failed.
Sep 24 19:42:19 <hostname> systemd[1]: filebeat.service holdoff time over, scheduling restart.
Sep 24 19:42:19 <hostname> systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Sep 24 19:42:19 <hostname> systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..
Sep 24 19:42:19 <hostname> filebeat[17279]: 2021-09-24T19:42:19.456+0300        INFO        instance/beat.go:665        Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
Sep 24 19:42:19 <hostname> filebeat[17279]: 2021-09-24T19:42:19.460+0300        INFO        [beat]        instance/beat.go:1030        Host info        {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-08-18T14:14:34+03:00","containerized":false,"name":"<hostname>","ip":["127.0.0.1/8","::1/128","192.168.x.x/24","fe80::50f9:284e:240d:8471/64"],"kernel_version":"3.10.0-1160.36.2.el7.x86_64","mac":["ec:eb:b8:98:a2:2c","ec:eb:b8:98:a2:2d","ec:eb:b8:98:a2:2e","ec:eb:b8:98:a2:2f"],"os":{"type":"linux","family":"redhat","platform":"rhel","name":"Red Hat Enterprise Linux Server","version":"7.9 (Maipo)","major":7,"minor":9,"patch":0,"codename":"Maipo"},"timezone":"+03","timezone_offset_sec":10800,"id":"c010ccef34dc4f06bb8861c24e7ea9ad"}}}
Sep 24 19:42:19 <hostname> filebeat[17279]: 2021-09-24T19:42:19.461+0300        INFO        instance/beat.go:309        Setup Beat: filebeat; Version: 7.14.0
Sep 24 19:42:19 <hostname> filebeat[17279]: 2021-09-24T19:42:19.461+0300        INFO        [publisher]        pipeline/module.go:113        Beat name: server1
Sep 24 19:42:19 <hostname> filebeat[17279]: 2021-09-24T19:42:19.462+0300        WARN        beater/filebeat.go:178        Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
Sep 24 19:42:19 <hostname> filebeat[17279]: 2021-09-24T19:42:19.462+0300        INFO        [monitoring]        log/log.go:118        Starting metrics logging every 30s
Sep 24 19:42:19 <hostname> filebeat[17279]: 2021-09-24T19:42:19.462+0300        INFO        kibana/client.go:122        Kibana url: http://10.20.x.1:5601
Sep 24 19:42:22 <hostname> filebeat[17279]: 2021-09-24T19:42:22.460+0300        INFO        [add_cloud_metadata]        add_cloud_metadata/add_cloud_metadata.go:101        add_cloud_metadata: hosting provider type not detected.
Sep 24 19:42:49 <hostname> filebeat[17279]: 2021-09-24T19:42:49.470+0300        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":60,"time":{"ms":61}},"total":{"ticks":270,"time":{"ms":272},"value":270},"user":{"ticks":210,"time":{"ms":211}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"3abca521-e970-411d-956c-413490ed2643","uptime":{"ms":30073},"version":"7.14.0"},"memstats":{"gc_next":18712048,"memory_alloc":11335152,"memory_sys":78988296,"memory_total":55313336,"rss":105451520},"runtime":{"goroutines":16}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0},"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":32},"load":{"1":0.03,"15":0.05,"5":0.02,"norm":{"1":0.0009,"15":0.0016,"5":0.0006}}}}}}
 

Sep 24 19:42:19 <hostname> filebeat: 2021-09-24T19:42:19.160+0300#011ERROR#011instance/beat.go:989#011Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:42:19 <hostname> filebeat: Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:42:46 <hostname> heartbeat: 2021-09-24T19:42:46.115+0300#011ERROR#011[publisher_pipeline_output]#011pipeline/output.go:154#011Failed to connect to backoff(async(tcp://10.20.x.2:5044)): dial tcp 10.20.x.2:5044: i/o timeout
Sep 24 19:43:11 <hostname> heartbeat: 2021-09-24T19:43:11.412+0300#011ERROR#011[publisher_pipeline_output]#011pipeline/output.go:154#011Failed to connect to backoff(async(tcp://10.20.x.1:5044)): dial tcp 10.20.x.1:5044: i/o timeout
Sep 24 19:43:27 <hostname> metricbeat: 2021-09-24T19:43:27.573+0300#011ERROR#011instance/beat.go:989#011Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": dial tcp 10.20.x.1:5601: i/o timeout (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:43:27 <hostname> metricbeat: Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": dial tcp 10.20.x.1:5601: i/o timeout (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:43:49 <hostname> filebeat: 2021-09-24T19:43:49.475+0300#011ERROR#011instance/beat.go:989#011Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:43:49 <hostname> filebeat: Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://10.20.x.1:5601/api/status fails: fail to execute the HTTP GET request: Get "http://10.20.x.1:5601/api/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Response: .
Sep 24 19:44:01 <hostname> heartbeat: 2021-09-24T19:44:01.699+0300#011ERROR#011[publisher_pipeline_output]#011pipeline/output.go:154#011Failed to connect to backoff(async(tcp://10.20.x.2:5044)): dial tcp 10.20.x.2:5044: i/o timeout

Checking the connection from the 192.168.x.x server to the Logstash/Kibana servers:

[root@ ~]# nc -v 10.20.x.1 5044

Ncat: Version 7.50 ( https://nmap.org/ncat )

Ncat: Connection timed out.


[root@ ~]# nc -v 10.20.x.2 5044

Ncat: Version 7.50 ( https://nmap.org/ncat )

Ncat: Connection timed out.



[root@ ~]# nc -v 10.20.x.1 5601

Ncat: Version 7.50 ( https://nmap.org/ncat )

Ncat: Connection timed out.


[root@ ~]# nc -v 10.20.x.2 5601

Ncat: Version 7.50 ( https://nmap.org/ncat )

Ncat: Connection timed out.



[root@ ~]# nc -v 10.20.x.1 9200

Ncat: Version 7.50 ( https://nmap.org/ncat )

Ncat: Connection timed out.


[root@ ~]# nc -v 10.20.x.2 9200

Ncat: Version 7.50 ( https://nmap.org/ncat )

Ncat: Connection timed out.
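The same timeouts can also be reproduced without Ncat by using bash's built-in /dev/tcp pseudo-device — a sketch, with the thread's placeholder endpoints; `timeout` bounds each attempt so silently filtered ports fail fast instead of hanging:

```shell
check_port() {
  # Attempt a plain TCP connect via bash's /dev/tcp pseudo-device,
  # giving up after 5 seconds (a dropped packet would otherwise hang).
  if timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed or filtered"
  fi
}

# Placeholder endpoints as used throughout this thread; substitute real IPs.
for ep in 10.20.x.1:5044 10.20.x.2:5044 10.20.x.1:5601 10.20.x.1:9200; do
  check_port "${ep%%:*}" "${ep##*:}"
done
```

Anything reported "closed or filtered" here points at a firewall or routing problem rather than at the Beats configuration.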
  1. It's clear that I need to open ports 5044 and 5601 from the Filebeat server to the two Logstash servers, but do I need to open the Elasticsearch port 9200 as well?

Is there anything else required, apart from opening ports, for the Filebeat agent to successfully send events to the Logstash servers and connect to Kibana?

I need to send a port-opening request and do not want to miss anything in it.

  1. I have Kibana installed on two servers. Do I need to mention both Kibana IPs in the filebeat.yml above, the same way the two Logstash IPs are mentioned?

Thanks,

What is your Logstash input configuration?

It is this:

input {
  beats {
    port => 5044
  }
}

Can you confirm that port 5044 is open on the Logstash side? Maybe via nmap? For example, I use ports 5043-5044 for my Logstash inputs:

nmap -p5043-5044 192.168.0.7 -T5

Hi @zx8086,

Thanks for your reply.

I currently do not have the output of this, but I can confirm that the Logstash port and service are up, and other application servers where Filebeat is installed (with 10.20.x.x IPs) can connect to Logstash.

Thanks,

I'm not familiar with the "x" notation in the Logstash file.

Hi @zx8086,

Sorry, I just used "x" to hide the actual numbers. It is still a valid IP, like 10.20.20.20 for example.

Sorry for this.

Thanks,

And what about the Kibana errors concerning the API?

What if you disable this for troubleshooting?

setup.dashboards.enabled: true

You need an Elasticsearch output for some of the setup configurations. It's usually best to run this from the command line to set up the Kibana dashboards:

  sudo filebeat setup -e --strict.perms=false  \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['x.x.x.x:9200'] \
  -E output.elasticsearch.username=redacted \
  -E output.elasticsearch.password=redacted \
  -E setup.kibana.host=x.x.x.x:5601

I think the logs regarding the Kibana API are also pointing out that Filebeat cannot connect to the Kibana server on port 5601.

I did not get your point about disabling that setting.

I am doing all this via an automation tool and am trying to avoid anything manual.

Can you tell me what that command will do?

Let's try to fix one problem at a time manually, and when the issue is resolved you can automate.

It doesn't seem that you can run the Filebeat setup configurations against a Logstash output; it has to be an Elasticsearch output. Also, you usually run the setup command once with each new version upgrade, so you shouldn't have setup configurations, such as loading the dashboards, on all the Filebeat nodes, as they will run every time you start Filebeat.

https://www.elastic.co/guide/en/beats/filebeat/current/command-line-options.html

sudo filebeat setup -e --strict.perms=false  \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['x.x.x.x:9200'] \
  -E output.elasticsearch.username=redacted \
  -E output.elasticsearch.password=redacted \
  -E setup.kibana.host=x.x.x.x:5601

Also .... https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-reference-yml.html

# ================================= Dashboards =================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards are disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

Hi @zx8086

I think mentioning the Logstash output in Filebeat is not wrong.

We will still require the Logstash output to send events from Filebeat to Logstash for parsing; correct me if I am wrong. So maybe that is why the Logstash output is in the .yml file.

(I know we can also skip Logstash and send directly to Elasticsearch.)

While loading dashboards, Filebeat connects to Elasticsearch for a version check, and hence the Logstash output needs to be disabled, which is what that command is doing.

So with my wrong configuration above, I should not see any Filebeat dashboards in Kibana. Right?

  1. So you are saying that not every Filebeat instance has to load the dashboards (only one is enough)?
  2. And this command needs to run only once, and therefore it should not be in the config file, as that would load the dashboards every time the service restarts?

1. Correct. You only need to do this with each version. The templates, dashboards, etc. don't change within a version, so you only need to run this once per version, a single time.
2. Correct.

You can automate your configuration to use a filebeat.yml that has the Kibana setup disabled, and then run the command I provided on a single host to load the templates, ILM policies, and dashboards for an initial setup or a new upgrade.

We just use Ansible and tags (setup) to automate this: run it once on a node and, when that is done, you can run the same everywhere else.

@zx8086

Thanks,

I am not doing an upgrade; I am installing everything from scratch.

So when that command is run only once and on a single Filebeat instance (out of many), we don't require the following config in filebeat.yml, correct?

setup.dashboards.enabled: true
setup.kibana:
  host: "http://10.20.x.1:5601"
  username: elastic
  password: ${es_pwd}

But then why do you think they have given the option to mention this in filebeat.yml, if it loads every time the service restarts? In that case Elastic should have documented only the command, and not the option above.


Even if I run the command, I think it will still fail, as the connectivity is not there.

Can you also please reply to questions 1 and 2 asked in my first message?

Regardless of upgrading or starting from scratch, it's the same: you only need to load these things (templates, ILM policies, dashboards) once for the version you are using. They only potentially change with a version upgrade.

You can set your configuration and still run the command-line options, as those will be the variables used.

setup.dashboards.enabled: false
setup.kibana:
  host: "http://10.20.x.1:5601"
  username: elastic
  password: ${es_pwd}

My reasoning for this is that you have multiple errors in your Filebeat log. Getting to the root cause will help solve the initial issue. When you have multiple issues it is hard to see where the obvious problem is, so ideally let's resolve the simple ones first.

You have an issue loading the Kibana dashboards. The solution is to set setup.dashboards.enabled: false, because you usually cannot have those setup.* configurations in your filebeat.yml when you have configured your output to Logstash instead of Elasticsearch. The following allows you to set up your dashboards, ILM, templates, etc. without having to modify the setup.* options in filebeat.yml, which is why you pass those variables on the command line:

sudo filebeat setup -e --strict.perms=false \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['x.x.x.x:9200'] \
  -E output.elasticsearch.username=redacted \
  -E output.elasticsearch.password=redacted \
  -E setup.kibana.host=x.x.x.x:5601

Run this alone; the -e flag logs the output to the console. If there is an issue with your Kibana endpoint, which was failing in the logs we saw, this will verify that, along with your Elasticsearch endpoint, which is on the same network segment as your failing Logstash endpoints.

Filebeat isn't going to work if you can't even run the setup command, which loads all your templates and pipelines, successfully.

Those are two completely different networks; do you have connectivity between them? You need to check whether you have any route for connections between servers on those two networks.

From what you shared, your problem seems to be a network connectivity issue, with no relation to Logstash, Filebeat, or Kibana.

Try to check basic connectivity: start Logstash on one of the servers in the 10.20.X.X network and try to connect using telnet or netcat from the server in the 192.168.X.X network.

If you have no connection with telnet/netcat, then you have a network connectivity issue and will need to solve that first.

I didn't want to directly allude to a networking problem, given that it was communicated that all of this had been verified. Process of elimination and running the filebeat setup command would prove that none of the endpoints on that segment were reachable (Elasticsearch, Kibana, and the initial Logstash), while correcting some misunderstanding at the same time.

A traceroute/telnet from the Filebeat node to those endpoints would also suffice.

Hi @zx8086,

Thanks, I will go through your reply.

Hi @leandrojmp,

Thanks for your reply.

I think it's a network problem first, and then what @zx8086 is saying needs to be resolved.

  1. It's clear that the Logstash and Kibana ports need to be open. Do I need to connect the Filebeat instance to the Elasticsearch port as well? As far as I can see, Filebeat is not directly connecting to Elasticsearch, looking at filebeat.yml or the Logstash input configuration.

  2. Can you answer the Kibana question asked in my first message?

As leandrojmp said, resolve the network problem: confirm that the Elasticsearch, Kibana, and Logstash endpoints on ports 9200, 5601, and 5044 respectively are reachable.

Next, use the filebeat setup command to load the templates, dashboards, ILM policies, etc. The setup command needs an Elasticsearch output for this, and it only has to be done once. That is why it's advisable to run it via the command line as a one-off: you don't have to change the Filebeat configuration file twice, once to switch the output to Elasticsearch for the setup and then back to your Logstash output. All of this can be automated as part of a new installation / upgrade process.

You can then use Logstash as the output, and you will only need Elasticsearch in the Logstash pipeline output to index your documents.
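As a sketch, such a pipeline could look like the following — illustrative only; the Elasticsearch host, credentials, and settings are placeholders, not taken from this thread:

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    # Placeholder host and credentials; adjust for your cluster.
    hosts => ["x.x.x.x:9200"]
    user => "redacted"
    password => "redacted"
  }
}
```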

It would be irrelevant to re-examine the errors until the network situation is confirmed and the setup command has been run to verify that the endpoints are accessible from the Filebeat node.

Again, if possible, run nmap from the Filebeat node towards the servers running Elasticsearch, Logstash, and Kibana to see which ports are open.

Hi @zx8086,

Thank you one more time.

I got your point (and I think we are coming to the end of the discussion regarding Filebeat): the filebeat setup command above only needs to run once, from any Filebeat agent (I think it's best to run it even before configuring the filebeat.yml file through the automation tool), and after the command runs, the filebeat.yml above can be configured as-is but excluding the line below:

setup.dashboards.enabled: true

  1. But in this case, are the lines below required? I don't think so.

setup.kibana:
  host: "http://10.20.x.1:5601"
  username: elastic
  password: ${es_pwd}

  2. This is not related to the Filebeat config above or to the network error. If you know about this, can you please reply:

I have Kibana installed on two servers. Do we need to mention both Kibana IPs in the filebeat.yml above, the same way the two Logstash IPs are mentioned?

I have asked for the ports to be opened, which should happen soon, and then I will update here. Thanks.

Thanks,