Problem with filebeat

Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://localhost:9200: Get "http://localhost:9200": EOF]

Hi @miladmohabati, welcome to the community.

You are going to need to provide a lot more information if you want help.

Is Elasticsearch running on HTTPS?

Did you try:

curl -k -v -u elastic https://localhost:9200
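
(Here, -k skips TLS certificate verification, -v prints the verbose request/response, and -u elastic makes curl prompt for the elastic user's password.)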

What version of the Elastic Stack?

How was/is Elasticsearch installed?

You will need to share your full filebeat.yml

You will need to share your elasticsearch.yml

Copied in from a direct message.
The indentation in the .ymls is not correct, but I am assuming those are cut-and-paste errors.

#curl -k -v -u elastic http://localhost:9200

Enter host password for user 'elastic':

* Trying 127.0.0.1:9200...
* Connected to localhost (127.0.0.1) port 9200 (#0)
* Server auth using Basic with user 'elastic'

> GET / HTTP/1.1
> Host: localhost:9200
> Authorization: Basic ZWxhc3RpYzpuTVMtamlmZUNERnNZNHdqWjRoeg==
> User-Agent: curl/7.81.0
> Accept: */*

* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server

This:

#curl -k -v -u elastic http://localhost:9200

needs to be:

#curl -k -v -u elastic https://localhost:9200
...........................^

Elasticsearch is answering over TLS, which is why the plain-HTTP request gets the empty reply / EOF.

Do that and make sure curl connects to Elasticsearch; if it does not, Filebeat will not work either.

and filebeat.yml:

filebeat.inputs:

- type: filestream
id: my-filestream-id
enabled: false
paths:
  - /var/log/*.log
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
setup.template.settings:
index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "nMS-jifeCDFsY4wjZ"

processors:

- add_host_metadata:
when.not.contains.tags: forwarded
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~

and elasticsearch.yml:

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.10.208
http.port: 9200

xpack.security.enabled: true

xpack.security.enrollment.enabled: true

xpack.security.http.ssl:
enabled: true
keystore.path: certs/http.p12

xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
keystore.path: certs/transport.p12
truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["srvsi"]

http.host: 0.0.0.0

Your Elasticsearch is running on HTTPS, so your filebeat.yml output section needs to be:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "nMS-jifeCDFsY4wjZ"
  ssl.verification_mode: "none"
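
ssl.verification_mode: none is fine for getting connected, but it disables certificate checking. Once it works, a sketch of a verified setup is to point Filebeat at the CA that Elasticsearch generated; the path below assumes the default 8.x deb install:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "nMS-jifeCDFsY4wjZ"
  # assumption: auto-generated HTTP CA location for a deb install
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/http_ca.crt"]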

If that works, come back.

After the change:

curl -k -v -u elastic https://localhost:9200

Enter host password for user 'elastic':
*   Trying 127.0.0.1:9200...
* Connected to localhost (127.0.0.1) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=srvsi
*  start date: Nov 10 15:41:53 2023 GMT
*  expire date: Nov  9 15:41:53 2025 GMT
*  issuer: CN=Elasticsearch security auto-configuration HTTP CA
*  SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.
* Server auth using Basic with user 'elastic'
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/1.1
> Host: localhost:9200
> Authorization: Basic ZWxhc3RpYzpuTVMtamlmZUNERnNZNHdqWjRoeg==
> User-Agent: curl/7.81.0
> Accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-elastic-product: Elasticsearch
< content-type: application/json
< content-length: 529
<
{
  "name" : "srvsi",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "oDXqeolmTLK7ELDLYZSoFg",
  "version" : {
    "number" : "8.11.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "d9ec3fa628c7b0ba3d25692e277ba26814820b20",
    "build_date" : "2023-11-04T10:04:57.184859352Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host localhost left intact

My filebeat.yml:
root@srvsi:/etc/filebeat# cat filebeat.yml

filebeat.inputs:

- type: filestream
  id: my-filestream-id
  enabled: false
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.elasticsearch:
  hosts: ["localhost:9200"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

OK, curl works... great. What that shows is that Elasticsearch is up and running and has security applied. That is good.

So now your Filebeat needs to use HTTPS and authentication.

So the output section in the filebeat.yml should look something like this.
See if it works:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "nMS-jifeCDFsY4wjZ"
  ssl.verification_mode: none

Fixed

Ohh, also...

You do not have any inputs enabled. The one above is set to false.

You need to set that to:

enabled: true

Or enable a module; see the commands just below.
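
For example, modules can be listed and enabled with the Filebeat CLI (the system module here is just an illustration):

sudo filebeat modules list
sudo filebeat modules enable system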

Hi, after the change:

sudo filebeat setup

Overwriting lifecycle policy is disabled. Set setup.ilm.overwrite: true to overwrite.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused (status=0). Response:

And
service filebeat status

× filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/lib/systemd/system/filebeat.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-11-13 06:21:52 UTC; 2min 10s ago
Docs: Filebeat: Lightweight Log Analysis & Elasticsearch | Elastic
Process: 2654 ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS (code=exited,>
Main PID: 2654 (code=exited, status=1/FAILURE)
CPU: 186ms

Nov 13 06:21:52 srvsi systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 5.
Nov 13 06:21:52 srvsi systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Nov 13 06:21:52 srvsi systemd[1]: filebeat.service: Start request repeated too quickly.
Nov 13 06:21:52 srvsi systemd[1]: filebeat.service: Failed with result 'exit-code'.
Nov 13 06:21:52 srvsi systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..

Please repost your entire filebeat.yml
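
Also note: the systemd status output truncates the underlying error. Running Filebeat in the foreground prints it in full:

sudo filebeat -e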

filebeat.inputs:

- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "nMS-jifeCDFsY4whz"
  ssl.verification_mode: none

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

You need to set up the Kibana config correctly... do you have Kibana running?
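
A quick way to check whether Kibana is actually listening (assuming ss and curl are available; <kibana-host> is a placeholder for whatever address Kibana is bound to):

ss -ltnp | grep 5601
curl -v http://<kibana-host>:5601/api/status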

setup.kibana:
  host: "https://localhost:5601"
  username: "elastic"
  password: "nMS-jifeCDFsY4whz"
  ssl.verification_mode: none

Try running setup again

Yes, Kibana is running.

kibana.yml:

server.port: 5601
server.host: "192.168.10.208"
elasticsearch.hosts: ['https://192.168.10.208:9200']
logging.appenders.file.type: file
logging.appenders.file.fileName: /var/log/kibana/kibana.log
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
pid.file: /run/kibana/kibana.pid
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2OTk2NDI1NzcyNjM6NVJheXk2UzRUZ2ladjJhbmhTMVlTdw
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1699642578571.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://192.168.10.208:9200'], ca_trusted_fingerprint: 02920cabbc3c696b7ca6cfee3762adec9749b275d7f67c610bbe75f972b91585}]

OK, you need to set up Kibana correctly in the filebeat.yml (I had a syntax error above).
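
Given the kibana.yml above (server.host: 192.168.10.208 and no server.ssl settings, so Kibana is serving plain HTTP), the corrected block is presumably something like:

setup.kibana:
  host: "http://192.168.10.208:5601"
  username: "elastic"
  password: "nMS-jifeCDFsY4whz"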

Thanks Stephenb, my problem is solved.

sudo filebeat setup
Overwriting lifecycle policy is disabled. Set setup.ilm.overwrite: true to overwrite.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Loaded Ingest pipelines

########################################################

Now, after starting Filebeat:

sudo service filebeat status
× filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2023-11-14 07:31:51 UTC; 3s ago
Docs: Filebeat: Lightweight Log Analysis & Elasticsearch | Elastic
Process: 2286 ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS (code=exited,>
Main PID: 2286 (code=exited, status=1/FAILURE)
CPU: 207ms

Nov 14 07:31:50 srvsi systemd[1]: filebeat.service: Failed with result 'exit-code'.
Nov 14 07:31:51 srvsi systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 5.
Nov 14 07:31:51 srvsi systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Nov 14 07:31:51 srvsi systemd[1]: filebeat.service: Start request repeated too quickly.
Nov 14 07:31:51 srvsi systemd[1]: filebeat.service: Failed with result 'exit-code'.
Nov 14 07:31:51 srvsi systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..

And:
journalctl -u filebeat.service

Nov 14 06:38:52 srvsi filebeat[17933]: {"log.level":"info","@timestamp":"2023-11-14T06:38:52.492Z","log.logger":"registrar","log.origin":{"file.n>
Nov 14 06:38:52 srvsi filebeat[17933]: {"log.level":"info","@timestamp":"2023-11-14T06:38:52.518Z","log.logger":"monitoring","log.origin":{"file.>
Nov 14 06:38:52 srvsi filebeat[17933]: {"log.level":"info","@timestamp":"2023-11-14T06:38:52.519Z","log.logger":"monitoring","log.origin":{"file.>
Nov 14 06:38:52 srvsi filebeat[17933]: {"log.level":"info","@timestamp":"2023-11-14T06:38:52.519Z","log.logger":"monitoring","log.origin":{"file.>
Nov 14 06:38:52 srvsi filebeat[17933]: {"log.level":"info","@timestamp":"2023-11-14T06:38:52.519Z","log.origin":{"file.name":"instance/beat.go",">
Nov 14 06:38:52 srvsi filebeat[17933]: {"log.level":"error","@timestamp":"2023-11-14T06:38:52.519Z","log.origin":{"file.name":"instance/beat.go",>
Nov 14 06:38:52 srvsi filebeat[17933]: Exiting: Failed to start crawler: creating module reloader failed: could not create module registry for fi>
Nov 14 06:38:52 srvsi systemd[1]: filebeat.service: Main process exited, code=exited, status=1/FAILURE
Nov 14 06:38:52 srvsi systemd[1]: filebeat.service: Failed with result 'exit-code'.
Nov 14 06:38:52 srvsi systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 1.

filebeat -e

{"log.level":"error","@timestamp":"2023-11-14T08:24:13.135Z","log.origin":{"file.name":"instance/beat.go","file.line":1307},"message":"Exiting: Failed to start crawler: creating module reloader failed: could not create module registry for filesets: module netflow is configured but has no enabled filesets","service.name":"filebeat","ecs.version":"1.6.0"}
Exiting: Failed to start crawler: creating module reloader failed: could not create module registry for filesets: module netflow is configured but has no enabled filesets

You enabled the netflow module, but you need to edit and configure the input.

If you read the error messages, they are generally pretty clear and instructive.

The file is

modules.d/netflow.yml

So stop Filebeat, edit the file above, and start it again.
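
For reference, a minimal modules.d/netflow.yml with the input enabled looks something like this; the host and port below are the documented defaults, so adjust them to wherever your devices export flows:

- module: netflow
  log:
    enabled: true
    var:
      # assumptions: default listen address and port for the netflow input
      netflow_host: localhost
      netflow_port: 2055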

Thanks Stephen, my problem is resolved.
Now: I cannot see the Netflow graph.

Well, is there any netflow data to be collected?

What is your configuration?

Spread the time range out to the last 30 days...

Go to Discover and see if there is data...

Keep looking / share your config

Thanks Stephen, my problem is resolved.

Now, in Discover I see that the timestamp and log_date are different.
How can I fix it so that both show the same time?
Thanks.