I'm lost as to why Beats isn't passing logs to Logstash

I really didn't want to open a new topic, but I can't seem to understand why my Logstash doesn't receive logs from Filebeat.

I'm new to this and used an ELK Docker stack (docker-elk) to set up the environment. After editing logstash.conf, I ran docker-compose up -d to start all the services.

logstash.conf:

    input {
      beats {
        port => 5044
      }
      tcp {
        port => 5000
      }
    }

    ## Add your filters / logstash plugins configuration here

    output {
      elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "**************"
        ecs_compatibility => disabled
      }
    }

I went to the server that I need the logs from, installed Filebeat, and edited the YAML file.

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog

- type: log
  enabled: true
  paths:
    - /var/log/openvpn/*.csv

- type: filestream
  enabled: false
  paths:
    - /var/log/openvpn/*.csv

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.kibana:
  host: "192.168.1.23:5601"

output.logstash:
  hosts: ["192.168.1.23:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

I enabled the logstash module, and testing the output with filebeat test output shows:

[screenshot: output test successful]

So I assume it works, but on the Kibana dashboard... there is nothing being output. Here is a message printed in the Filebeat log:

2021-04-19T15:07:47.382-0400    WARN    beater/filebeat.go:178  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-04-19T15:07:47.382-0400    ERROR   instance/beat.go:971    Exiting: Index management requested but the Elasticsearch output is not configured/enabled

After searching for help, I decided to change the hosts port to 5000 (just to see if anything would happen, and it partly worked). Now it shows updates, but they are not human-readable.

And here is a screenshot of the graph from when I changed back to port 5044.

Any input or help will be appreciated. It's probably a really novice mistake, but I couldn't find the answer by searching the forum.

Hi @rodrigoross, welcome to the community. These are always the hardest ones.

First, could you put your ports in quotes ("")?

        beats {
                port => "5044"
        }
        tcp {
                port => "5000"
        }

Also, what happens if you take out the tcp input and just leave the beats input?

Also any chance there's something else running on 5044 on that logstash host?
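
You could check with something like this on the Docker host (assuming ss is available; netstat -tlnp works too):

# show whatever process is already listening on the Beats port
sudo ss -tlnp | grep ':5044'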

Silly question: what happens if you do
docker-compose down
Then
docker-compose up

Hello @stephenb, thanks for the reply.

After doing as suggested, I restarted the Logstash service, and unfortunately there were still no logs on the dashboard.

However, it dawned on me that I hadn't been looking at the Docker logs after starting the services.

Well, here is the Docker log after adding quotes to the port numbers:

[2021-04-20T17:52:26,269][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-04-20T17:52:26,341][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x7b806acf run>"}
[2021-04-20T17:52:26,365][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x3e723fbb run>"}
[2021-04-20T17:52:26,383][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-04-20T17:52:27,613][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.24}
[2021-04-20T17:52:27,613][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.26}
[2021-04-20T17:52:27,685][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2021-04-20T17:52:27,696][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2021-04-20T17:52:27,986][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-04-20T17:52:28,005][INFO ][logstash.inputs.tcp      ][main][a9331ddcd0bdaa1761dfd3beb5ae2c88d604745bbba89b071d88870b151543b1] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>false}
[2021-04-20T17:52:28,067][INFO ][org.logstash.beats.Server][main][7f8c149d284a4416f0cb1c29d5cf0cb159ae7748dd8332f514028e3a13d8696d] Starting server on port: 5044
[2021-04-20T17:52:28,091][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2021-04-20T17:52:28,895][WARN ][logstash.outputs.elasticsearch][main][fcb19c2b98331e3c6d8baa610f7b25d236655dc4aec5f5475f15bf2bc52944bb] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x22e2e08a>], :response=>{"index"=>{"_index"=>"logstash-2021.04.15-000001", "_type"=>"_doc", "_id"=>"4X5p8HgBaq5DGkKDfkTR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id '4X5p8HgBaq5DGkKDfkTR'. Preview of field's value: '{name=vpn}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:558"}}}}}
[2021-04-20T17:52:28,897][WARN ][logstash.outputs.elasticsearch][main][fcb19c2b98331e3c6d8baa610f7b25d236655dc4aec5f5475f15bf2bc52944bb] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x68038a2c>], :response=>{"index"=>{"_index"=>"logstash-2021.04.15-000001", "_type"=>"_doc", "_id"=>"4n5p8HgBaq5DGkKDfkTR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id '4n5p8HgBaq5DGkKDfkTR'. Preview of field's value: '{name=vpn}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:525"}}}}}

And the last warning just keeps repeating itself.

And for the next suggestion, I removed the TCP input from logstash.conf and restarted the server. Here is the Docker log:

[2021-04-20T17:59:34,757][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-04-20T17:59:34,827][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0xcbd255d@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
[2021-04-20T17:59:34,827][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x6732b85b run>"}
[2021-04-20T17:59:34,874][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-04-20T17:59:36,147][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.31}
[2021-04-20T17:59:36,150][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.31}
[2021-04-20T17:59:36,238][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2021-04-20T17:59:36,243][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2021-04-20T17:59:36,275][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-04-20T17:59:36,387][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2021-04-20T17:59:36,487][INFO ][org.logstash.beats.Server][main][60b16309ead8528901d998f7f1d44876cc4a40bbea5f48f3d8a80ed557697ea8] Starting server on port: 5044
[2021-04-20T17:59:40,117][WARN ][logstash.outputs.elasticsearch][main][1d8f7f0a9ef7254345eb94470813e1f68b0e4e806942c57fabd6275b4d987581] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x7ddf2e56>], :response=>{"index"=>{"_index"=>"logstash-2021.04.15-000001", "_type"=>"_doc", "_id"=>"GH5w8HgBaq5DGkKDE0tC", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id 'GH5w8HgBaq5DGkKDE0tC'. Preview of field's value: '{name=vpn}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:555"}}}}}

The only major difference is that it didn't start the logstash.inputs.tcp listener.

And for good measure I did the docker-compose down. It stopped all services:
[screenshot: all services stopped]

And the connection from Filebeat to Logstash was refused.

[screenshot: connection refused]

After docker-compose up, the services started as usual, but Logstash still refused to receive input from Filebeat.

Should I try to set up the environment from scratch without using Docker?

Best regards.

You have a mapper parsing exception; I suspect that is why it is failing... basically you are trying to stuff some data into a field whose data type does not support it.

Are you familiar with mappings? Perhaps you should take a look and read about them.

Did you create a mapping, or just use the default?
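
For context on that "failed to parse field [host]" error: your existing logstash-* index already has [host] mapped as text, but Beats sends [host] as an object (the {name=vpn} in the log), and a field can't be both in the same index. One common workaround (just a sketch, not necessarily the right fix once we clean things up) is to flatten it in a Logstash filter:

filter {
  mutate {
    # keep the hostname as a plain string, then drop the object
    # so it no longer clashes with the existing text mapping
    rename => { "[host][name]" => "hostname" }
    remove_field => [ "host" ]
  }
}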

Also I am very curious why you turned off

`ecs_compatibility => disabled`
[2021-04-20T17:52:28,895][WARN ][logstash.outputs.elasticsearch][main][fcb19c2b98331e3c6d8baa610f7b25d236655dc4aec5f5475f15bf2bc52944bb] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x22e2e08a>], :response=>{"index"=>{"_index"=>"logstash-2021.04.15-000001", "_type"=>"_doc", "_id"=>"4X5p8HgBaq5DGkKDfkTR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id '4X5p8HgBaq5DGkKDfkTR'. Preview of field's value: '{name=vpn}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:558"}}}}}
[2021-04-20T17:52:28,897][WARN ][logstash.outputs.elasticsearch][main][fcb19c2b98331e3c6d8baa610f7b25d236655dc4aec5f5475f15bf2bc52944bb] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x68038a2c>], :response=>{"index"=>{"_index"=>"logstash-2021.04.15-000001", "_type"=>"_doc", "_id"=>"4n5p8HgBaq5DGkKDfkTR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id '4n5p8HgBaq5DGkKDfkTR'. Preview of field's value: '{name=vpn}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:525"}}}}}

Give this a try... then if it works we will talk.
The data will end up in a filebeat-* index.

So if you clean up everything ... everything ...

Then use this as your logstash.conf ....

In your filebeat.yml, have Filebeat point to Elasticsearch and run:
filebeat setup -e

Then point it back to logstash and run your stack with this pipeline (we will go back to the other stuff later)

################################################
# beats->logstash->es default config.
################################################
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}" 
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      user => "elastic"
      password => "secret"
    }
  }
}
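
To be explicit about the two phases in filebeat.yml (the addresses below are just the ones from your earlier posts, so treat them as assumptions):

# Phase 1: point at Elasticsearch and run `filebeat setup -e` once.
output.elasticsearch:
  hosts: ["192.168.1.23:9200"]
  username: "elastic"
  password: "secret"

# Phase 2: after setup succeeds, comment out output.elasticsearch above
# and uncomment this instead, so events flow through Logstash:
#output.logstash:
#  hosts: ["192.168.1.23:5044"]

Filebeat only allows one output at a time, which is why you swap rather than enable both.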

Hi. I really appreciate the help; I'm going to make the changes and will reply shortly.

Honestly, I jumped right into the installation, but now I will read further into the documentation about mappings.

I used the default installation from docker-elk. That is also why ecs_compatibility is set to disabled.

Sorry, I couldn't fit the whole update into the previous reply.

But there are some updates.

I literally dropped the old environment and set up a new one with the edits to logstash.conf and a new filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog

#- type: log
#  enabled: true
#  paths:
#    - /var/log/openvpn/*.csv

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.kibana:
  host: "192.168.1.23:5601"

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.23:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "secret"

The Filebeat log after running filebeat setup -e:

2021-04-20T17:37:05.705-0400    INFO    instance/beat.go:668    Beat ID: 31c85ee3-f3e0-4140-ad1f-4f6611c93bf5
2021-04-20T17:37:05.705-0400    INFO    [beat]  instance/beat.go:996    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "31c85ee3-f3e0-4140-ad1f-4f6611c93bf5"}}}
2021-04-20T17:37:05.705-0400    INFO    [beat]  instance/beat.go:1005   Build info      {"system_info": {"build": {"commit": "08e20483a651ea5ad60115f68ff0e53e6360573a", "libbeat": "7.12.0", "time": "2021-03-18T06:16:51.000Z", "version": "7.12.0"}}}
2021-04-20T17:37:05.705-0400    INFO    [beat]  instance/beat.go:1008   Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.15.8"}}}
2021-04-20T17:37:05.706-0400    INFO    [beat]  instance/beat.go:1012   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-04-09T18:02:13-04:00","containerized":false,"name":"vpn","ip":["127.0.0.1/8","::1/128","192.168.1.22/24","fe80::5054:ff:fe96:8be4/64","172.17.0.1/16","fe80::42:65ff:fee8:707a/64","fe80::acaf:4eff:fe52:555d/64","192.168.85.1/24","fe80::75ec:f495:858c:a045/64"],"kernel_version":"4.15.0-140-generic","mac":["52:54:00:96:8b:e4","02:42:65:e8:70:7a","ae:af:4e:52:55:5d"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.04.5 LTS (Bionic Beaver)","major":18,"minor":4,"patch":5,"codename":"bionic"},"timezone":"-04","timezone_offset_sec":-14400,"id":"2c43069cfd43444e98e47c2020bbb338"}}}
2021-04-20T17:37:05.707-0400    INFO    [beat]  instance/beat.go:1041   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/home/producao", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 4497, "ppid": 4496, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2021-04-20T17:37:04.950-0400"}}}
2021-04-20T17:37:05.708-0400    INFO    instance/beat.go:304    Setup Beat: filebeat; Version: 7.12.0
2021-04-20T17:37:05.709-0400    INFO    [index-management]      idxmgmt/std.go:184      Set output.elasticsearch.index to 'filebeat-7.12.0' as ILM is enabled.
2021-04-20T17:37:05.709-0400    INFO    eslegclient/connection.go:99    elasticsearch url: http://elasticsearch:9200
2021-04-20T17:37:05.709-0400    INFO    [publisher]     pipeline/module.go:113  Beat name: vpn
2021-04-20T17:37:05.711-0400    INFO    eslegclient/connection.go:99    elasticsearch url: http://elasticsearch:9200
2021-04-20T17:37:05.870-0400    INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.12.0
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.

2021-04-20T17:37:05.901-0400    INFO    [index-management]      idxmgmt/std.go:261      Auto ILM enable success.
2021-04-20T17:37:05.904-0400    INFO    [index-management.ilm]  ilm/std.go:139  do not generate ilm policy: exists=true, overwrite=false
2021-04-20T17:37:05.904-0400    INFO    [index-management]      idxmgmt/std.go:274      ILM policy successfully loaded.
2021-04-20T17:37:05.904-0400    INFO    [index-management]      idxmgmt/std.go:407      Set setup.template.name to '{filebeat-7.12.0 {now/d}-000001}' as ILM is enabled.
2021-04-20T17:37:05.904-0400    INFO    [index-management]      idxmgmt/std.go:412      Set setup.template.pattern to 'filebeat-7.12.0-*' as ILM is enabled.
2021-04-20T17:37:05.904-0400    INFO    [index-management]      idxmgmt/std.go:446      Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.12.0 {now/d}-000001} as ILM is enabled.
2021-04-20T17:37:05.904-0400    INFO    [index-management]      idxmgmt/std.go:450      Set settings.index.lifecycle.name in template to {filebeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2021-04-20T17:37:05.908-0400    INFO    template/load.go:183    Existing template will be overwritten, as overwrite is enabled.
2021-04-20T17:37:08.178-0400    INFO    template/load.go:117    Try loading template filebeat-7.12.0 to Elasticsearch
2021-04-20T17:37:08.684-0400    INFO    template/load.go:109    template with name 'filebeat-7.12.0' loaded.
2021-04-20T17:37:08.684-0400    INFO    [index-management]      idxmgmt/std.go:298      Loaded index template.
2021-04-20T17:37:08.687-0400    INFO    [index-management]      idxmgmt/std.go:309      Write alias successfully generated.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2021-04-20T17:37:08.688-0400    INFO    kibana/client.go:119    Kibana url: http://elasticsearch:5601

The new filebeat-* index pattern appeared, and it is logging /var/log/syslog from the client.

I don't know if it was intended, but now the Logstash service isn't running:

[2021-04-20T21:30:18,618][ERROR][logstash.javapipeline    ][main][6343e707a843e0a33ea4e9006c0672156e8ebce3e38ad7bbab2f83fed8b4dfeb] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats port=>5044, id=>"6343e707a843e0a33ea4e9006c0672156e8ebce3e38ad7bbab2f83fed8b4dfeb", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_810f74c2-2bbf-4def-928e-d4f6a4fc9027", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>4>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:455)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:447)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:550)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:248)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:834)
[2021-04-20T21:30:18,874][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-20T21:30:18,888][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-20T21:30:19,621][INFO ][org.logstash.beats.Server][main][6343e707a843e0a33ea4e9006c0672156e8ebce3e38ad7bbab2f83fed8b4dfeb] Starting server on port: 5044
[2021-04-20T21:30:20,109][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
[2021-04-20T21:30:23,986][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-20T21:30:24,003][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-20T21:30:25,332][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}

And the connection to port 5044 is refused.

Thanks in advance.

Ok @rodrigoross

I think you need to perhaps slow down. :slight_smile:

First I said

In your filebeat.yml, have Filebeat point to Elasticsearch and run:
filebeat setup -e

Then point it (filebeat.yml) back to logstash and run your stack with this pipeline (we will go back to the other stuff later)

You left Filebeat pointing directly at Elasticsearch, which is OK, but it is not using Logstash ... just direct filebeat -> elasticsearch.

Now ... when I said use my logstash.conf, I expected you to fill in the correct endpoints.

Next

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://localhost:9200" <!-- Probably "http://elasticsearch:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}" 
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "http://localhost:9200" <!-- Probably "http://elasticsearch:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      user => "elastic"
      password => "secret"
    }
  }
}

Keep trying! And take out the following; ECS is good stuff.

ecs_compatibility => disabled

@rodrigoross how's it going?

Good afternoon, sorry for the late reply; today is a holiday in the city where I live, and the VPN policy locked me out of the company network. Tomorrow morning I'll be able to make progress, so I just took half of the day to read examples of mappings and more of the documentation.

Have a nice day.

Oh, no worries, I was just curious... please don't rush ... or even respond on my account :slight_smile:

Hello again.

Sorry, English isn't my native language; I might have misunderstood the instructions.

Going back.

I assumed that I should have used:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.23:9200"]
  username: "elastic"
  password: "secret"

It configured filebeat -> elasticsearch, but I guessed you were saying that I just needed to make Filebeat output to Logstash using the Elasticsearch port:

output.logstash:
  hosts: ["192.168.1.23:9200"]

However, running the configuration above (with the output.elasticsearch section fully commented out), filebeat setup -e returns:

INFO    instance/beat.go:304    Setup Beat: filebeat; Version: 7.12.0
INFO    [publisher]     pipeline/module.go:113  Beat name: vpn
WARN    beater/filebeat.go:178  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
ERROR   instance/beat.go:971    Exiting: Index management requested but the Elasticsearch output is not configured/enabled
Exiting: Index management requested but the Elasticsearch output is not configured/enabled

Okay, now to the updates.

I did point it at Elasticsearch and executed filebeat setup -e (which I believe was unnecessary, since I had already executed it when pointing Filebeat to output directly to Elasticsearch).

I changed the Filebeat output back to the Logstash port 5044 and restarted the service.

And voilà!

The filebeat-* index is running through the Logstash pipeline now, and I believe that is what you intended me to do.

About the Logstash service not running, as I mentioned before: I had made a backup copy in the Logstash pipeline folder as backup-logstash.conf. Logstash loads every file in that folder into the pipeline, so the backup also tried to open port 5044, which is probably why I was getting the "Address already in use" error and Logstash was crashing.

After moving it to another folder, Logstash worked normally.
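
Something like this (using the container path shown in the Logstash logs; in docker-elk it is the mounted logstash/pipeline folder on the host):

# Logstash loads every file in the pipeline directory, so the backup
# has to live somewhere it will not be picked up
mv /usr/share/logstash/pipeline/backup-logstash.conf ~/backup-logstash.conf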


Sounds good!!!

And please don't worry about the English; I have so much respect for multilingual people. I only speak English and I'm still bad at it :slight_smile:

The overall point was: when you run setup, Filebeat needs to point to Elasticsearch, including the correct port; afterwards, when you want to run through Logstash, you need Filebeat to point to Logstash, including the correct port.

Setup does a bunch of stuff on the Elasticsearch side; that's why it needs to be pointing at Elasticsearch.

But then when you actually want the data to flow through logstash you point filebeat to logstash.

This is a common pattern.

Setup only needs to be run once, whether you have one host or 1,000 hosts.
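
As a minimal recap of the whole flow (the systemd restart assumes a package install, as on your Ubuntu host):

# one time, with output.elasticsearch enabled in filebeat.yml:
filebeat setup -e

# then switch filebeat.yml to output.logstash (port 5044) and restart:
sudo systemctl restart filebeat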

Looks like you're working! When you have more questions, open a new thread, and good luck!

