Filebeat multiple ports

I have 2 servers whose OASIS logs are being monitored with Filebeat.
I want the logs from both to be saved in the same index.

Should I ship them over different ports (server 1: 5044, server 2: 5045)?
Or can I use the same port for both servers?

If I use different ports, can I map them to the same index?
Kindly help me out.


If I understand your question correctly, you can send events from both Filebeat agents to the same port.
Since you run separate agents, each event carries information about the host where the agent runs.
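For example, a Filebeat 7.x event includes fields along these lines (the values here are placeholders), which is what lets you tell the two servers apart inside the shared index:

```json
{
  "@timestamp": "2019-12-11T06:41:53.380Z",
  "message": "an OASIS log line",
  "host": { "name": "app-server-1" },
  "agent": { "type": "filebeat", "version": "7.2.0", "hostname": "app-server-1" }
}
```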

Hi @ChrsMark,
I did try that, but data only comes in from one server's logs. I had both of them send to port 5044.


Which service do those ports belong to? Could you explain the exact setup, please? Are you using Filebeat to ship to Elasticsearch or to Logstash?
Also, you can check the logs of the Filebeat agents to see what is going wrong.


Hi @ChrsMark, apologies if I wasn't clear.
My setup goes like this: I have 2 servers where I installed Filebeat 7.2.0, and they send an application log to my Logstash 7.2.0, which runs on a different server. Logstash then pushes the data to my ES instance.
This is my Logstash conf file:

input {
  beats {
    port => 5044
    ssl  => false
  }
}

output {
  elasticsearch {
    hosts => [""]
    index => "oasis"
  }
  stdout { codec => rubydebug }
}

I have configured the Logstash output as

# The Logstash hosts
hosts: [""]
in both application servers.
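For context, a minimal filebeat.yml sketch for this setup (the log path and Logstash host below are placeholders to replace with your own; if no input is enabled, Filebeat collects nothing):

```yaml
filebeat.inputs:
  - type: log
    enabled: true                      # collection only happens if this is true
    paths:
      - D:\path\to\oasis\logs\*.log    # hypothetical path to the OASIS application log

output.logstash:
  # The Logstash hosts
  hosts: ["logstash-host:5044"]
```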

But I see data from only one of them when I check my ES index.

Thank you for providing the information!

Now we need to figure out why one of the Filebeats is not sending events. Some checks to do:

  1. Could you look into the Filebeat logs for errors/warnings?
  2. Is there anything interesting in the Logstash logs?
  3. Last but not least, we should make sure that the Logstash server is reachable from the node where the problematic Filebeat runs. You can use telnet for this.
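The telnet check in step 3 can also be scripted; here is a minimal Python sketch (the host name and port in the commented example are placeholders for your Logstash server and Beats port):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical hostname -- replace with your Logstash server:
# is_reachable("logstash.internal", 5044)
```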


I do not find any errors or warnings in the Filebeat logs.
As far as I can see, Logstash only has general warnings.

    [2019-12-11T06:41:53,380][WARN ][logstash.runner ] SIGTERM received. Shutting down.
    [2019-12-11T06:41:58,598][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[oasislogs]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.0-java/lib/logstash/inputs/beats.rb:204:in run'"}, {"thread_id"=>22, "name"=>"[oasislogs]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}, {"thread_id"=>23, "name"=>"[oasislogs]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}, {"thread_id"=>24, "name"=>"[oasislogs]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}, {"thread_id"=>25, "name"=>"[oasislogs]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}, {"thread_id"=>26, "name"=>"[oasislogs]>worker4", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}, {"thread_id"=>27, "name"=>"[oasislogs]>worker5", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}, {"thread_id"=>28, "name"=>"[oasislogs]>worker6", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}, {"thread_id"=>29, "name"=>"[oasislogs]>worker7", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in block in start_workers'"}]}} 
    [2019-12-11T06:41:58,602][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information. 
    [2019-12-11T06:41:59,876][INFO ][logstash.javapipeline ] Pipeline terminated {""=>"oasislogs"}
    [2019-12-11T06:42:00,572][INFO ][logstash.runner ] Logstash shut down.
    [2019-12-11T06:42:10,757][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
    [2019-12-11T06:42:14,859][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[]}}
    [2019-12-11T06:42:14,997][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>""}
    [2019-12-11T06:42:15,033][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
    [2019-12-11T06:42:15,035][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
    [2019-12-11T06:42:15,058][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//"]}
    [2019-12-11T06:42:15,111][INFO ][logstash.outputs.elasticsearch] Using default mapping template
    [2019-12-11T06:42:15,135][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
    [2019-12-11T06:42:15,138][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"oasislogs", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x67dc8344 run>"}
    [2019-12-11T06:42:15,176][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
    [2019-12-11T06:42:15,376][INFO ][ ] Beats inputs: Starting input listener {:address=>""}
    [2019-12-11T06:42:15,383][INFO ][logstash.javapipeline ] Pipeline started {""=>"oasislogs"}
    [2019-12-11T06:42:15,454][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:oasislogs], :non_running_pipelines=>}
    [2019-12-11T06:42:15,467][INFO ][] Starting server on port: 5044
    [2019-12-11T06:42:15,702][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

And now the only server that was pushing data has stopped, and no other data is being uploaded either. I tried changing the output to a file, but no data was written there either.

Lastly, I'm able to telnet to the logstash server from both the app servers.

Please let me know what I can try to resolve this.

Could you restart the Filebeats with filebeat -e -d "*" so as to get verbose output, and follow their logs?

We need to know why Filebeats are not sending events.

@ChrsMark Apologies for the delayed response,
below are the logs:

D:\ELK\filebeat720\filebeat720>filebeat -e -d "*"
2019-12-15T23:41:27.642-0500    INFO    instance/beat.go:606    Home path: [D:\ELK\filebeat720\filebeat720] Config path: [D:\ELK\filebeat720\filebeat720] Data path: [D:\ELK\filebeat720\filebeat720\data] Logs path: [D:\ELK\filebeat720\filebe
2019-12-15T23:41:27.642-0500    DEBUG   [beat]  instance/beat.go:658    Beat metadata path: D:\ELK\filebeat720\filebeat720\data\meta.json
2019-12-15T23:41:27.643-0500    INFO    instance/beat.go:614    Beat ID: e733c6b
2019-12-15T23:41:27.649-0500    DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:164    add_cloud_metadata: starting to fetch metadata, timeout=
2019-12-15T23:41:27.653-0500    DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:196    add_cloud_metadata: received disposition for qcloud after 2.9297ms. result=[provider:qcloud, error=failed requesting qcloud metadata: Get dial tcp: lookup meta no such host, metadata={}]
2019-12-15T23:41:30.675-0500    DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:196    add_cloud_metadata: received disposition for aws after 3.0097656s. result=[provider:aws, error=failed requesting aws metadata: Get http:// net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), metadata={}]
2019-12-15T23:41:30.675-0500    DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:203    add_cloud_metadata: timed-out waiting for all responses
2019-12-15T23:41:30.675-0500    DEBUG   [filters]       add_cloud_metadata/add_cloud_metadata.go:167    add_cloud_metadata: fetchMetadata ran for 3.0097656s
2019-12-15T23:41:30.675-0500    INFO    add_cloud_metadata/add_cloud_metadata.go:347    add_cloud_metadata: hosting provider type not detected.
2019-12-15T23:41:30.675-0500    DEBUG   [processors]    processors/processor.go:93      Generated new processors: add_host_metadata=[netinfo.enabled=[false], cache.ttl=[5m0s]], add_cloud_metadata=null
2019-12-15T23:41:30.675-0500    DEBUG   [seccomp]       seccomp/seccomp.go:88   Syscall filtering is only supported on Linux
2019-12-15T23:41:30.675-0500    INFO    [beat]  instance/beat.go:902    Beat info       {"system_info": {"beat": {"path": {"config": "D:\\ELK\\filebeat720\\filebeat720", "data": "D:\\ELK\\filebeat720\\filebeat720\\data", "home": "D:\\ELK\\filebeat720\\filebeat720", "logs": "D:\\ELK\\filebeat720\\filebeat720\\logs"}, "type": "filebeat", "uuid": "e733c6bb-f5bd-435d-a073-f9d14fd0ce81"}}}
2019-12-15T23:41:30.675-0500    INFO    [beat]  instance/beat.go:911    Build info      {"system_info": {"build": {"commit": "9ba65d864ca37cd32c25b980dbb4020975288fc0", "libbeat": "7.2.0", "time": "2019-06-20T15:05:29.000Z", "version": "7.2
2019-12-15T23:41:30.675-0500    INFO    [beat]  instance/beat.go:914    Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":2,"ver
2019-12-15T23:41:30.675-0500    INFO    [beat]  instance/beat.go:918    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-12-06T03:20:49.53-05:00","name":"ICDWPOASIWEB03","ip":["","::1/128","1"],"kernel_version":"6.1.7601.24520 (win7sp1_ldr_escrow.190828-1732)",:"Windows Server 2008 R2 Standard","version":"6.1","major":1,"minor":0,"patch":0
2019-12-15T23:41:30.675-0500    INFO    [beat]  instance/beat.go:947    Process info    {"system_info": {"process": {"cwd": "D:\\ELK\\filebeat720\\filebeat720", "exe": "D:\\ELK\\filebeat720\\filebeat-7.2.0-windows-x86_64\\filebeat.exe", "name": "filebeat.exe", "pid": 5808, "ppid": 6064, "start_time": "2019-12-15T23:41:
2019-12-15T23:41:30.675-0500    INFO    instance/beat.go:292    Setup Beat: filebeat; Version: 7.2.0
2019-12-15T23:41:30.675-0500    DEBUG   [beat]  instance/beat.go:318    Initializing output plugins
2019-12-15T23:41:30.675-0500    DEBUG   [publisher]     pipeline/consumer.go:137        start pipeline event consumer
2019-12-15T23:41:30.675-0500    INFO    [publisher]     pipeline/module.go:97
2019-12-15T23:41:30.675-0500    INFO    instance/beat.go:421    filebeat start r
2019-12-15T23:41:30.675-0500    DEBUG   [test]  registrar/migrate.go:159        isFile(D:\ELK\filebeat720\filebeat720\data\registry) -> false
2019-12-15T23:41:30.675-0500    DEBUG   [test]  registrar/migrate.go:159        isFile() -> false
2019-12-15T23:41:30.675-0500    DEBUG   [test]  registrar/migrate.go:152        isDir(D:\ELK\filebeat720\filebeat720\data\registry\filebeat) -> true
2019-12-15T23:41:30.675-0500    INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2019-12-15T23:41:30.675-0500    DEBUG   [service]       service/service_windows.go:72   Windows is interactive: true
2019-12-15T23:41:30.676-0500    DEBUG   [test]  registrar/migrate.go:159        isFile(D:\ELK\filebeat720\filebeat720\data\registry\filebeat\meta.json) -> true
2019-12-15T23:41:30.677-0500    DEBUG   [registrar]     registrar/migrate.go:51 Registry type '0' found
2019-12-15T23:41:30.678-0500    DEBUG   [registrar]     registrar/registrar.go:125      Registry file set to: D:\ELK\filebeat720\filebeat720\data\registry\fileb
2019-12-15T23:41:30.678-0500    INFO    registrar/registrar.go:145      Loading registrar data from D:\ELK\filebeat720\filebeat720\data\registry\filebeat\data.j
2019-12-15T23:41:30.679-0500    INFO    registrar/registrar.go:152      States Loaded from registrar: 0
2019-12-15T23:41:30.679-0500    WARN    beater/filebeat.go:358  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-12-15T23:41:30.680-0500    INFO    crawler/crawler.go:72   Loading Inputs:
2019-12-15T23:41:30.680-0500    DEBUG   [cfgfile]       cfgfile/reload.go:134   Checking module configs from: D:\ELK\filebeat720\filebeat720/modules.d/*.yml
2019-12-15T23:41:30.679-0500    DEBUG   [registrar]     registrar/registrar.go:278      Starting Registrar
2019-12-15T23:41:30.681-0500    DEBUG   [cfgfile]       cfgfile/reload.go:148   Number of module configs found: 0
2019-12-15T23:41:30.681-0500    INFO    crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 0
2019-12-15T23:41:30.681-0500    INFO    cfgfile/reload.go:172   Config reloader
2019-12-15T23:41:30.682-0500    DEBUG   [cfgfile]       cfgfile/reload.go:198   Scan for new config files
2019-12-15T23:41:30.682-0500    DEBUG   [cfgfile]       cfgfile/reload.go:217   Number of module configs found: 0
2019-12-15T23:41:30.682-0500    DEBUG   [reload]        cfgfile/list.go:62      Starting reload procedure, current runners: 0
2019-12-15T23:41:30.683-0500    DEBUG   [reload]        cfgfile/list.go:80      Start list: 0, Stop list: 0
2019-12-15T23:41:30.683-0500    INFO    cfgfile/reload.go:227   Loading of config files completed.
2019-12-15T23:42:00.692-0500    INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"syst
2019-12-15T23:42:30.693-0500    INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"syst

I also tried a file output in the Logstash conf file; there was no output there either. Could this be a problem with Logstash, maybe? Kindly help me out.



From the Filebeat output you sent, we cannot see any events being sent to Logstash. This makes me guess that maybe Filebeat is not collecting logs at all; note the line "Loading and starting Inputs completed. Enabled inputs: 0" in your output, which says no inputs are enabled.

I would suggest we check this first. You can test the Filebeat part on its own by leaving Logstash out for now and sending events to the console. See:

So with this we will see whether Filebeat is collecting logs, by checking whether events show up in the console.
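In filebeat.yml that means temporarily swapping the Logstash output for a console output, roughly like this (a sketch; the commented host is a placeholder, and you would run filebeat -e afterwards and watch the terminal):

```yaml
# output.logstash:
#   hosts: ["logstash-host:5044"]    # commented out for the test

output.console:
  pretty: true
```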

@ChrsMark, Sure. In the meantime, on one of the servers I reinstalled Filebeat entirely, and it seems to push data again. I will try the console output on the other server so that I can get to the root cause.

Nice! FYI: usually you can just delete the data folder in such cases to clean-start Beats, instead of removing the whole installation.
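On Windows, using the install path shown in the logs above, that clean start would look roughly like this (stop the Filebeat process or service first; note this wipes the registry, so Filebeat will re-read monitored files from the beginning):

```bat
rem Stop Filebeat first, then remove its state:
rd /s /q D:\ELK\filebeat720\filebeat720\data
```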


@ChrsMark, Thank you!
And I think I see something:
with this fresh installation as well, the data has stopped coming in after a point.
There were no changes made to any of the components.
Is there some scheduling or automatic refresh behaviour that I'm missing?


Not sure that I fully understand your question, but I have not seen something like this. Could you elaborate more?

Also, did you manage to capture any Filebeat logs?

@ChrsMark, I mean, there are times when I see no log data loading, and after some time it starts loading again. It doesn't seem very stable. I was just trying to understand the background.
And yes, I ran the console output, but I'm getting a lot of output in my console now, given that the data is flowing again.

Thanks for clarifying.

In order to know if it is Filebeat's problem we need to look into the logs.


@ChrsMark, I'll try to fetch the logs once the issue occurs again and post them here.

Thank you so much for helping me out so far :slight_smile:

No problem!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.