We have configured Logstash on ServerA with its pipeline, and all Filebeat agents are sending data to ServerA successfully. To reduce delay and improve load management, we have set up a second Logstash server, ServerB, with separate pipeline.conf files.
ServerA PrimaryPipeline.conf listens on port 3101
ServerB Pipeline1.conf listens on port 3102
ServerB Pipeline2.conf listens on port 3103
Each pipeline has a different port.
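For reference, a split-pipeline layout like the one above would typically be declared in Logstash's pipelines.yml on ServerB (the file paths here are assumptions, not taken from the actual setup):

```yaml
# pipelines.yml on ServerB -- paths are assumptions
- pipeline.id: pipeline1
  path.config: "/etc/logstash/conf.d/Pipeline1.conf"
- pipeline.id: pipeline2
  path.config: "/etc/logstash/conf.d/Pipeline2.conf"
```

with each pipeline.conf opening its own Beats input on its own port:

```
input {
  beats {
    port => 3102   # Pipeline1.conf; Pipeline2.conf would use 3103
  }
}
```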
On the first agent server, server1, we have two filebeat.yml files. Filebeat1.yml points to the ServerA Logstash on port 3101 and uses the grok filters defined in PrimaryPipeline.conf.
On the same agent server, server1, the second Filebeat, Filebeat2.yml, points to the ServerB Logstash on port 3103 and uses the grok filters defined in Pipeline2.conf.
Both filebeat.yml files on server1 are sending data successfully.
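As background on running two Filebeat instances side by side on one host: each instance is normally started with its own config, data, and log paths so their registries don't collide. A sketch of how the second instance might be launched (the paths are assumptions, not from the original setup):

```
# Hypothetical: each Filebeat instance needs its own registry
# (path.data) and log directory, or the instances conflict.
filebeat -c /etc/filebeat/Filebeat2.yml \
  --path.data /var/lib/filebeat2 \
  --path.logs /var/log/filebeat2
```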
On the second agent server, Server2, we have three filebeat.yml files.
On Server2, Filebeat1.yml points to the ServerA Logstash on port 3101 and uses the grok filters defined in PrimaryPipeline.conf.
On the same agent server, the second Filebeat, Filebeat2.yml, points to the ServerB Logstash on port 3102 and uses the grok filters defined in Pipeline1.conf.
On the same agent server, the third Filebeat, Filebeat3.yml, points to the ServerB Logstash on port 3103 and uses the grok filters defined in Pipeline2.conf.
Server2 is successfully sending data via PrimaryPipeline.conf (port 3101) and Pipeline1.conf (port 3102).
Unfortunately, on Server2 no data is being sent for the logs defined in Filebeat3.yml, which uses the grok filters defined in Pipeline2.conf on ServerB (port 3103).
On servers where two pipelines are configured (ports 3101 and 3102), data is sent successfully.
We have noticed that on servers where three pipelines are configured, the third instance, Filebeat3.yml, is not sending any data to Pipeline2.conf via port 3103, while the other two filebeat.yml instances are sending data.
Thanks for your response.
Each pipeline just contains an input, its specific filtering, and an output.
Each filebeat.yml contains the log locations, the port, and the destination Logstash server details, nothing more.
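A minimal sketch of what such a filebeat.yml would look like (paths, hostname, and input type are placeholders, not the actual configuration):

```yaml
# Hypothetical sketch of Filebeat3.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # placeholder log location

output.logstash:
  hosts: ["ServerB:3103"]      # Pipeline2.conf listens on 3103
```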
The issue happens on servers where three Filebeat services coexist. Is there any specific configuration to follow when multiple Filebeat instances on the same server report to the same Logstash server via different pipelines (split pipelines)?
A Filebeat instance sends data successfully as long as it uses only one of the pipeline.conf files (split-pipeline method).
We are using multiple pipelines with different conditionals and different port numbers. I have already verified the conditionals; the same conditionals work for other servers. The issue exists only where three Filebeat services are running and using multiple pipeline.conf files to send data to Logstash. Each pipeline.conf contains more than 1,000 lines, so it is very difficult to paste here.
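For context, the conditionals mentioned would typically look something like this inside a pipeline.conf (field names and the grok pattern are assumptions for illustration only):

```
filter {
  # Hypothetical conditional on a custom field added by Filebeat
  if [fields][app] == "myapp" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{GREEDYDATA:msg}" }
    }
  }
}
```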
Filebeat uses filebeat.yml; how your Logstash configuration looks doesn't matter to Filebeat, as it has no knowledge of it. Logstash is just an output.
You need to share the filebeat.yml for the instances on the server that are not working. If you can't share it here, try using pastebin, for example; without seeing the configurations it is pretty hard to guess what the issue may be.
For your information: the same filebeat.yml works on another server. All the log files are on local disk, so the paths are the same. The only difference between the working and non-working setups is the server name in the Filebeat configuration.
We always receive the error below on the non-working Filebeat server:
beater/filebeat.go:178 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled.
If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
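As the message itself notes, that warning is benign when Logstash is the configured output. Separately, one detail worth checking in this situation (an assumption on my part, not confirmed by the thread): when three Filebeat services coexist on one host, each service usually needs its own unit with unique data and log paths, along the lines of this hypothetical systemd unit:

```ini
# Hypothetical unit for a third Filebeat instance; all paths are
# assumptions. Each instance needs a unique path.data and path.logs.
[Unit]
Description=Filebeat instance 3

[Service]
ExecStart=/usr/share/filebeat/bin/filebeat \
  -c /etc/filebeat/Filebeat3.yml \
  --path.data /var/lib/filebeat3 \
  --path.logs /var/log/filebeat3
Restart=always

[Install]
WantedBy=multi-user.target
```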