Can Filebeat and Metricbeat use the same Logstash port at the same time?

Hello, I am new to Metricbeat. I have a system set up running Filebeat ---> Logstash ---> ES ---> Kibana. Now I want to configure Metricbeat and send its data to Logstash as well. So I stopped Filebeat and started Metricbeat to test it; it was working fine and sending data. But when I started Filebeat again I see the below in its logs.

2017-07-18T18:12:51Z ERR Connecting error publishing events (retrying): dial tcp x.x.x.x:5044: i/o timeout
2017-07-18T18:13:13Z INFO No non-zero metrics in the last 30s
2017-07-18T18:13:43Z INFO No non-zero metrics in the last 30s
2017-07-18T18:14:13Z INFO No non-zero metrics in the last 30s
2017-07-18T18:14:21Z ERR Connecting error publishing events (retrying): dial tcp x.x.x.x:5044: i/o timeout
2017-07-18T18:14:43Z INFO No non-zero metrics in the last 30s

Is it because Metricbeat is using this port? Please suggest what else I should check. Thanks

Logstash acts as a server, and Filebeat/Metricbeat connect and publish data to this server. One can connect many beats to Logstash. It might be some other network issue or a configuration problem in Logstash.

Have you checked the Logstash logs?
Do you have any firewall/network rules in place limiting packet rates or active connections?
Do you get errors from both beats?

Hey, yes I agree with you, but I was confused by the behavior of the beats.
Right now I have Filebeat and Metricbeat. Whichever I start first connects to the port, and the later one gets:
ERR Connecting error publishing events (retrying): dial tcp x.x.x.x:5044: i/o timeout

Right now I have started just Metricbeat, which is able to communicate with Logstash, and Filebeat is throwing the errors.
Can I know more about how to check for limits on packet rates or active connections?

I will upload all the logs below. Thanks a lot for your time :slight_smile:

Can I know more about how to check for limits on packet rates or active connections?

You would have to ask your administrator whether any limits are enforced in the network equipment or on the host Logstash is running on. E.g. the listen system call accepts a backlog parameter for the number of outstanding connections; the maximum backlog can normally be configured at the system level.

As a simple test, try opening multiple connections with telnet and see how it goes.

You can also trace the connection attempts with wireshark/tshark/tcpdump.

I am also facing a similar problem. I have to monitor the health of 4 different servers, so I am running Metricbeat separately on those instances and directing the data to a single Logstash set up on a different server. I am unable to figure out how to use Metricbeat to send data from different servers at the same time using a single port; or please suggest a different way of doing this. Also, in my Logstash input filter I am unable to separate those events so that I can place them in different indexes.

Please help me.

Hey

Well, you can configure all 4 Metricbeat instances to send data to the same Logstash port. Logstash will take care of receiving the data, and it should not be a problem. Once you start Metricbeat, check its logs for errors; if there are none, you're good.
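
For example, a minimal metricbeat.yml output section on each of the 4 servers could look like the sketch below (the hostname is a placeholder; point it at your own Logstash instance):

# metricbeat.yml on each server (sketch -- hostname is a placeholder)
output.logstash:
  # every Metricbeat instance can point at the same Logstash host:port
  hosts: ["logstash.example.com:5044"]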

But if you want to build dashboards on top of the Metricbeat data, I would suggest streaming it directly to Elasticsearch, letting it create a new metricbeat index, and importing the sample dashboards. That worked for me.
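
As a rough sketch, that direct path in metricbeat.yml would look something like this (the URL is a placeholder for your own Elasticsearch node):

# metricbeat.yml (sketch -- URL is a placeholder)
output.elasticsearch:
  # ship metrics straight to Elasticsearch; data lands in the metricbeat-* indexes
  hosts: ["http://elasticsearch.example.com:9200"]

Depending on your Beats version, the sample dashboards can then be loaded with the import_dashboards script that ships with Metricbeat.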

But how do I configure the metricbeat.yml files that are installed and running on the different server instances? I can't use the same beats port 5044 to send data from every beat configuration file. All the beats are running at the same time here and pointing at a single Logstash installed on a different EC2 instance, so port 5044 can't be used by every running Metricbeat.

Also, once this issue is resolved, the next issue is: when Logstash receives these chunks of data from the different servers via beats, how shall I separate them into different indexes, assigning a separate index for each server name? As we are scaling our Elasticsearch cluster, for continuous monitoring people suggest indexing and storing data on different data nodes and allocating proper master nodes too. What changes would then be required in my Logstash config file and in the other config files as well?

Please help.

so port 5044 can’t be used by every running metricbeat.
I think it can be used, and Logstash will take care of receiving the data. I have a setup where more than 15 beats (mostly network beats and Filebeat) are sending data to the same port.

how shall I separate them into different indexes, assigning a separate index for each server name?
In this case I think you have to create separate indexes. But like I said, my preferred approach would be:

Filebeat (logs) ---> Logstash ---> Elasticsearch ---> Kibana

Metricbeat ---> Elasticsearch ---> Kibana

Please do correct me if my suggestion is wrong. Thanks

I think it can be used, and Logstash will take care of receiving the data.

Thanks for the info. But I'm still in doubt about how to separate the different data coming from the different Metricbeat instances. How do I finally edit the Logstash config file or the different metricbeat.yml files, either associating a different "type" with each beat or maybe something else?

how shall I separate them into different indexes, assigning a separate index for each server name?
In this case I think you have to create separate indexes. But like I said, my preferred approach would be ...

For this part I understood the process, but here I mainly want to scale things up by adding extra nodes to the cluster and assigning master nodes as well. The concept of load balancing also comes into play, by assigning replicas or extra nodes. For each machine the data is coming from, I would like a separate node to be used (or please suggest another option), as continuous real-time health monitoring has to be done, which requires adequate index space (GBs of data) and provisions to prevent any kind of failure too. So now you should see my actual problem: how to allocate extra data and master nodes and direct the incoming traffic from the beats to separate nodes accordingly.

@steffens and @Raghunandan_Sk, I would appreciate it if both of you could give some input on my queries.

  1. Logstash acts as a server to Filebeat/Metricbeat and any other beat. You can think of the protocol as a very, very, very simplified version of HTTP with a persistent TCP connection, used to stream logs in batches. Normally one can connect hundreds of beats to Logstash. Limits on the number of connections to Logstash are nothing the beats have any control over, and normally Logstash doesn't care either. The cause is somewhere in the network or in some OS settings (e.g. ulimit, intermediate proxies/load balancers/NAT).
    If you find something is limiting the number of concurrent connections, you can try/test adding additional connections via telnet/nc. Keep in mind that Logstash has a client inactivity timeout of 30 seconds; increase this to a much bigger value.
    If there really is a limit on the number of connections a single port is allowed to serve, you can try opening another port for Metricbeat (just configure the beats input plugin a second time); see the input sketch below this list.
    So far you haven't mentioned the Logstash/beats versions in use, nor the error messages from the beats. This would be helpful to assess whether you are actually facing the same problem, or whether your problem differs in subtle ways (networking can go wrong in so many ways :frowning: ).

  2. Beats send a lot of metadata you can use for filtering and routing events. For example, our Getting Started Guide's Logstash settings show how a different index is configured in Logstash per beat type; see the output sketch below this list. @metadata.beat contains the beat type (filebeat or metricbeat), and @metadata.type contains the _type the beat would normally use when indexing directly into Elasticsearch. The sample config gives you the same result as if the beat pushed directly to Elasticsearch.
    Normally all Metricbeat data is put into the same metricbeat index. In Kibana or the ES API you filter, e.g. on the beat.hostname or beat.name fields, to get the actual host you are looking for. If you really want a separate index per hostname, you can change the index setting to: index => "%{[@metadata][beat]}-%{[beat][name]}-%{+YYYY.MM.dd}"
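
Regarding point 1, if you do decide to open a second port for Metricbeat, a minimal Logstash input sketch could look like this (port numbers and the timeout value are just examples, not recommendations):

# logstash pipeline input (sketch) -- a second beats {} block opens an extra port,
# and client_inactivity_timeout raises the idle-connection timeout mentioned above
input {
  beats {
    port => 5044                       # existing port, e.g. used by Filebeat
    client_inactivity_timeout => 300   # seconds
  }
  beats {
    port => 5045                       # example second port for Metricbeat
    client_inactivity_timeout => 300
  }
}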
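
Regarding point 2, the per-beat index routing from the Getting Started guide looks roughly like the sketch below (the Elasticsearch URL is a placeholder; the commented line shows the per-hostname variant):

# logstash pipeline output (sketch) -- one index per beat type, named after @metadata.beat
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]                # placeholder
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"    # e.g. metricbeat-2017.07.18
    document_type => "%{[@metadata][type]}"
    # separate index per hostname instead:
    # index => "%{[@metadata][beat]}-%{[beat][name]}-%{+YYYY.MM.dd}"
  }
}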

For load balancing, see the hosts setting of the Logstash output plugin; a sketch follows below. The gist is: only include data/client nodes in the URLs, and the plugin will load-balance batches of events across them for you. If you have more questions about filtering/event routing, please open another discussion in the beats or Logstash forum.
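
Assuming this refers to the hosts list of the elasticsearch output in Logstash, a minimal sketch would be (node names are placeholders; list only data/client nodes here, not dedicated masters):

# logstash elasticsearch output (sketch) -- event batches are balanced across the listed nodes
output {
  elasticsearch {
    hosts => ["http://es-data-1.example.com:9200", "http://es-data-2.example.com:9200"]
  }
}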

Let's keep this discussion about multiple beats having problems connecting to Logstash, as mixing too many discussions in one topic becomes confusing for me and for other users looking up existing discussions for problems they might face.

As @Raghunandan_Sk already said, the Metricbeat data is already structured and there is rarely a need to modify it. Pushing directly to Elasticsearch can simplify things quite a lot (fewer moving parts).


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.