Thank you @Badger for pointing me in the right direction. I didn't have a clue what was happening in the background. From this post I realised that every data stream is processed by every filter in all pipelines, and if we want to control the flow we need to use conditionals. But what happens if we have, for example, 50 servers (50 Filebeat agents)? What is the best practice in this situation?
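Just so my question is clear, this is roughly how I understand the conditional approach. The tag names, grok pattern, and index names below are only examples I made up to illustrate, not my real config (I'm assuming each Filebeat agent sets its own tag in filebeat.yml):

```
# Sketch of routing events with conditionals inside one pipeline.
# Each Filebeat adds a tag such as "app-a" or "app-b" (hypothetical names).
filter {
  if "app-a" in [tags] {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  } else if "app-b" in [tags] {
    json {
      source => "message"
    }
  }
}

output {
  if "app-a" in [tags] {
    elasticsearch { index => "app-a-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { index => "app-b-%{+YYYY.MM.dd}" }
  }
}
```

With 50 servers this means 50 branches like these, which is what makes me wonder whether conditionals are still the right approach at that scale.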
Reading articles, I found that the problem with multiple log files can be solved by running multiple instances of Filebeat. The post is from here. Could someone explain to me how this solves the problem?