[Need Help] Filebeat and Logstash in multiple instances

(Amit Joshi) #1

Hello there,

I am new to ELK stack.
We are designing a system in which we will check the performance of our product via product logs on a weekly basis and draw some visualizations on a Kibana dashboard.

For this reason, we are expecting an index to be created per week so that we can analyze the results of past runs. This seems fairly straightforward.
However, the twist comes in when I need to trigger the performance run of our product, ON DEMAND, which will generate its own logs that I expect to be loaded into a separate index. What if Filebeat and Logstash are not yet done processing the previous run's logs?
I believe they will load the new data into the existing index.

To solve this, the approach I am thinking of is to spawn multiple instances of Logstash and Filebeat, each with its own configuration (different path.data, different index names, and so on). This way, I will have separate indices, each with its own data.
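For reference, running a second Filebeat instance alongside the regular one would mean starting each with its own config file and data directory, something like this (the paths are just examples):

```shell
# Each Filebeat instance needs its own config and its own path.data,
# otherwise the two instances will fight over the registry files.
filebeat -c /etc/filebeat/filebeat-weekly.yml --path.data /var/lib/filebeat-weekly
filebeat -c /etc/filebeat/filebeat-ondemand.yml --path.data /var/lib/filebeat-ondemand
```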

Can someone please suggest a better approach to achieve this?

(Ken Harvey) #2

There are a couple of ways that you can go about this.
One way is to set up multiple inputs inside Logstash on different ports. Then have two Filebeats running: the regular one, which ships to port 5044 all the time, and an on-demand one that ships to port 5045. Then you just define the matching inputs in your pipelines.yml inside Logstash.
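That first approach might look something like this in pipelines.yml (the hosts, ports, and index names are just examples):

```yaml
# pipelines.yml — two independent pipelines, one per Beats port (illustrative)
- pipeline.id: regular
  config.string: |
    input { beats { port => 5044 } }
    output { elasticsearch { hosts => ["localhost:9200"] index => "weekly-%{+xxxx.ww}" } }
- pipeline.id: ondemand
  config.string: |
    input { beats { port => 5045 } }
    output { elasticsearch { hosts => ["localhost:9200"] index => "ondemand-%{+yyyy.MM.dd}" } }
```

The `%{+xxxx.ww}` sprintf pattern gives you a new index per ISO week, which matches the weekly-analysis requirement.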

The other approach would be a pipeline-to-pipeline setup. To do this you would create a special field to identify which log is being processed. For example, your on-demand filebeat.yml would look something like this:

  - type: log
    enabled: true
    paths:
      - /var/logs/ondemand.log
    fields:
      ondemand: true

This creates a field called fields.ondemand that we can test for in the pipelines.yml file on the Logstash server. So your pipelines.yml would look something like this:

- pipeline.id: filebeats
  config.string: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      if [fields][ondemand] {
        pipeline {
          send_to => [ondemand_run]
        }
      } else {
        pipeline {
          send_to => [regular_run]
        }
      }
    }
- pipeline.id: ondemand_run
  path.config: "/etc/logstash/conf.d/ondemand.conf"
- pipeline.id: regular_run
  path.config: "/etc/logstash/conf.d/regular.conf"

Then all you have to do is set up the input in your Logstash conf files to look like this:

input {
  pipeline {
    address => ondemand_run
  }
}

So you would create a field on each of the Filebeat inputs that you can test on.
Then you would create conditional logic in your pipelines.yml to determine which config to use by creating a virtual address.
Then you set the virtual address in the input of the Logstash conf files. From there you can change the output to whatever index you want and send it to Elasticsearch.
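Putting those last two steps together, the ondemand.conf referenced above might look something like this (the hosts and index name are just examples):

```
# /etc/logstash/conf.d/ondemand.conf (illustrative)
input {
  pipeline {
    address => ondemand_run   # matches send_to in pipelines.yml
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ondemand-%{+yyyy.MM.dd}"
  }
}
```

regular.conf would be identical except for the address (regular_run) and the index name.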

(Amit Joshi) #3

Thank you, Ken, for the response!
For my requirement, the second approach seems reasonable. I will drill down further and design our system.

(system) closed #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.