Best way to configure multiple output blocks

My question is about the best way to configure multiple output blocks.

In my case:

input {
  beats {}
}
filter {
  if [type] {}
  else if [type] {}
  else if [type] {}
  else [type] {}
}
output {
  if [type] {}
  else if [type] {}
  else if [type] {}
  else {}
}

A single input, multiple filters, and multiple outputs.
The reason I did not configure separate pipelines is that I don't want to create many workers,
so I wrote the configuration above in one pipeline.
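
For reference, I understand the separate-pipeline alternative I avoided would look something like this in pipelines.yml, where each pipeline can cap its own worker count (IDs and paths are illustrative):

```yaml
- pipeline.id: type_a
  path.config: "/etc/logstash/conf.d/type_a.conf"
  pipeline.workers: 1
- pipeline.id: type_b
  path.config: "/etc/logstash/conf.d/type_b.conf"
  pipeline.workers: 1
```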

And I hit this error:

[ERROR][org.logstash.execution.WorkerLoop][main] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.
org.jruby.exceptions.SystemCallError: (SystemCallError) Unknown error (SystemCallError) - <STDOUT>
        at org.jruby.RubyIO.write(org/jruby/RubyIO.java:1477) ~[jruby-complete-9.2.9.0.jar:?]
        at org.jruby.RubyIO.write(org/jruby/RubyIO.java:1432) ~[jruby-complete-9.2.9.0.jar:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_stdout_minus_3_dot_1_dot_4.lib.logstash.outputs.stdout.multi_receive_encoded(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-stdout-3.1.4/lib/logstash/outputs/stdout.rb:43) ~[?:?]
        at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1814) ~[jruby-complete-9.2.9.0.jar:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_stdout_minus_3_dot_1_dot_4.lib.logstash.outputs.stdout.multi_receive_encoded(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-stdout-3.1.4/lib/logstash/outputs/stdout.rb:42) ~[?:?]
        at usr.share.logstash.logstash_minus_core.lib.logstash.outputs.base.multi_receive(/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:87) ~[?:?]
        at org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.multi_receive(org/logstash/config/ir/compiler/OutputStrategyExt.java:118) ~[logstash-core.jar:?]
        at org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101) ~[logstash-core.jar:?]
        at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:262) ~[?:?]

I think this error means there are not enough cores to create the workers.

So, what is the best way to configure Logstash in my case?

Your last else shouldn't contain any expression, as it's designed to capture anything else. If you want to use an expression there, you should change it to else if.
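
For example, a sketch with assumed type values:

```
output {
  if [type] == "type_A" {
    # handle type_A
  } else if [type] == "type_B" {
    # handle type_B
  } else {
    # plain else: catches everything else, no expression allowed
  }
}
```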

Also, I assume the types are defined in the inputs?

Thank you for replying.

I made a mistake when explaining my case, and I have fixed my question. Thank you.

Yes, the types are defined in the inputs.
The input receives events from Filebeat, and I set this in the Filebeat config file:

- type: log
  enabled: true
  paths:
    - /var/log/*.log
  fields:
    document_type: type_A

I hope you're not mixing Filebeat input types with a field [type] in your Logstash configuration.

Using your Filebeat configuration above, shouldn't your Logstash filter be:

filter {
  if [fields][document_type] == "type_A" { your filter here }
  else if [fields][document_type] == "type_B" { your filter here }
}

Similarly for your output.
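
For instance, the output could follow the same pattern; a minimal sketch, assuming Elasticsearch outputs with illustrative hosts and index names:

```
output {
  if [fields][document_type] == "type_A" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "type-a-%{+YYYY.MM.dd}"
    }
  } else if [fields][document_type] == "type_B" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "type-b-%{+YYYY.MM.dd}"
    }
  }
}
```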

Hi @yj_h,

are all those outputs Elasticsearch with just different index names, or are the outputs a mix of different output types?

If they are all Elasticsearch outputs and you are only changing the index, you might be able to leverage @metadata fields instead of conditionals.

My Elasticsearch output looks like this:

output {
  elasticsearch {
        hosts => ["10.0.0.1:9200"]
        index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{[@metadata][rotation]}"
  }
} 

This lets me set those @metadata fields on inputs or in the filter section and keeps the output config very simple.
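
For example, a sketch of setting those @metadata fields in the filter section, assuming the [fields][document_type] value from the Filebeat config above (the prefix and index names are illustrative):

```
filter {
  if [fields][document_type] == "type_A" {
    mutate {
      add_field => {
        "[@metadata][log_prefix]" => "app"
        "[@metadata][index]" => "type-a"
        "[@metadata][rotation]" => "%{+YYYY.MM.dd}"
      }
    }
  }
}
```

Because @metadata fields are never written by the outputs themselves, they work well for routing without polluting the stored documents.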

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.