Understanding pipelines v6.x

Elasticsearch/Logstash version 6.1.1

When I start Logstash with the settings below, I receive the following error:

I have my logstash.yml with:

path.data: /var/lib/logstash
path.logs: /var/log/logstash

# ------------ Metrics Settings --------------
http.host: 10.206.141.101
http.port: 9600

# ------------ xpack settings ----------------
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "http://10.206.141.93:9200"
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "password4"

# ------------ xpack management settings -----
xpack.management.enabled: true
xpack.management.elasticsearch.url: "http://10.206.141.93:9200"
xpack.management.elasticsearch.username: "logstash_system_pipeline"
xpack.management.elasticsearch.password: "password"
xpack.management.logstash.poll_interval: 5s
xpack.management.pipeline.id: ["osquery_kafka","sysmon_kafka"]
[2017-12-21T20:30:25,800][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:31:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:17:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:57:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:26:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:343:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:354:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:354:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:743:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash

I tried creating a pipelines.yml file in /etc/logstash/ and adding my two pipeline IDs, but it doesn't change the result.
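
What I put in pipelines.yml was roughly this, just the two IDs (a sketch, not my exact file):

- pipeline.id: osquery_kafka
- pipeline.id: sysmon_kafka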

All of my configs are loaded in the pipeline management section of Kibana. Both pipelines read data from Kafka and write it to Elasticsearch.
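
For context, each pipeline is roughly this shape (a simplified sketch with placeholder brokers, topics, and credentials, not my exact configs):

input {
  kafka {
    bootstrap_servers => "KAFKA_HOST:9092"   # placeholder broker
    topics => ["sysmon"]                     # placeholder topic
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["ES_HOST:9200"]                # placeholder host
    index => "sysmon-%{+YYYY.MM.dd}"
    user => "USER"
    password => "PASSWORD"
  }
}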

Can someone help me understand the error?

[2017-12-21T20:30:25,800][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", ...

This error message is pretty clear, isn't it? Your elasticsearch output configuration appears to reference a file that doesn't exist.
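
For example, you would get that exact message if the pipeline config Logstash pulled down had something like this in the elasticsearch output (a hypothetical snippet, not necessarily your config):

elasticsearch {
  hosts => ["ES_HOST:9200"]
  template => ""    # an empty or non-existent path here produces "Template file '' could not be found!"
}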

I will post my output config when I get in, but it doesn't point to any template. That's the confusing part.

Also, do you have to have a pipelines.yml file if you are specifying the pipeline IDs in logstash.yml?

Here is my output config:

output {
  if [@metadata][source] == "sysmon-kafka" {
    elasticsearch {
      hosts => ["10.206.141.233:9200"]
      index => "sysmon-%{+YYYY.MM.dd}"
      user => "USER"
      password => "password"
    }
  }
}

And I didn't pay enough attention to the paste in my first post. This is the error I am getting:

[2017-12-22T12:12:05,267][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:213:in `get_event_type'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:165:in `event_action_params'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:39:in `event_action_tuple'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `block in multi_receive'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:50:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:487:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:486:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:438:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:393:in `block in start_workers'"]}

An unexpected error occurred! {:error=>#, :backtrace=>

This error message has been mangled. What comes after "#" in the original log?

Pasting it as plain text:

[2017-12-22T12:22:36,280][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<NoMethodError: undefined method `<' for nil:NilClass>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:213:in `get_event_type'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:165:in `event_action_params'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:39:in `event_action_tuple'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `block in multi_receive'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:50:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:487:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:486:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:438:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:393:in `block in start_workers'"]}

That is copied from the log as it's running now. It also keeps repeating endlessly because of this error.

Hmm. As a workaround, can you try setting the elasticsearch output's document_type option to e.g. "doc"?
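
Something like this, adapting the output block you posted above (values are your placeholders):

elasticsearch {
  hosts => ["10.206.141.233:9200"]
  index => "sysmon-%{+YYYY.MM.dd}"
  document_type => "doc"    # explicit type as a workaround
  user => "USER"
  password => "password"
}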

That seems to have worked. Can you explain what is happening here? I'm very curious.

It looks like a bug in the elasticsearch output. When document_type is set, the plugin takes a different code path that avoids the crash site. Judging by the backtrace, get_event_type calls `<` on nil while trying to pick a default type, most likely because the Elasticsearch version it detected is nil; an explicit document_type skips that branch entirely.

Thank you, it's working again. Are feature requests made on Git? It would be nice to be able to use Kibana to create pipeline IDs on specific Logstash nodes, instead of having to SSH to the node, add them to logstash.yml, and then go back to Kibana to add the pipeline config. I believe that would make creating and using this new (and great) feature easier, and would let people without SSH or sudo access create pipelines; let's admit it, there are people who can have that access in Kibana but have no reason to have sudo access to an entire machine.

Are feature requests made on Git?

GitHub? Yes.
