Cannot create pipeline

Hi,

I am trying to run Logstash with the following command:

logstash -e 'input { stdin{} } output { elasticsearch{ host =>”localhost:9200” } }'

Here are the complete logs from the attempt:

[2017-08-15T12:03:17,000][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x336dcc30 @module_name="fb_apache", @directory="D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/modules/fb_apache/configuration">}
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] -------- Logstash Settings (* means modified) ---------
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] node.name: "DESKTOP-J4IPO2R"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] path.data: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/data"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] *config.string: "input { stdin{} } output { elasticsearch{ host = } }"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] modules.cli: []
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] modules: []
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] config.test_and_exit: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] config.reload.automatic: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] config.reload.interval: 3
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] metric.collect: true
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] pipeline.id: "main"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] pipeline.system: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] pipeline.workers: 8
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] pipeline.output.workers: 1
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] pipeline.batch.size: 125
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] pipeline.batch.delay: 5
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] pipeline.unsafe_shutdown: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] path.plugins: []
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] config.debug: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] *log.level: "trace" (default: "info")
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] version: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] help: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] log.format: "plain"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] http.host: "127.0.0.1"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] http.port: 9600..9700
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] http.environment: "production"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.type: "memory"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.drain: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.page_capacity: 262144000
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.max_bytes: 1073741824
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.max_events: 0
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.checkpoint.acks: 1024
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.checkpoint.writes: 1024
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] queue.checkpoint.interval: 1000
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] dead_letter_queue.enable: false
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] dead_letter_queue.max_bytes: 1073741824
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] slowlog.threshold.warn: -1
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] slowlog.threshold.info: -1
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] slowlog.threshold.debug: -1
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] slowlog.threshold.trace: -1
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] path.queue: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/data/queue"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] path.dead_letter_queue: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/data/dead_letter_queue"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] path.settings: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/config"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] path.logs: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logs"
[2017-08-15T12:03:17,015][DEBUG][logstash.runner ] --------------- Logstash Settings -------------------
[2017-08-15T12:03:17,031][DEBUG][logstash.agent ] Agent: Configuring metric collection
[2017-08-15T12:03:17,031][DEBUG][logstash.instrument.periodicpoller.os] PeriodicPoller: Starting {:polling_interval=>5, :polling_timeout=>120}
[2017-08-15T12:03:17,062][DEBUG][logstash.instrument.periodicpoller.jvm] PeriodicPoller: Starting {:polling_interval=>5, :polling_timeout=>120}
[2017-08-15T12:03:17,078][DEBUG][logstash.instrument.periodicpoller.persistentqueue] PeriodicPoller: Starting {:polling_interval=>5, :polling_timeout=>120}
[2017-08-15T12:03:17,094][ERROR][logstash.agent ] Cannot create pipeline {:reason=>"Expected one of #, => at line 1, column 48 (byte 48) after output { elasticsearch{ host ", :backtrace=>["D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/pipeline.rb:59:in `initialize'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/pipeline.rb:156:in `initialize'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/agent.rb:286:in `create_pipeline'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/agent.rb:95:in `register_pipeline'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/runner.rb:314:in `execute'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/runner.rb:209:in `run'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "D:\JAVA_INTEGRATIONS\ELK\logstash-5.5.1\lib\bootstrap\environment.rb:71:in `(root)'"]}
[2017-08-15T12:03:17,094][DEBUG][logstash.agent ] starting agent
[2017-08-15T12:03:17,094][DEBUG][logstash.instrument.periodicpoller.os] PeriodicPoller: Stopping
[2017-08-15T12:03:17,094][DEBUG][logstash.agent ] Starting puma
[2017-08-15T12:03:17,094][DEBUG][logstash.instrument.periodicpoller.jvm] PeriodicPoller: Stopping
[2017-08-15T12:03:17,094][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2017-08-15T12:03:17,109][DEBUG][logstash.instrument.periodicpoller.persistentqueue] PeriodicPoller: Stopping
[2017-08-15T12:03:17,109][DEBUG][logstash.api.service ] [api-service] start

I would appreciate any feedback on this issue, as well as suggestions on how to solve it.

Thanks.

Since Logstash 2.0, the elasticsearch output doesn't have a host option. It's called hosts. I would've expected a different kind of error because of that, but start by fixing that obvious error.
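For reference, the hosts option takes an array of addresses, so the output block would look like the sketch below ("localhost:9200" is just the address from your command; point it at your actual Elasticsearch node):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```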

Thanks for the quick reply. However, changing the option did not resolve the error, as shown below.

Input: logstash -e 'input { stdin{} } output { elasticsearch{ hosts => [ "localhost:9200" ] }'

[2017-08-15T13:27:54,420][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x336dcc30 @module_name="fb_apache", @directory="D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/modules/fb_apache/configuration">}
[2017-08-15T13:27:54,533][DEBUG][logstash.runner ] -------- Logstash Settings (* means modified) ---------
[2017-08-15T13:27:54,533][DEBUG][logstash.runner ] node.name: "DESKTOP-J4IPO2R"
[2017-08-15T13:27:54,533][DEBUG][logstash.runner ] path.data: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/data"
[2017-08-15T13:27:54,533][DEBUG][logstash.runner ] *config.string: "input { stdin{} } output { elasticsearch{ hosts = localhost:9200 ] }"
[2017-08-15T13:27:54,533][DEBUG][logstash.runner ] modules.cli: []
[2017-08-15T13:27:54,533][DEBUG][logstash.runner ] modules: []
[2017-08-15T13:27:54,533][DEBUG][logstash.runner ] config.test_and_exit: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] config.reload.automatic: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] config.reload.interval: 3
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] metric.collect: true
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] pipeline.id: "main"
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] pipeline.system: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] pipeline.workers: 8
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] pipeline.output.workers: 1
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] pipeline.batch.size: 125
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] pipeline.batch.delay: 5
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] pipeline.unsafe_shutdown: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] path.plugins: []
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] config.debug: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] *log.level: "trace" (default: "info")
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] version: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] help: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] log.format: "plain"
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] http.host: "127.0.0.1"
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] http.port: 9600..9700
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] http.environment: "production"
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.type: "memory"
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.drain: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.page_capacity: 262144000
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.max_bytes: 1073741824
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.max_events: 0
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.checkpoint.acks: 1024
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.checkpoint.writes: 1024
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] queue.checkpoint.interval: 1000
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] dead_letter_queue.enable: false
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] dead_letter_queue.max_bytes: 1073741824
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] slowlog.threshold.warn: -1
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] slowlog.threshold.info: -1
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] slowlog.threshold.debug: -1
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] slowlog.threshold.trace: -1
[2017-08-15T13:27:54,537][DEBUG][logstash.runner ] path.queue: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/data/queue"
[2017-08-15T13:27:54,541][DEBUG][logstash.runner ] path.dead_letter_queue: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/data/dead_letter_queue"
[2017-08-15T13:27:54,541][DEBUG][logstash.runner ] path.settings: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/config"
[2017-08-15T13:27:54,541][DEBUG][logstash.runner ] path.logs: "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logs"
[2017-08-15T13:27:54,541][DEBUG][logstash.runner ] --------------- Logstash Settings -------------------
[2017-08-15T13:27:54,718][DEBUG][logstash.agent ] Agent: Configuring metric collection
[2017-08-15T13:27:54,718][DEBUG][logstash.instrument.periodicpoller.os] PeriodicPoller: Starting {:polling_interval=>5, :polling_timeout=>120}
[2017-08-15T13:27:54,843][DEBUG][logstash.instrument.periodicpoller.jvm] PeriodicPoller: Starting {:polling_interval=>5, :polling_timeout=>120}
[2017-08-15T13:27:54,874][DEBUG][logstash.instrument.periodicpoller.persistentqueue] PeriodicPoller: Starting {:polling_interval=>5, :polling_timeout=>120}
[2017-08-15T13:27:54,905][ERROR][logstash.agent ] Cannot create pipeline {:reason=>"Expected one of #, => at line 1, column 49 (byte 49) after output { elasticsearch{ hosts ", :backtrace=>["D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/pipeline.rb:59:in `initialize'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/pipeline.rb:156:in `initialize'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/agent.rb:286:in `create_pipeline'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/agent.rb:95:in `register_pipeline'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/runner.rb:314:in `execute'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/logstash-core/lib/logstash/runner.rb:209:in `run'", "D:/JAVA_INTEGRATIONS/ELK/logstash-5.5.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "D:\JAVA_INTEGRATIONS\ELK\logstash-5.5.1\lib\bootstrap\environment.rb:71:in `(root)'"]}
[2017-08-15T13:27:54,905][DEBUG][logstash.agent ] starting agent
[2017-08-15T13:27:54,921][DEBUG][logstash.instrument.periodicpoller.os] PeriodicPoller: Stopping
[2017-08-15T13:27:54,921][DEBUG][logstash.agent ] Starting puma
[2017-08-15T13:27:54,921][DEBUG][logstash.instrument.periodicpoller.jvm] PeriodicPoller: Stopping
[2017-08-15T13:27:54,921][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2017-08-15T13:27:54,921][DEBUG][logstash.instrument.periodicpoller.persistentqueue] PeriodicPoller: Stopping
[2017-08-15T13:27:54,921][DEBUG][logstash.api.service ] [api-service] start

I can also confirm that Logstash runs fine with the command below.

logstash -e 'input { stdin{} } output { stdout{} }'

But I am unable to export the output to Elasticsearch. Please let me know if I need to update any configuration on the Elasticsearch side.

Thanks in advance.

You are missing a closing curly brace in the example where you see the error.

Apologies, I pasted it incorrectly. I was actually using the command below.

logstash -e 'input { stdin{} } output { elasticsearch { hosts => [ "localhost:9200" ] } }'

Make sure you're using regular straight quotes everywhere. What you've posted here indicates that you're using curly quotes, but it's unclear whether that's the actual command you ran or whether that damage was introduced somewhere else.

Hi. It works now. Thanks for your time.

I am a beginner who has been working on the ELK stack for the past few days.

I have a few questions:

  1. I see an output.elasticsearch: key configured in filebeat.yml. When I am exporting the logs from Filebeat to Logstash, why do we need to provide the Elasticsearch host in Filebeat? When I remove it, Filebeat doesn't start.
  2. I tried updating the log file with a few more loggers but could not find them when I searched through Kibana. Please help with this.

Thanks in advance.

When I am exporting the logs from Filebeat to Logstash, why do we need to provide the Elasticsearch host in Filebeat?

You don't.
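
Filebeat (at least in the 5.x series) expects a single enabled output, so when shipping to Logstash you replace the output.elasticsearch section rather than just deleting it. A sketch of the relevant part of filebeat.yml; the host and port here are assumptions (5044 is the conventional port for the Logstash beats input):

```
# Comment out the Elasticsearch output...
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# ...and enable the Logstash output instead.
output.logstash:
  hosts: ["localhost:5044"]
```

This assumes a matching beats input is listening on that port on the Logstash side.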

When I remove it, Filebeat doesn't start.

Then you're doing something wrong. Without knowing what you've tried, we can't help.

I tried updating the log file with a few more loggers

What do you mean?

Whenever you can, copy/paste actual configuration instead of attempting to describe it.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.