Logstash Error - causing shutdown and restart

Hi, may I know how to resolve this? I am unable to identify which file is causing this error.

Can you post your configuration file?

Hi Sam, below is my logstash.yml config file.

path.data: /usr/share/logstash/data/
path.config: /etc/logstash/conf.d/
path.logs: /var/log/logstash

pipeline.batch.size: 125
pipeline.batch.delay: 50

log.level: info

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "*************"
xpack.monitoring.elasticsearch.hosts: ["http://IP"]

The problem is in your pipeline configuration, not in logstash.yml; it will be in one of the files in path.config (/etc/logstash/conf.d/). The hosts option on an elasticsearch output expects an array of strings. Logstash is flexible (perhaps confusingly flexible) about allowing "barewords" where strings are expected (i.e. strings not enclosed in quotes), but the periods in an IP address break that parsing.

To put it more simply,

hosts => [ 127.1.2.3:9200 ]

will result in an error and has to be changed to

hosts => [ "127.1.2.3:9200" ]

Hi, I have checked and all the hosts are already formatted like this, but I am still having the issue. Other than that, what else should I check?

Do you have ANY file in the folder /etc/logstash/conf.d ending in .conf which you haven't checked? Some old "disabled" dev pipeline or something, maybe?
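
For example, to see everything in that folder:

ls -la /etc/logstash/conf.d/

(Depending on how path.config is set, Logstash may read every file in that directory, not only *.conf.)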

As Badger said, this is almost definitely an unquoted address in one of the .conf files in that specific location; the pipeline ID "main" and the error in your screenshot both point there. Please double-check.

Hi, yes, I have 3 files there: 01-wazuh.conf / 02-beats-input.conf / 30-elasticsearch-output.conf.

The rest I moved to a backup folder.

Error:

[2019-11-26T15:15:40,998][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, ,, ] at line 91, column 23 (byte 1978) after output {\n if [type] == "stdin-type" {\n elasticsearch {\n hosts => [10.162", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in block in compile_sources'", "org/jruby/RubyArray.java:2577:in map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in block in converge_state'"]}

Try running with --config.debug --log.level debug --config.test_and_exit on the command line. That will show you each file that it is loading as part of the configuration, and it will show you the merged configuration. You can then identify where line 91 is coming from.

Hi, thanks for the suggestion. May I know how I should run it on the command line?

service logstash --config.debug ?
service logstash --log.level debug
service logstash --config.test_and_exit

Correct?

It sounds like you are using a configuration manager. I can't help with that since I do not know which manager you are using.

Oh no, I am running it from the command line, not using any configuration manager.

If you are running logstash on the command line then just add all three options to the command line.
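
For example, assuming a standard package install (adjust the paths for your system):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.debug --log.level debug --config.test_and_exit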

Hi friend, please advise. All the filters and input/output are OK now, but I am getting a pipeline pattern error and the following:

[2019-11-28T17:15:38,171][ERROR][logstash.javapipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{NGINXACCESS} not defined>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in block in compile'", "org/jruby/RubyKernel.java:1425:in loop'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in compile'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:281:in block in register'", "org/jruby/RubyArray.java:1792:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:275:in block in register'", "org/jruby/RubyHash.java:1419:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:270:in register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:191:in block in register_plugins'", "org/jruby/RubyArray.java:1792:in each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:190:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:446:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:203:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:145:in run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:104:in block in start'"], :thread=>"#<Thread:0x4d8c5e run>"}
[2019-11-28T17:15:38,189][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}

There is no grok pattern with the name %{NGINXACCESS} defined. Either you need to define a custom pattern with this name yourself, or you have to replace it with some other pattern; for testing, %{GREEDYDATA} or something similar.
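
For illustration, a sketch of what defining it yourself could look like (the pattern file path and its contents are just examples, not taken from your setup). Put a line such as

NGINXACCESS %{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] "%{WORD:method} %{DATA:request}" %{NUMBER:status}

into a file in a patterns directory, then point the grok filter at that directory:

filter {
  grok {
    # directory containing the file that defines NGINXACCESS
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{NGINXACCESS}" }
  }
}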

Please provide the config if this doesn't work for you.

Hi, do you know which file it refers to? I am unable to find the file containing this pattern.

I was following this guide, which should be the official guideline. How come the patterns cannot be matched? Any idea? Or how can I recreate those missing patterns?

logstash.yml:

path.data: /usr/share/logstash/data/
path.config: /etc/logstash/conf.d/
path.logs: /var/log/logstash
http.port: 9610
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "changeme"
xpack.monitoring.elasticsearch.hosts: ["http://ip:9200"]

Hi @skyluke.1987,

I think your issue is coming from the logstash-metadb file.

Just do one thing: remove your old .logstash-metadb file from your user home path.
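
For example (assuming the file sits in your home directory, as above):

rm ~/.logstash-metadb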

Then you need to re-run Logstash from the terminal as you did previously.

Note: this kind of issue comes up when you are importing data from some other server or DB into an existing Elasticsearch index, but the existing index does not match the newly incoming index pattern, or your previous metadata cannot be read/matched.

Thanks
HadoopHelp

Hi, thanks. My Logstash now encounters frequent restarts. Each time it does not last long, only a few minutes.

Hi @skyluke.1987,

so I think you are collecting some data from some source into an Elasticsearch index using the Logstash scheduler?

If yes, please try to increase your scheduler time interval; that may resolve your issue. Also try to check the RAM occupied on the Elasticsearch machine.
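
For example, if the scheduler in question is the jdbc input's schedule option, a sketch might look like this (connection settings elided, cron expression just an example):

input {
  jdbc {
    # ... jdbc_connection_string, jdbc_user, statement, etc. ...
    # run every 5 minutes instead of every minute
    schedule => "*/5 * * * *"
  }
}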

Note: correct me if I am wrong :clap:

Thanks
HadoopHelp