Logstash error

Below is the error I get while running Logstash:

Sending Logstash logs to C:/busapps/rrsb/gbl1/logstash/7.0.0/logs which is now configured via log4j2.properties
[2019-11-01T10:04:00,538][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-11-01T10:04:00,703][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-11-01T10:04:13,129][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, { at line 69, column 5 (byte 1460) after output {\r\n\tstdout {}\r\n \tif ("total" in [tags]) {\r\n \t\telasticsearch {\r\n \t\t\thosts => ["localhost:9200"]\r\n \t\t\tindex => "totalexecution-%{+YYYY}"\r\n\t\t\t\tuser => elastic\r\n\t\t\t\tpassword => 3wUwULD3QJaKke\r\n\t\t\t\r\n \t\t", :backtrace=>["C:/busapps/rrsb/gbl1/logstash/7.0.0/logstash-core/lib/logstash/compiler.rb:41:in compile_imperative'", "C:/busapps/rrsb/gbl1/logstash/7.0.0/logstash-core/lib/logstash/compiler.rb:49:in compile_graph'", "C:/busapps/rrsb/gbl1/logstash/7.0.0/logstash-core/lib/logstash/compiler.rb:11:in block in compile_sources'", "org/jruby/RubyArray.java:2577:in map'", "C:/busapps/rrsb/gbl1/logstash/7.0.0/logstash-core/lib/logstash/compiler.rb:10:in compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in initialize'", "C:/busapps/rrsb/gbl1/logstash/7.0.0/logstash-core/lib/logstash/java_pipeline.rb:23:in initialize'", "C:/busapps/rrsb/gbl1/logstash/7.0.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in execute'", "C:/busapps/rrsb/gbl1/logstash/7.0.0/logstash-core/lib/logstash/agent.rb:325:in block in converge_state'"]}
[2019-11-01T10:04:15,907][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-11-01T10:04:19,360][INFO ][logstash.runner ] Logstash shut down.

config file

The config file has 68 lines, but Logstash is showing an error at line number 69. I am not sure whether Logstash is picking up the config file.
The Logstash version used is 7.0. Please help.
P.S. My previous config file was bigger; it had an else section in the output. I removed it just to identify whether there really is a problem with line number 69.

I would put double quotes around the username and password. What comes immediately after that?
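For example, the output block from your error message would look something like this with the credentials quoted (the closing braces are my assumption; the parse error suggests some of them are missing):

```
output {
    stdout {}
    if ("total" in [tags]) {
        elasticsearch {
            hosts    => ["localhost:9200"]
            index    => "totalexecution-%{+YYYY}"
            user     => "elastic"
            password => "3wUwULD3QJaKke"
        }
    }
}
```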

Are you asking about the pipeline folder configuration which Logstash uses to apply filters etc.?

No, I am asking what comes immediately after the password in your logstash configuration.

Let me give some more info. Below is the error we are getting while Filebeat sends the log to the ELK server:
2019-11-04T12:43:06.072Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://win000587.aze.michelin.com:5044)): dial tcp 10.221.100.180:5044: connectex: No connection could be made because the target machine actively refused it.

And to your question about what comes after the password section:


I even tried to copy and paste

"password =>"

from a working configuration, because sometimes the `=>` itself causes problems; I read that in some threads.

I do not understand how that configuration could produce that error.

You mean to say that everything is fine? Or are you doubtful about something?

Your initial post included a configuration and an error message. I am saying I cannot understand how that configuration could result in the error message that your post included.

There are two error messages I posted. One I obtained from the Logstash log file; that is in the post itself.
The error message in this conversation thread is one more piece of evidence I got from Filebeat.

I am not saying that the config file is the issue; I am just posting all the facts I have. I am still not sure whether this error is because of the config file or whether something else is the reason.

That exception is unquestionably because of the config file.

OK, here are some more observations:

  • Even if I change the configuration (like removing stdout, or removing one more output section), the line number in the error stays the same. Ideally the Logstash service should pick up the new file in $logstash/bin/pipelines.

  • Telnet to port 5044 from the Beats server to the Logstash server is not working.

If you get exception=>"LogStash::ConfigurationError" then the pipeline is not running, so I would not expect it to be listening on port 5044.

Try running with --config.debug --log.level debug --config.test_and_exit on the command line. The configuration will get printed out after a message that says [DEBUG][logstash.config.pipelineconfig] Merged config. Please post the configuration that gets printed.
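For example, from the Logstash install directory on Windows (the `-f` path here is an assumption based on the paths in your logs; adjust it to however you normally start Logstash):

```
bin\logstash.bat --config.debug --log.level debug --config.test_and_exit -f C:\busapps\rrsb\gbl1\logstash\7.0.0\bin\pipelines
```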

As you asked: below is the final part, which I posted as an image. The entire debug result is at the link below.
Logstash output in command line - DebugMode

[DEBUG][logstash.runner          ] *path.config: "C:\\busapps\\rrsb\\gbl1\\logstash\\7.0.0\\bin\\pipelines"

That is a directory. logstash will concatenate all of the files in that directory to form the configuration. Every file. No exceptions. If there is a java heap dump in the directory then logstash will try to parse it as a configuration file.
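Conceptually, the merge behaves something like this sketch (Python purely for illustration; the exact ordering and separator Logstash uses are not guaranteed to match):

```python
from pathlib import Path

def merged_config(config_dir):
    """Concatenate every file in the directory, the way Logstash merges
    a config directory into a single pipeline definition.

    Line numbers in parse errors refer to this merged text, which is why
    an error can point at line 69 even though one file has only 68 lines.
    """
    parts = [p.read_text() for p in sorted(Path(config_dir).iterdir()) if p.is_file()]
    return "\n".join(parts)
```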

Other messages then logged are

Config string {:protocol=>"file", :id=>"C:/busapps/rrsb/gbl1/logstash/7.0.0/bin/pipelines/logstash - Copy.conf"}
Config string {:protocol=>"file", :id=>"C:/busapps/rrsb/gbl1/logstash/7.0.0/bin/pipelines/logstash.conf"}
Config string {:protocol=>"file", :id=>"C:/busapps/rrsb/gbl1/logstash/7.0.0/bin/pipelines/logstash_bkp.conf"}

So it merges those three files to form the configuration. The error is at line 68, which is in the first file. Adding quotes around the passwords in logstash - Copy.conf fixes the errors, but you probably want to move the backup files to a different directory.
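For example (paths taken from the debug output above; the backup directory name is just a suggestion):

```
mkdir C:\busapps\rrsb\gbl1\logstash\config_backup
move "C:\busapps\rrsb\gbl1\logstash\7.0.0\bin\pipelines\logstash - Copy.conf" C:\busapps\rrsb\gbl1\logstash\config_backup\
move "C:\busapps\rrsb\gbl1\logstash\7.0.0\bin\pipelines\logstash_bkp.conf" C:\busapps\rrsb\gbl1\logstash\config_backup\
```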

I am trying to keep only one config file and rerunning. Will post the results in a few minutes. Thanks.

It seems I have corrected the configuration as you suggested. The reason I am sure is that I got the screen below after I tested the config.

But when the Logstash service runs, I again get the following error:

[2019-11-05T17:57:17,176][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"index [totalexecution-2019] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-11-05T17:57:17,176][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2019-11-05T17:57:17,176][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"index [totalexecution-2019] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-11-05T17:57:17,176][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2019-11-05T17:57:17,184][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"index [totalexecution-2019] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-11-05T17:57:17,188][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2019-11-05T17:57:17,188][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"index [totalexecution-2019] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-11-05T17:57:17,188][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"index [totalexecution-2019] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-11-05T17:57:17,188][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"index [totalexecution-2019] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-11-05T17:57:17,188][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>3}

Two kinds of posts discuss this:
1. A disk space issue (which does not apply in my case)
2. This post talks about locked indices

I am not sure how to identify the actual problem.

If elasticsearch has set the index to be read-only by far the most common reason is that disk utilization reached 95%. Even if utilization comes back down the index will remain read-only.

When elasticsearch set the index to be read-only it will have logged the reason why. Check your elasticsearch logs.
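If the logs confirm that the flood-stage disk watermark triggered the block, then once disk space is freed the usual remedy is to clear the setting on the index yourself (the index name here is taken from your error message; run this from Kibana Dev Tools, or the equivalent via curl):

```
PUT totalexecution-2019/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```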

Checking; will get back in a few minutes. Thanks for your observations and guidance so far :slight_smile:

Below is the elastic log. I did have a look at it, but I could not locate exactly where Elasticsearch is setting the index to read-only.
elastic log
However, I can find some words here and there, like:

Desired survivor size 17432576 bytes, new threshold 1 (max 6)

  • age 1: 21928840 bytes, 21928840 total

In some lines the above-mentioned threshold goes up to 6; I am not sure whether that is the place where the index becomes read-only.
Also, one observation: as the above Logstash problem existed for more than a month, a lot of logs have piled up under this index "totalexecution-2019" (which is the index we saw in the error), so we should take that into account as well while zeroing in on the problem.