Logstash not working after changing password

Hi all, I have recently started on the ELK stack, and I have encountered a problem after changing the default passwords.
Current stack version is 7.0.1, based on this repository:
https://github.com/deviantony/docker-elk
After setting the stack up on Docker, I changed the passwords using:
docker-compose exec -T elasticsearch 'bin/elasticsearch-setup-passwords' auto --batch
and replaced the username and password in logstash.yml & kibana.yml, then changed the password in logstash.conf and restarted the whole stack.
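For reference, the settings I changed are along these lines (the usernames and passwords here are placeholders, not my real values):

logstash.yml:
xpack.monitoring.elasticsearch.username: <monitoring user>
xpack.monitoring.elasticsearch.password: <monitoring password>

kibana.yml:
elasticsearch.username: kibana
elasticsearch.password: <kibana password>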

At this point everything was working. I then set my own password for the elastic user and updated the password in logstash.conf; once this was done I restarted Logstash.
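(For anyone trying to reproduce this: one way to set your own password for the elastic user is Elasticsearch's change-password API, e.g. with placeholder values:)

curl -u elastic:<current password> -X POST "http://localhost:9200/_security/user/elastic/_password" -H "Content-Type: application/json" -d '{"password": "<new password>"}'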

From this point on, Logstash fails to start.

Below are the logs:
logstash_1 | [2019-06-14T07:52:39,639][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, { at line 101, column 16 (byte 2020) after output {\r\n\telasticsearch {\r\n\t\thosts => "elasticsearch:9200"\r\n\t\tuser => elastic\r\n\t\tpassword => P", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
logstash_1 | [2019-06-14T07:52:56,899][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", password=>, hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"eadcdc69b7355983ca3a69ecac563286a667376c099396f2ec7dac2089060a4d", user=>"logstash_system", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_a30a99f4-045a-4849-8837-cec8ccb8969d", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
logstash_1 | [2019-06-14T07:52:57,649][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2019-06-14T07:52:57,868][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2019-06-14T07:52:57,941][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
logstash_1 | [2019-06-14T07:52:57,947][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2019-06-14T07:52:58,071][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2019-06-14T07:52:58,182][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x5d43d92 run>"}
logstash_1 | [2019-06-14T07:52:58,856][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2019-06-14T07:53:01,439][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash_1 | [2019-06-14T07:53:07,471][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2019-06-14T07:53:08,217][INFO ][logstash.runner ] Logstash shut down.
docker-elk_logstash_1 exited with code 0

Here is the logstash.conf file that may be causing the problem.
I can't identify where the problem is.

input {
	tcp {
		port => 5000
	}
	gelf {
		port => 12200
		codec => json
		type => nginx
	}

	gelf {
		port => 12201
		codec => json
		type => app
	}

}
filter {

  if [type] == "nginx" {
     json {
      source => message
      add_tag => ["%{tag}"]
      tag_on_failure => ["error"]
    }

    date {
      match => ["timestamp", "ISO8601"]
      target => "@timestamp"
    }

    if "error" in [tags] {
      mutate {
        rename => {"message" => "error"}
      }
    }

    mutate {
      rename => {"tag" => "server_name"}
      remove_field => ["@version", "timestamp", "command", "message", "level"]
    }
  }

  if [type] == "app" {
    json {
      source => message
      add_tag => ["%{tag}"]
    }

    date {
      match => ["timestamp", "ISO8601"]
      target => "@timestamp"
    }

    mutate {
      remove_field => ["@version", "timestamp", "command", "level"]
    }
    # drop debug log
    # if [server_name] == "oauth-server" and [log_level] == "DEBUG"  {
    #   drop { }
    # }

    # if [server_name] == "oauth-server" and  [log_level] == "INFO" {
    #   drop { }
    # }

    # if [server_name] == "bff-server" and "/user/authorise" in [message] {
    #   drop { }
    # }

    # if [server_name] == "bff-server" and "/user/signup" in [message] {
    #   drop { }
    # }

    # if [server_name] == "bff-server" and "/user/reset_password" in [message] {
    #   drop { }
    # }

    # if [server_name] == "bff-server" and "/user/change_password" in [message] {
    #   drop { }
    # }

    # if [server_name] == "bff-server" and "/user/password/reset" in [message] {
    #   drop { }
    # }


    # if [server_name] == "bff-server" and [log_level] == "DEBUG" and "/oauth/token" in [message] {
    #   drop { }
    # }
  }
}


## Add your filters / logstash plugins configuration here

output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		user => elastic
		password => newPassword
	}
}

I believe you need to quote your password, adding " before and after it. While the Logstash configuration language allows some "barewords" to be used without quoting, more complex strings (particularly ones containing special characters) break those rules and need to be quoted.
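With the quoting added, the output section would look like this (keeping the placeholder password from your config):

output {
  elasticsearch {
    hosts    => "elasticsearch:9200"
    user     => "elastic"
    password => "newPassword"
  }
}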

I would advise using environment variables or storing the credentials in the Logstash keystore, which would enable you to have:

output {
  elasticsearch {
    hosts    => "elasticsearch:9200"
    user     => "${ELASTICSEARCH_USERNAME}"
    password => "${ELASTICSEARCH_PASSWORD}"
  }
}

Docs on the keystore and on environment variable substitution can be found in the Logstash reference documentation.
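For example, the keystore entries could be created like this (run inside the Logstash container in a Docker setup; the key names are just the ones referenced above, and each add command prompts for the value):

bin/logstash-keystore create
bin/logstash-keystore add ELASTICSEARCH_USERNAME
bin/logstash-keystore add ELASTICSEARCH_PASSWORD

Alternatively, the values could be passed as environment variables from the logstash service in docker-compose.yml, e.g.:

environment:
  ELASTICSEARCH_USERNAME: elastic
  ELASTICSEARCH_PASSWORD: <your new password>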


Additionally, I believe the error message you have pasted includes the first character of your password, so it would be wise to change the password again.
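If you want to verify the pipeline configuration without restarting the whole stack, a config test along these lines should work (the pipeline path assumes the docker-elk layout, and --path.data points at a temporary directory so the test does not collide with the running instance's lock on the default data directory):

docker-compose exec logstash bin/logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf --path.data /tmp/logstash-config-test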

Thanks, adding quotes to the user and password as you said worked well!
