Logstash 5 Alpha 4 with X-Pack: getting an error in the log

Elasticsearch is secured with X-Pack security and hooked up to LDAP, which is working fine. The user even has admin rights in role_mapping.
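For context, the admin mapping I mean is in role_mapping.yml and looks roughly like this (the DNs below are placeholders, not my real LDAP entries):

superuser:
  - "cn=admins,ou=groups,dc=example,dc=com"
  - "cn=gaurav,ou=people,dc=example,dc=com"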

Below is the configuration:
output {
  elasticsearch {
    hosts => ['localhost:9200']
    user => 'gaurav@gmail.com'
    password => 'pwd'
  }
}

I am getting the error below, and because of it Logstash is not starting up correctly.

{:timestamp=>"2016-07-14T11:40:39.917000+0530", :message=>"Pipeline aborted due to error", :exception=>#<URI::InvalidComponentError: bad component(expected userinfo component or user component): gaurav@gmail.com>, :backtrace=>["/usr/share/logstash/vendor/jruby/lib/ruby/1.9/uri/generic.rb:412:in check_user'", "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/uri/generic.rb:483:inuser='", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:242:in normalize_url'", "org/jruby/RubyArray.java:2414:inmap'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:253:in update_urls'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:66:ininitialize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:135:in build_pool'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:20:ininitialize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:52:in build'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch.rb:159:inbuild_client'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:13:in register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:86:inregister'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:184:in start_workers'", "org/jruby/RubyArray.java:1613:ineach'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:184:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:139:inrun'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:240:in `start_pipeline'"], :level=>:error, :file=>"logstash/agent.rb", :line=>"242", :method=>"start_pipeline"}

Please help me solve this problem.

At a guess, it doesn't like the @ in the username.
Can you try with a username that doesn't have one?
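Something like this, for example; 'logstash_writer' below is just a placeholder for any account that has no @ in its name:

output {
  elasticsearch {
    hosts => ['localhost:9200']
    user => 'logstash_writer'   # placeholder account without an @
    password => 'pwd'
  }
}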

You are right. Thanks a lot for the quick suggestion.

Now I see the exception below:

{:timestamp=>"2016-07-14T16:32:29.592000+0530", :message=>"Encountered an unexpected error submitting a bulk request! Will retry.", :error_message=>"undefined method code' for #<LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError:0x97f8a30>", :class=>"NoMethodError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:217:insafe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:105:in submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:72:inretrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:23:in multi_receive'", "org/jruby/RubyArray.java:1653:ineach_slice'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:22:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:136:inthreadsafe_multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:122:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:321:inoutput_batch'", "org/jruby/RubyHash.java:1342:in each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:321:inoutput_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:249:in worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:210:instart_workers'"], :level=>:error, :file=>"logstash/outputs/elasticsearch/common.rb", :line=>"78", :method=>"retrying_submit"}

And no data is being passed to Elasticsearch. Please share any suggestions.

@warkolm, do you have any more suggestions on the above issue?

I am having the same issue here using Logstash 5 and the Elasticsearch output plugin. Here is the error I am getting:

{:timestamp=>"2016-07-28T14:56:08.854000+0000", :message=>"Encountered an unexpected error submitting a bulk request! Will retry.", :error_message=>"undefined method code' for #<LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError:0xb1765fd>", :class=>"NoMethodError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:217:in safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:182:in safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:105:in submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:72:in retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:23:in multi_receive'", "org/jruby/RubyArray.java:1653:in each_slice'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:22:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:136:in threadsafe_multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:122:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:321:in output_batch'", "org/jruby/RubyHash.java:1342:in each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:321:in output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:249:in worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:210:in `start_workers'"], :level=>:error}

And here is the configuration I am using on the Logstash server:

output {
  elasticsearch {
    hosts => ["elasticsearch-server:443"]
    codec => "json"
    ssl => true
    ssl_certificate_verification => true
  }
}
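For anyone debugging something similar, a quick way to sanity-check the HTTPS endpoint outside Logstash is a plain curl request; the CA path, user, and password below are placeholders to adjust for your setup:

# placeholders: adjust CA path, user, and password to your environment
curl -v --cacert /path/to/ca.pem -u elastic_user:changeme https://elasticsearch-server:443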

If you have any ideas about this, that would be great!

Thank you very much!

After looking around, I saw that there is an issue with the current version of the elasticsearch output plugin. After updating it, the message changed:

{:timestamp=>"2016-07-28T17:13:24.193000+0000", :message=>"Got a bad response code from server, but this code is not considered retryable. Request will be dropped", :code=>403, :level=>:error}

It seems there was a bug in the previous version that prevented the correct error message from being reported.
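For reference, the update itself was just the standard plugin update command for the output plugin named in the stack trace above (the path assumes the default package install location):

/usr/share/logstash/bin/logstash-plugin update logstash-output-elasticsearch

As for the 403 itself, my guess is that the user Logstash authenticates as lacks write privileges on the target indices. Something along these lines in roles.yml is the kind of grant that user would need; the role name, index pattern, and privilege list here are illustrative, not taken from my cluster:

logstash_writer:
  cluster:
    - manage_index_templates
  indices:
    - names: [ "logstash-*" ]
      privileges: [ "write", "create_index" ]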

Have you resolved this 403 error? How?