Attempted to send a bulk request to Elasticsearch configured at

Hi, I have an issue with Logstash -> Search Guard (Elasticsearch). Everything works fine without Search Guard.

output {
  elasticsearch {
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    ssl => true
    ssl_certificate_verification => true
    keystore => '/opt/logstash/localhost.jks'
    keystore_password => '*******'
    truststore => '/opt/logstash/truststore.jks'
    truststore_password => '********'
  }
  stdout { codec => rubydebug }
}

{:timestamp=>"2016-08-09T23:15:42.650000+0100", :message=>"[401] ", :class=>"Elasticsearch::Transport::Transport::Errors::Unauthorized", :backtrace=>[
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:201:in `__raise_transport_error'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-

{:timestamp=>"2016-08-09T23:15:44.667000+0100", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"https://127.0.0.1:9200\"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :error_message=>"[401] ", :error_class=>"Elasticsearch::Transport::Transport::Errors::Unauthorized", :backtrace=>[
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:201:in `__raise_transport_error'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:312:in `perform_request'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/client.rb:128:in `perform_request'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.18/lib/elasticsearch/api/actions/bulk.rb:90:in `bulk'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'",
  "org/jruby/ext/thread/Mutex.java:149:in `synchronize'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:172:in `safe_bulk'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'",
  "org/jruby/RubyArray.java:1653:in `each_slice'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:28

Please help!

It looks like your ES server requires you to authenticate but you haven't provided any username and password.
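For reference, this means adding the `user` and `password` options to the elasticsearch output. A minimal sketch (the "logstash" username and password are placeholders; use whichever user you defined in Search Guard):

```
output {
  elasticsearch {
    # Placeholder credentials: substitute the user you created in Search Guard
    user => "logstash"
    password => "changeme"
    # ...keep the existing ssl/keystore/truststore options as they were...
  }
}
```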

Cool, thanks magnusbaeck :=)! That error is fixed, but now I see nothing happening in Logstash or Elasticsearch when I push logs from Filebeat.

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "logstash"
    password => "*****"
    ssl => true
    ssl_certificate_verification => true
    keystore => '/opt/logstash/localhost-keystore.jks'
    keystore_password => ''
    truststore => '/opt/logstash/truststore.jks'
    truststore_password => '*'
  }
  stdout { codec => rubydebug }
}

Logstash
{:timestamp=>"2016-08-10T01:23:46.135000+0100", :message=>"Pipeline main started"}

Elasticsearch
[2016-08-09 23:09:54,784][INFO ][node ] [Typhoid Mary] started
[2016-08-09 23:09:54,936][INFO ][gateway ] [Typhoid Mary] recovered [1] indices into cluster_state
[2016-08-09 23:09:55,754][INFO ][cluster.routing.allocation] [Typhoid Mary] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][2], [.kibana][2]] ...]).
[2016-08-09 23:10:00,200][INFO ][cluster.metadata ] [Typhoid Mary] [searchguard] creating index, cause [api], templates [], shards [1]/[0], mappings []
[2016-08-09 23:10:00,609][INFO ][cluster.routing.allocation] [Typhoid Mary] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[searchguard][0]] ...]).
[2016-08-09 23:10:01,181][INFO ][cluster.metadata ] [Typhoid Mary] [searchguard] create_mapping [config]
[2016-08-09 23:10:01,685][INFO ][cluster.metadata ] [Typhoid Mary] [searchguard] create_mapping [roles]
[2016-08-09 23:10:01,826][INFO ][cluster.metadata ] [Typhoid Mary] [searchguard] create_mapping [rolesmapping]
[2016-08-09 23:10:01,979][INFO ][cluster.metadata ] [Typhoid Mary] [searchguard] create_mapping [internalusers]
[2016-08-09 23:10:02,100][INFO ][cluster.metadata ] [Typhoid Mary] [searchguard] create_mapping [actiongroups]

filebeat:
2016/08/10 09:35:59.023799 prospector.go:143: INFO Starting prospector of type: log
2016/08/10 09:35:59.024135 crawler.go:78: INFO All prospectors initialised with 1 states to persist
2016/08/10 09:35:59.024156 registrar.go:87: INFO Starting Registrar
2016/08/10 09:35:59.024190 publish.go:88: INFO Start sending events to output
2016/08/10 09:35:59.024249 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/08/10 09:35:59.024419 log.go:113: INFO Harvester started for file: /Users/BUJAR/IdeaProjects/dropwizard-metrices-example/myapplication.log
2016/08/10 09:36:21.528929 publish.go:109: DBG Publish: {
  "@timestamp": "2016-08-10T09:36:14.028Z",
  "beat": {
    "hostname": "Bujars-MacBook-Pro.local",
    "name": "Bujars-MacBook-Pro.local"
  },
  "count": 1,
  "input_type": "log",
  "message": "INFO [2016-08-10 09:36:12,882] com.metrices.example.resource.ExampleResource: value='Hello, 2312313!",
  "offset": 248140,
  "service": "my example service",
  "source": "/Users/BUJAR/IdeaProjects/dropwizard-metrices-example/myapplication.log",
  "type": "log"
}
2016/08/10 09:36:21.529044 publish.go:109: DBG Publish: {
  "@timestamp": "2016-08-10T09:36:14.028Z",
  "beat": {
    "hostname": "Bujars-MacBook-Pro.local",
    "name": "Bujars-MacBook-Pro.local"
  },
  "count": 1,
  "input_type": "log",
  "message": "INFO [2016-08-10 09:36:12,882] com.metrices.example.resource.ExampleResource: counter='6",
  "offset": 248242,
  "service": "my example service",
  "source": "/Users/BUJAR/IdeaProjects/dropwizard-metrices-example/myapplication.log",
  "type": "log"
}

Does Logstash's stdout output produce any output, i.e. is there any evidence that Logstash is receiving any data?

I believe it's working

{
    "message" => "INFO [2016-08-10 10:02:20,591] org.eclipse.jetty.server.ServerConnector: Started admin@5fcacc0{HTTP/1.1}{0.0.0.0:8081}",
    "@version" => "1",
    "@timestamp" => "2016-08-10T10:03:35.612Z",
    "beat" => {
        "hostname" => "Bujars-MacBook-Pro.local",
        "name" => "Bujars-MacBook-Pro.local"
    },
    "offset" => 260003,
    "type" => "log",
    "service" => "my example service",
    "input_type" => "log",
    "count" => 1,
    "source" => "/Users/BUJAR/IdeaProjects/dropwizard-metrices-example/myapplication.log",
    "host" => "Bujars-MacBook-Pro.local",
    "tags" => [
        [0] "beats_input_codec_plain_applied"
    ]
}
{
    "message" => "INFO [2016-08-10 10:02:20,591] org.eclipse.jetty.server.Server: Started @4208ms",
    "@version" => "1",
    "@timestamp" => "2016-08-10T10:03:35.612Z",
    "source" => "/Users/BUJAR/IdeaProjects/dropwizard-metrices-example/myapplication.log",
    "type" => "log",
    "input_type" => "log",
    "count" => 1,
    "service" => "my example service",
    "beat" => {
        "hostname" => "Bujars-MacBook-Pro.local",
        "name" => "Bujars-MacBook-Pro.local"
    },
    "offset" => 260123,
    "host" => "Bujars-MacBook-Pro.local",
    "tags" => [
        [0] "beats_input_codec_plain_applied"
    ]
}

This is the output from logstash.stdout.

thank you very much magnusbaeck :slight_smile:

And how do you conclude that there's nothing happening on the ES side? Where are you looking?

I am checking elasticsearch.log and I see no activity.

When I connect with Kibana, I can see from elasticsearch.log that I am hitting Elasticsearch, but I am getting a permissions issue:
[2016-08-10 12:00:25,420][INFO ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] No perm match for indices:data/read/field_stats and [sg_kibana4_server, sg_public]
[2016-08-10 12:00:32,414][INFO ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] No perm match for indices:data/read/field_stats and [sg_kibana4_server, sg_public]
(the same "No perm match" line repeats roughly every seven seconds through 12:01:41)
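This message means the listed roles lack the indices:data/read/field_stats permission. A sketch of what granting it might look like in sg_roles.yml, using the Search Guard 2.x configuration style (the wildcard index/type patterns are assumptions; scope them to your actual indices):

```yaml
# Sketch only: grant field_stats to the Kibana server role.
# The '*' index and type patterns are placeholders; narrow them to your indices.
sg_kibana4_server:
  indices:
    '*':
      '*':
        - indices:data/read/field_stats
```

After editing sg_roles.yml, the configuration has to be pushed into the searchguard index again (with the sgadmin tool) before it takes effect.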

i am checking elasticsearch.log and I see no activity

That's not a very good test, since Elasticsearch doesn't log that much during normal operations. I suggest you use the cat indices API to see which indices you have, how many documents they contain, etc.

Thanks magnusbaeck, but I get nothing when I do:

curl -k 'https://localhost:9200/_cat/indices/twi*?v'
curl --cacert /tmp/example-pki-scripts/ca/root-ca.pem 'https://localhost:9200/_cat/indices/twi*?v'

The "twi" part of the documentation was an example. Remove it.

still nothing
curl --cacert /tmp/example-pki-scripts/ca/root-ca.pem https://localhost:9200/_cat/indices
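One thing worth checking: with Search Guard enabled, the _cat endpoints themselves require authentication, so an unauthenticated curl can come back empty or with a 401. A sketch (admin:admin is a placeholder for a user that has the necessary permissions; the URL is quoted so the shell doesn't try to glob * and ?):

```
# Placeholder credentials; quote the URL to stop shell globbing of * and ?
curl --cacert /tmp/example-pki-scripts/ca/root-ca.pem \
     -u admin:admin 'https://localhost:9200/_cat/indices?v'
```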