Hi, I have some issues shipping internal monitoring data from Logstash to Elasticsearch. I wasn't sure whether this belongs in the Logstash or the Elasticsearch forum, so I'm starting here.
I installed Logstash and Elasticsearch from RPM on CentOS; the stack version is 7.4.2. I am using a Basic license with security enabled.
I am struggling with the following error message in the Logstash logs, and no data from the Logstash process shows up in Kibana's Monitoring UI.
Error message in the Logstash logs:
[2020-01-15T10:17:08,598][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s"}
Complete log:
[2020-01-15T10:16:09,666][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.4.2"}
[2020-01-15T10:16:11,132][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2020-01-15T10:16:11,133][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2020-01-15T10:16:13,648][INFO ][org.reflections.Reflections] Reflections took 45 ms to scan 1 urls, producing 20 keys and 40 values
[2020-01-15T10:16:52,579][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_ingest:xxxxxx@es-lb.local:9200/]}}
[2020-01-15T10:16:53,642][WARN ][logstash.outputs.elasticsearch][commonOutElasticsearch] Restored connection to ES instance {:url=>"https://logstash_ingest:xxxxxx@es-lb.local:9200/"}
[2020-01-15T10:16:53,650][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] ES Output version determined {:es_version=>7}
[2020-01-15T10:16:53,650][WARN ][logstash.outputs.elasticsearch][commonOutElasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-01-15T10:16:53,764][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es-lb.local:9200"]}
[2020-01-15T10:16:54,635][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] Using default mapping template
[2020-01-15T10:16:54,773][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][commonOutElasticsearch] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:54,775][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][httpd_access] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:54,775][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][auskunft_json] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:54,781][INFO ][logstash.javapipeline ][httpd_access] Starting pipeline {:pipeline_id=>"httpd_access", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x60cf97e9 run>"}
[2020-01-15T10:16:54,782][INFO ][logstash.javapipeline ][auskunft_json] Starting pipeline {:pipeline_id=>"auskunft_json", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x9901e19@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38 run>"}
[2020-01-15T10:16:54,782][INFO ][logstash.javapipeline ][commonOutElasticsearch] Starting pipeline {:pipeline_id=>"commonOutElasticsearch", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x6cffdc69@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:37 run>"}
[2020-01-15T10:16:54,809][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-01-15T10:16:56,018][INFO ][logstash.inputs.redis ][httpd_access] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:httpd_access"}
[2020-01-15T10:16:56,026][INFO ][logstash.inputs.redis ][auskunft_json] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:auskunft_json"}
[2020-01-15T10:16:56,063][INFO ][logstash.javapipeline ][commonOutElasticsearch] Pipeline started {"pipeline.id"=>"commonOutElasticsearch"}
[2020-01-15T10:16:56,063][INFO ][logstash.javapipeline ][auskunft_json] Pipeline started {"pipeline.id"=>"auskunft_json"}
[2020-01-15T10:16:56,080][INFO ][logstash.javapipeline ][httpd_access] Pipeline started {"pipeline.id"=>"httpd_access"}
[2020-01-15T10:16:56,087][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][generic_json] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:56,088][INFO ][logstash.javapipeline ][generic_json] Starting pipeline {:pipeline_id=>"generic_json", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x8eb9d50@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:37 run>"}
[2020-01-15T10:16:56,154][INFO ][logstash.inputs.redis ][generic_json] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:generic-json"}
[2020-01-15T10:16:56,155][INFO ][logstash.inputs.redis ][generic_json] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:generic-json-root"}
[2020-01-15T10:16:56,172][INFO ][logstash.javapipeline ][generic_json] Pipeline started {"pipeline.id"=>"generic_json"}
[2020-01-15T10:16:56,295][INFO ][logstash.agent ] Pipelines running {:count=>5, :running_pipelines=>[:httpd_access, :generic_json, :commonOutElasticsearch, :auskunft_json], :non_running_pipelines=>[]}
[2020-01-15T10:16:57,953][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", ssl_certificate_verification=>false, password=><password>, hosts=>[//es-lb.local:9200], cacert=>"/etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt", sniffing=>false, manage_template=>false, id=>"d2bd83dad561381257de0b203724b86122bc0132449508529f15cb48cc583204", user=>"remote_monitoring_user", ssl=>true, document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_3d4aab69-94d0-42a0-a510-e66974c44d90", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2020-01-15T10:16:58,017][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[2020-01-15T10:16:58,043][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://remote_monitoring_user:xxxxxx@es-lb.local:9200/]}}
[2020-01-15T10:16:58,140][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Restored connection to ES instance {:url=>"https://remote_monitoring_user:xxxxxx@es-lb.local:9200/"}
[2020-01-15T10:16:58,159][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2020-01-15T10:16:58,160][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-01-15T10:16:58,187][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es-lb.local:9200"]}
[2020-01-15T10:16:58,205][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x60584987 run>"}
[2020-01-15T10:16:58,250][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2020-01-15T10:16:58,260][INFO ][logstash.agent ] Pipelines running {:count=>6, :running_pipelines=>[:".monitoring-logstash", :httpd_access, :generic_json, :commonOutElasticsearch, :auskunft_json], :non_running_pipelines=>[]}
[2020-01-15T10:16:58,534][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-15T10:17:08,598][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s"}
[2020-01-15T10:17:10,631][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s"}
There is no corresponding entry in the Elasticsearch log file.
For monitoring I use the standard remote_monitoring_user that ships with Elasticsearch. I only set its password; I made no other changes to the user or its roles.
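Since the 403 points at an authorization problem rather than a connection problem, it may help to confirm what that user can actually do on the cluster. A diagnostic sketch, assuming the es-lb.local endpoint and the ca.crt path from the config below (you will be prompted for the passwords):

```shell
# Verify that the monitoring user authenticates against the cluster and
# list the roles it actually has. The built-in remote_monitoring_user is
# expected to carry the remote_monitoring_agent and
# remote_monitoring_collector roles.
curl --cacert /etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt \
  -u remote_monitoring_user \
  'https://es-lb.local:9200/_security/_authenticate?pretty'

# Inspect the role definition that grants access to the monitoring
# indices (run as a superuser, e.g. elastic).
curl --cacert /etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt \
  -u elastic \
  'https://es-lb.local:9200/_security/role/remote_monitoring_agent?pretty'
```

If `_authenticate` already fails or the roles list is empty, the 403 is explained; if the roles look right, the problem is more likely in front of the cluster.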
My logstash.yml looks like this:
node.name: kubernetes03-plx-0
path.data: /var/lib/logstash/plx/0
log.level: info
path.logs: /var/log/logstash/plx/0
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED}
xpack.monitoring.elasticsearch.username: ${monitoring_user}
xpack.monitoring.elasticsearch.password: ${monitoring_password}
xpack.monitoring.elasticsearch.hosts: ${XPACK_MONITORING_ELASTICSEARCH_HOSTS}
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt"
xpack.monitoring.elasticsearch.ssl.verification_mode: ${XPACK_MONITORING_ELASTICSEARCH_SSL_VERIFICATION_MODE}
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: ${XPACK_MONITORING_COLLECTION_INTERVAL}
xpack.monitoring.collection.pipeline.details.enabled: true
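To narrow down whether the 403 comes from the user's privileges or from something in between (es-lb.local looks like a load balancer that might alter the request), you could replay the failing request by hand as the same user. A sketch under the same host/CA assumptions as above; the empty body is only there to trigger the server's auth check:

```shell
# Replay the monitoring bulk request Logstash sends and print only the
# HTTP status code. Authorization is checked before the body is parsed,
# so: 403 = same privilege problem as Logstash sees, 401 = wrong
# credentials, 400 = auth passed and only the empty body was rejected.
curl -s -o /dev/null -w '%{http_code}\n' \
  --cacert /etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt \
  -u remote_monitoring_user \
  -H 'Content-Type: application/x-ndjson' \
  -X POST \
  'https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s' \
  --data-binary $'\n'
```

Comparing the result against a request sent directly to one of the Elasticsearch nodes (bypassing es-lb.local) would also show whether the load balancer is involved.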
Can you please help here?
Thanks a lot, Andreas