Issues sending monitoring data from logstash to elasticsearch

Hi, I have some issues shipping internal monitoring data from Logstash to Elasticsearch. I wasn't sure whether this belongs in the Logstash or the Elasticsearch forum, so I'm starting here :wink:

I installed Logstash and Elasticsearch from RPM on CentOS. The stack version is 7.4.2.
I am using a basic license with security enabled.

I am struggling with the following error message in the Logstash logs, and I don't see any data for the Logstash process in Kibana's monitoring UI.

Error message in logstash logs:

[2020-01-15T10:17:08,598][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Encountered a retryable error. Will Retry with exponential backoff  {:code=>403, :url=>"https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s"}

complete log:

[2020-01-15T10:16:09,666][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.4.2"}
[2020-01-15T10:16:11,132][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2020-01-15T10:16:11,133][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2020-01-15T10:16:13,648][INFO ][org.reflections.Reflections] Reflections took 45 ms to scan 1 urls, producing 20 keys and 40 values
[2020-01-15T10:16:52,579][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_ingest:xxxxxx@es-lb.local:9200/]}}
[2020-01-15T10:16:53,642][WARN ][logstash.outputs.elasticsearch][commonOutElasticsearch] Restored connection to ES instance {:url=>"https://logstash_ingest:xxxxxx@es-lb.local:9200/"}
[2020-01-15T10:16:53,650][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] ES Output version determined {:es_version=>7}
[2020-01-15T10:16:53,650][WARN ][logstash.outputs.elasticsearch][commonOutElasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-01-15T10:16:53,764][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es-lb.local:9200"]}
[2020-01-15T10:16:54,635][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] Using default mapping template
[2020-01-15T10:16:54,773][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][commonOutElasticsearch] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:54,775][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][httpd_access] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:54,775][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][auskunft_json] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:54,781][INFO ][logstash.javapipeline    ][httpd_access] Starting pipeline {:pipeline_id=>"httpd_access", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x60cf97e9 run>"}
[2020-01-15T10:16:54,782][INFO ][logstash.javapipeline    ][auskunft_json] Starting pipeline {:pipeline_id=>"auskunft_json", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x9901e19@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38 run>"}
[2020-01-15T10:16:54,782][INFO ][logstash.javapipeline    ][commonOutElasticsearch] Starting pipeline {:pipeline_id=>"commonOutElasticsearch", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x6cffdc69@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:37 run>"}
[2020-01-15T10:16:54,809][INFO ][logstash.outputs.elasticsearch][commonOutElasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-01-15T10:16:56,018][INFO ][logstash.inputs.redis    ][httpd_access] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:httpd_access"}
[2020-01-15T10:16:56,026][INFO ][logstash.inputs.redis    ][auskunft_json] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:auskunft_json"}
[2020-01-15T10:16:56,063][INFO ][logstash.javapipeline    ][commonOutElasticsearch] Pipeline started {"pipeline.id"=>"commonOutElasticsearch"}
[2020-01-15T10:16:56,063][INFO ][logstash.javapipeline    ][auskunft_json] Pipeline started {"pipeline.id"=>"auskunft_json"}
[2020-01-15T10:16:56,080][INFO ][logstash.javapipeline    ][httpd_access] Pipeline started {"pipeline.id"=>"httpd_access"}
[2020-01-15T10:16:56,087][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][generic_json] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-01-15T10:16:56,088][INFO ][logstash.javapipeline    ][generic_json] Starting pipeline {:pipeline_id=>"generic_json", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x8eb9d50@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:37 run>"}
[2020-01-15T10:16:56,154][INFO ][logstash.inputs.redis    ][generic_json] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:generic-json"}
[2020-01-15T10:16:56,155][INFO ][logstash.inputs.redis    ][generic_json] Registering Redis {:identity=>"redis://<password>@redis-lb.local:16379/0 list:generic-json-root"}
[2020-01-15T10:16:56,172][INFO ][logstash.javapipeline    ][generic_json] Pipeline started {"pipeline.id"=>"generic_json"}
[2020-01-15T10:16:56,295][INFO ][logstash.agent           ] Pipelines running {:count=>5, :running_pipelines=>[:httpd_access, :generic_json, :commonOutElasticsearch, :auskunft_json], :non_running_pipelines=>[]}
[2020-01-15T10:16:57,953][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", ssl_certificate_verification=>false, password=><password>, hosts=>[//es-lb.local:9200], cacert=>"/etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt", sniffing=>false, manage_template=>false, id=>"d2bd83dad561381257de0b203724b86122bc0132449508529f15cb48cc583204", user=>"remote_monitoring_user", ssl=>true, document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_3d4aab69-94d0-42a0-a510-e66974c44d90", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2020-01-15T10:16:58,017][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[2020-01-15T10:16:58,043][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://remote_monitoring_user:xxxxxx@es-lb.local:9200/]}}
[2020-01-15T10:16:58,140][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Restored connection to ES instance {:url=>"https://remote_monitoring_user:xxxxxx@es-lb.local:9200/"}
[2020-01-15T10:16:58,159][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2020-01-15T10:16:58,160][WARN ][logstash.outputs.elasticsearch][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-01-15T10:16:58,187][INFO ][logstash.outputs.elasticsearch][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es-lb.local:9200"]}
[2020-01-15T10:16:58,205][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x60584987 run>"}
[2020-01-15T10:16:58,250][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2020-01-15T10:16:58,260][INFO ][logstash.agent           ] Pipelines running {:count=>6, :running_pipelines=>[:".monitoring-logstash", :httpd_access, :generic_json, :commonOutElasticsearch, :auskunft_json], :non_running_pipelines=>[]}
[2020-01-15T10:16:58,534][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-15T10:17:08,598][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Encountered a retryable error. Will Retry with exponential backoff  {:code=>403, :url=>"https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s"}
[2020-01-15T10:17:10,631][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Encountered a retryable error. Will Retry with exponential backoff  {:code=>403, :url=>"https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s"}

I see no corresponding entry in Elasticsearch's log file.

For monitoring I use the standard remote_monitoring_user that ships with Elasticsearch. I only set its password; there are no other changes to the user or its roles.
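Since only the password was set, a quick way to rule out an authentication problem is to call the authenticate endpoint as that user. This is a sketch; the host name and CA path are taken from the config in this thread, so adjust them as needed:

```shell
# Check that remote_monitoring_user can log in at all.
# curl prompts for the password when -u is given without one.
curl --cacert /etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt \
     -u remote_monitoring_user \
     "https://es-lb.local:9200/_security/_authenticate?pretty"
```

A 200 response listing the user's roles means the credentials are fine, and a later 403 is then an authorization problem rather than a login failure.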

My logstash.yml looks like this:

node.name: kubernetes03-plx-0
path.data: /var/lib/logstash/plx/0
log.level: info
path.logs: /var/log/logstash/plx/0
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED}
xpack.monitoring.elasticsearch.username: ${monitoring_user}
xpack.monitoring.elasticsearch.password: ${monitoring_password}
xpack.monitoring.elasticsearch.hosts: ${XPACK_MONITORING_ELASTICSEARCH_HOSTS}
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt"
xpack.monitoring.elasticsearch.ssl.verification_mode: ${XPACK_MONITORING_ELASTICSEARCH_SSL_VERIFICATION_MODE}
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: ${XPACK_MONITORING_COLLECTION_INTERVAL}
xpack.monitoring.collection.pipeline.details.enabled: true
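The `${...}` placeholders above are resolved from environment variables or from the Logstash keystore at startup, so a wrong or missing substitution would also surface as a failed monitoring request. The stored keys can be listed like this (just a sketch; the path assumes the default RPM layout):

```shell
# Show which keys exist in the Logstash keystore (values stay hidden).
# The RPM installs Logstash under /usr/share/logstash.
/usr/share/logstash/bin/logstash-keystore list
```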

Can you please help here?
Thanks a lot, Andreas

Can you provide your output config file?

Sure, sorry, I thought it was already in there:

# encrypted output with TLS and authorization
output
{
	elasticsearch
	{
		hosts         => ["${ES_HOSTS}"]
		ssl           => "${USE_ES_SSL}"
		cacert        => "${ES_CA_CERT_PATH}"
		ssl_certificate_verification =>	"${USE_ES_OUTPUT_SSL_CERT_VERIFICATION}"

		# credentials are fetched from environment or logstash-keystore

		user		=> "${LOGSTASH_USER}"
		password	=> "${LOGSTASH_PASSWORD}"

		index		=> "%{[@metadata][indexName]}"
	}
}

Is this problem related to the output config of my pipeline? I thought this issue comes from the monitoring part, not from the ingest side.

What's the value of $ES_HOSTS?

It looks like a 403 Forbidden error on the URL.

Both ${ES_HOSTS} and ${XPACK_MONITORING_ELASTICSEARCH_HOSTS} have the value "es-lb.local:9200".

I also tried it with "https://es-lb.local:9200" but it makes no difference.

So in case it is a permission issue with the users, here are the settings:

The monitoring user is unchanged, used exactly as shipped with Elastic.

monitoring user:

{
  "remote_monitoring_user" : {
    "username" : "remote_monitoring_user",
    "roles" : [
      "remote_monitoring_collector",
      "remote_monitoring_agent"
    ],
    "full_name" : null,
    "email" : null,
    "metadata" : {
      "_reserved" : true
    },
    "enabled" : true
  }
}

role remote_monitoring_collector:

{
  "remote_monitoring_collector" : {
    "cluster" : [
      "monitor"
    ],
    "indices" : [
      {
        "names" : [
          "*"
        ],
        "privileges" : [
          "monitor"
        ],
        "allow_restricted_indices" : true
      },
      {
        "names" : [
          ".kibana*"
        ],
        "privileges" : [
          "read"
        ],
        "allow_restricted_indices" : false
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : {
      "_reserved" : true
    },
    "transient_metadata" : {
      "enabled" : true
    }
  }
}

role remote_monitoring_agent

{
  "remote_monitoring_agent" : {
    "cluster" : [
      "manage_index_templates",
      "manage_ingest_pipelines",
      "monitor",
      "cluster:monitor/xpack/watcher/watch/get",
      "cluster:admin/xpack/watcher/watch/put",
      "cluster:admin/xpack/watcher/watch/delete"
    ],
    "indices" : [
      {
        "names" : [
          ".monitoring-*"
        ],
        "privileges" : [
          "all"
        ],
        "allow_restricted_indices" : false
      },
      {
        "names" : [
          "metricbeat-*"
        ],
        "privileges" : [
          "index",
          "create_index"
        ],
        "allow_restricted_indices" : false
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : {
      "_reserved" : true
    },
    "transient_metadata" : {
      "enabled" : true
    }
  }
}

user logstash_ingest (used for elasticsearch-output)

{
  "logstash_ingest" : {
    "username" : "logstash_ingest",
    "roles" : [
      "logstash-writer-plx",
      "logstash-writer-metricbeat"
    ],
    "full_name" : "logstash ingest process",
    "email" : null,
    "metadata" : { },
    "enabled" : true
  }
}

role logstash-writer-plx

{
  "logstash-writer-plx" : {
    "cluster" : [
      "manage_index_templates",
      "monitor",
      "manage_ilm"
    ],
    "indices" : [
      {
        "names" : [
          "plx_*"
        ],
        "privileges" : [
          "write",
          "delete",
          "create_index",
          "manage",
          "manage_ilm"
        ],
        "allow_restricted_indices" : false
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : { },
    "transient_metadata" : {
      "enabled" : true
    }
  }
}

role logstash-writer-metricbeat

{
  "logstash-writer-metricbeat" : {
    "cluster" : [
      "manage_index_templates",
      "monitor",
      "manage_ilm"
    ],
    "indices" : [
      {
        "names" : [
          "metricbeat-*"
        ],
        "privileges" : [
          "write",
          "delete",
          "create_index",
          "manage",
          "manage_ilm"
        ],
        "allow_restricted_indices" : false
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : { },
    "transient_metadata" : {
      "enabled" : true
    }
  }
}

I set Logstash to debug mode and now see the following details:

[2020-01-15T11:54:35,448][ERROR][logstash.outputs.elasticsearch][.monitoring-logstash] Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"https://es-lb.local:9200/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", :body=>"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"action [cluster:admin/xpack/monitoring/bulk] is unauthorized for user [remote_monitoring_user]\"}],\"type\":\"security_exception\",\"reason\":\"action [cluster:admin/xpack/monitoring/bulk] is unauthorized for user [remote_monitoring_user]\"},\"status\":403}"}

But I cannot find any difference from another dev system where I also use this monitoring user without issues.
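The failing action from the debug log can also be checked directly with the Has Privileges API, authenticating as the monitoring user. Host and CA path are the ones used elsewhere in this thread; this is a sketch, not output from the original system:

```shell
# Ask Elasticsearch whether this user may run the monitoring bulk action.
curl --cacert /etc/logstash/config_sets/plx/certs/ca/elasticsearch/ca.crt \
     -u remote_monitoring_user \
     -H 'Content-Type: application/json' \
     "https://es-lb.local:9200/_security/user/_has_privileges?pretty" \
     -d '{"cluster": ["cluster:admin/xpack/monitoring/bulk"]}'
```

If the response reports `false` for that cluster privilege, the role simply lacks it, which would match the 403 above.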

Are you sure that you are authorized to reach the server ?
Can you please provide the output of

curl -k https://es-lb.local:9200

I found the issue:

The login for the user succeeds, which is why we get a 403 instead of a 401. But the default user has no permission for the action cluster:admin/xpack/monitoring/bulk.

Since I cannot change anything in the reserved role remote_monitoring_agent or the reserved user remote_monitoring_user, I cloned both and added the needed cluster privilege to the role via Dev Tools:

PUT /_security/role/remote_monitoring_agent1
{
  "cluster" : [
    "manage_index_templates",
    "manage_ingest_pipelines",
    "monitor",
    "cluster:monitor/xpack/watcher/watch/get",
    "cluster:admin/xpack/watcher/watch/put",
    "cluster:admin/xpack/watcher/watch/delete",
    "cluster:admin/xpack/monitoring/bulk"
  ],
  "indices" : [
    {
      "names" : [ ".monitoring-*" ],
      "privileges" : [ "all" ],
      "allow_restricted_indices" : false
    },
    {
      "names" : [ "metricbeat-*" ],
      "privileges" : [ "index", "create_index" ],
      "allow_restricted_indices" : false
    }
  ]
}

PUT /_security/user/remote_monitoring_user1
{
  "roles" : [
    "remote_monitoring_collector",
    "remote_monitoring_agent1"
  ],
  "full_name" : null,
  "email" : null,
  "password" : "monitoring1",
  "enabled" : true
}

=> voilà: Logstash can now ship its metrics via the monitoring bulk API.
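For the clone to be used, the monitoring credentials in logstash.yml (or the environment variables they reference) also have to point at the new user, along these lines (values illustrative, matching the clone created above):

```yaml
xpack.monitoring.elasticsearch.username: remote_monitoring_user1
xpack.monitoring.elasticsearch.password: monitoring1
```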

But I don't understand why I cannot use the default user for this. When Elasticsearch and Logstash run the same version, the built-in users should just work; otherwise I don't know why they are shipped.

Thanks a lot for your help.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.