Hello,

I am running APM version 7.6.2.

APM successfully brings the log info into Kibana, as the image below shows.

Nevertheless, I receive the following error when navigating to the APM settings in Kibana:

The critical parts of my APM Server yml file are as follows:
kibana:
  # For APM Agent configuration in Kibana, enabled must be true.
  enabled: true

  # Scheme and port can be left out and will be set to the default (`http` and `5601`).
  # In case you specify an additional path, the scheme is required: `http://localhost:5601/path`.
  # IPv6 addresses should always be defined as: `https://[2001:db8::1]:5601`.
  host: "http://sag-tst-es-001.sag.services:5601"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "myuser"
  password: "myuser"

  # Optional HTTP path.
  #path: ""

  # Enable custom SSL settings. Set to false to ignore custom SSL settings for secure communication.
  #ssl.enabled: true

  # Optional SSL configuration options. SSL is off by default, change the `protocol` option if you want to enable `https`.
  # Configure SSL verification mode. If `none` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `full`.
  #ssl.verification_mode: full

  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]

  # List of root certificates for HTTPS server verifications.
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication.
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

  # Optional passphrase for decrypting the Certificate Key.
  # It is recommended to use the provided keystore instead of entering the passphrase in plain text.
  #ssl.key_passphrase: ''

  # Configure cipher suites to be used for SSL connections.
  #ssl.cipher_suites: []

  # Configure curve types for ECDHE based cipher suites.
  #ssl.curve_types: []
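For completeness, this is how I understand the kibana block is meant to nest in the full apm-server.yml. My assumption, based on the stock 7.x config, is that it sits under the top-level apm-server: key; the host and credentials below simply mirror my own values:

apm-server:
  # ... listener, RUM, and other apm-server settings omitted ...
  kibana:
    # Must be true so APM Agent configuration can be managed from Kibana.
    enabled: true
    host: "http://sag-tst-es-001.sag.services:5601"
    username: "myuser"
    password: "myuser"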
#================================= General =================================

# Data is buffered in a memory queue before it is published to the configured output.
# The memory queue will present all available events (up to the outputs
# bulk_max_size) to the output, the moment the output is ready to serve
# another batch of events.
queue:
  # Queue type by name (default 'mem').
  mem:
    # Max number of events the queue can buffer.
    events: 12288

    # Hints the minimum number of events stored in the queue,
    # before providing a batch of events to the outputs.
    # The default value is set to 2048.
    # A value of 0 ensures events are immediately available
    # to be sent to the outputs.
    #flush.min_events: 2048

    # Maximum duration after which events are available to the outputs,
    # if the number of events stored in the queue is < `flush.min_events`.
    #flush.timeout: 1s

# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
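For context, this is my reading of the effective queue settings, with the flush defaults from the comments above written out explicitly (whether these defaults suit my load, I am not sure):

queue:
  mem:
    # Buffer up to 12288 events in memory before publishing.
    events: 12288
    # Hand a batch to the output once 2048 events are queued or the timeout expires
    # (these two values are the documented defaults).
    flush.min_events: 2048
    flush.timeout: 1s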
#-------------------------- Elasticsearch output --------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (`http` and `9200`).
  # In case you specify an additional path, the scheme is required: `http://localhost:9200/path`.
  # IPv6 addresses should always be defined as: `https://[2001:db8::1]:9200`.
  hosts: [ "sag-tst-es-001.sag.services:9200", "sag-tst-es-002.sag.services:9200" ]

  # Boolean flag to enable or disable the output module.
  #enabled: true

  # Set gzip compression level.
  #compression_level: 0

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "myuser"
  password: "myuser"

  # Dictionary of HTTP parameters to pass within the url with index operations.
  #parameters:
    #param1: value1
    #param2: value2

  # Number of workers per Elasticsearch host.
  worker: 2
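As a side note, I am thinking about moving the password out of the file and into the secrets keystore. If I understand the beats-style keystore correctly (populated with something like apm-server keystore add ES_PWD), the output block would then look roughly like this:

output.elasticsearch:
  hosts: [ "sag-tst-es-001.sag.services:9200", "sag-tst-es-002.sag.services:9200" ]
  username: "myuser"
  # Password resolved from the APM Server secrets keystore instead of plain text
  # (ES_PWD is a key name I chose; it is not in my current config).
  password: "${ES_PWD}"
  worker: 2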
#============================= X-pack Monitoring =============================
# APM server can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires x-pack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
monitoring.enabled: true

# Most settings from the Elasticsearch output are accepted here as well.
# Note that these settings should be configured to point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration. This means that if you have the Elasticsearch output configured,
# you can simply uncomment the following line.
monitoring.elasticsearch:
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "apm_system"
  #password: "apm_system"

  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (`http` and `9200`).
  # In case you specify an additional path, the scheme is required: `http://localhost:9200/path`.
  # IPv6 addresses should always be defined as: `https://[2001:db8::1]:9200`.
#hosts: ["localhost:9200"]#============================= X-pack Monitoring =============================
I should also add that the user myuser, which I use in the yml file above, has superuser rights, and the user I log in to Kibana with also has superuser rights.

What do I need to change in order to fix the issue?