Kibana ILM not working correctly (Elasticsearch logs INFO message)

Hi,
I have a 3-node Elasticsearch cluster (each node on a different RHEL 8 VM: node 1, 2, 3); Kibana is installed on one of these nodes (node 2).

Here is the config for one node (the others are the same, except for the names):

# ---------------------------------- Cluster --------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: es-cluster
# ---------------------------------- END Cluster ----------------------------
 
# ------------------------------------ Node ---------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
node.roles: [ data, master, data_content, data_hot, data_warm, data_cold, data_frozen ]
# ------------------------------------ END Node -----------------------------
 
bootstrap.memory_lock: true
# ----------------------------------- Paths ---------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
# --------------------------------- END Paths -------------------------------
 
# ---------------------------------- Network --------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: nameOfTheMachine
http.port: 9200
# -------------------------------- END Network ------------------------------
 
# --------------------------------- Discovery -------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["IP_Node_1", "IP_Node_2", "IP_Node_3"]
discovery.type: multi-node
# ------------------------------- END Discovery -----------------------------
 
#----------------------- BEGIN SECURITY CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 02-11-2022 10:14:20
#
# ---------------------------------------------------------------------------
 
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
 
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
xpack.monitoring.collection.enabled: true
#----------------------- END SECURITY CONFIGURATION -------------------------

The certs were generated once for all nodes (on one of the nodes I created a CA with the default Elasticsearch tooling, then created http.p12 and elastic-certificates.p12), then copied them to all the other nodes and added the certificate passwords to each node's keystore with these commands:

./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
./bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
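
For reference, the certificate generation on the first node looked roughly like this - a sketch using the standard elasticsearch-certutil tool; the output paths are illustrative, not necessarily the exact ones I used:

# Create a CA, then a transport certificate signed by it
./bin/elasticsearch-certutil ca --out certs/elastic-stack-ca.p12
./bin/elasticsearch-certutil cert --ca certs/elastic-stack-ca.p12 --out certs/elastic-certificates.p12
# Interactive tool that generates http.p12 for the HTTP layer (and an elasticsearch-ca.pem for clients)
./bin/elasticsearch-certutil http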

After this, I can curl all nodes (check cluster health, etc.) and everything returns a correct response.
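
For example, something like this (the CA PEM path is illustrative, and any valid user works, e.g. elastic):

curl -X GET "https://node1:9200/_cluster/health?pretty" --cacert certs/elasticsearch-ca.pem -u elastic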

Next I installed Kibana on node 2 with this config:

server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "IP_node2"

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
server.publicBaseUrl: "https://IP_node2:5601"

# The Kibana server's name. This is used for display purposes.
server.name: "kibana"
# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.keystore.path: /etc/kibana/certs/http.p12
server.ssl.keystore.password: "PassForCerthttp.p12"

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://IP_node1:9200", "https://IP_node2:9200", "https://IP_node3:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "PassForKibanaSystem"

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/elasticsearch-ca.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
elasticsearch.ssl.verificationMode: "certificate"


# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file


# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

monitoring.ui.ccs.enabled: false 
xpack.encryptedSavedObjects.encryptionKey: keyGenerated
xpack.reporting.encryptionKey: keyGenerated
xpack.security.encryptionKey: keyGenerated
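
As an aside, those three encryption keys can be generated with Kibana's bundled tool rather than by hand (run from the Kibana install directory; the keys above are placeholders):

./bin/kibana-encryption-keys generate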

Here are the index settings:

PUT _ilm/policy/name_api_policy
{
   "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_age": "1h"
          }
        }
      },
      "delete": {
        "min_age": "29d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}


PUT _template/name_api_template
{
  "index_patterns": ["name-api-*"], 
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 2,
    "index.lifecycle.name": "name_api_policy", 
    "index.lifecycle.rollover_alias": "name-api" 
  }
}
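
(Side note: _template is the legacy template API. On ES 7.8+ the same thing can be written as a composable index template - a sketch equivalent to the above:)

PUT _index_template/name_api_template
{
  "index_patterns": ["name-api-*"],
  "template": {
    "settings": {
      "number_of_shards": 2,
      "number_of_replicas": 2,
      "index.lifecycle.name": "name_api_policy",
      "index.lifecycle.rollover_alias": "name-api"
    }
  }
}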

PUT name-api-000001
{
  "mappings": {
    ....
  },
  "settings": {
    "index": {
      "number_of_shards": "2",
      "number_of_replicas": "2"
    }
  }
}


POST _aliases
{
  "actions": [
    { "add": { "index": "name-api-000001", "alias": "name-api", "is_write_index": true } }
  ]
}
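
(For what it's worth, rollover readiness can also be checked by hand with a dry run against the write alias - a sketch using the same names and conditions as above:)

POST name-api/_rollover?dry_run=true
{
  "conditions": {
    "max_age": "1h"
  }
}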

I can log in to Kibana, check ILM, indices, use Dev Tools (send requests), Stack Monitoring, etc. Everything looks correct (in the Kibana logs and the cluster logs I don't see any warnings or errors),
but when ILM reaches the rollover conditions, the Elasticsearch logs on all nodes show:
"Authentication of [kibana_system] was terminated by realm [reserved] - failed to authenticate user [kibana_system]", which is strange, because I can curl any of the Elasticsearch nodes with the kibana_system credentials, like:

curl -X GET "https://node1:9200/_cluster/health?wait_for_status=yellow&timeout=50s&pretty" --cacert certificates/elasticsearch-ca.pem -u kibana_system

(the same works against node2 and node3)
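
A more direct way to verify the credentials themselves is the authenticate endpoint (same CA file and user as above):

curl -X GET "https://node1:9200/_security/_authenticate?pretty" --cacert certificates/elasticsearch-ca.pem -u kibana_system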

This INFO message says the credentials are incorrect, but I can make requests with those credentials (like above).
The message only shows up when the ILM-managed index reaches the rollover state. When I request:

GET /name-api-000001/_ilm/explain

I get the following response:

{
  "indices": {
    "name-api-000001": {
      "index": "name-api-000001",
      "managed": true,
      "policy": "name_api_policy",
      "index_creation_date_millis": 1672753491558,
      "time_since_index_creation": "17.13h",
      "lifecycle_date_millis": 1672753491558,
      "age": "17.13h",
      "phase": "hot",
      "phase_time_millis": 1672753492170,
      "action": "rollover",
      "action_time_millis": 1672753492374,
      "step": "check-rollover-ready",
      "step_time_millis": 1672753492374,
      "phase_execution": {
        "policy": "name_api_policy",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_age": "1h"
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1672753363104
      }
    }
  }
}
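
(Checking whether the index contains any documents at all is a quick diagnostic here - this turns out to matter, see the solution below:)

GET name-api-000001/_count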

Can someone help? Why does this INFO message show up when the credentials are OK and I can do everything through the browser (from my desktop, not on the nodes themselves)?
Why doesn't the index roll over and create the next one (000002)?
The index has the flag: "is_write_index": true

Problem solved - this topic can be deleted.

Actually, @mario2, could you post your solution? It might help another community member.


@stephenb I can see that ILM only rolls over an index that is not empty - so when I created the index and added some data, it started working as I expected. That's the solution :slight_smile:
By the way, can you tell me whether it is possible for ILM to work with an empty index?
Also, does ILM need the data tier roles on the nodes - would it not work correctly otherwise?

Logically, the next index won't be created unless at least one event has been ingested and stored in the current one. So no, ILM won't roll over an empty index by definition.
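
(If I remember correctly, on Elasticsearch 8.4+ this behaviour is controlled by a dynamic cluster setting, so empty-index rollover can be re-enabled explicitly - worth double-checking the docs for your version:)

PUT _cluster/settings
{
  "persistent": {
    "indices.lifecycle.rollover.only_if_has_documents": false
  }
}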


Thanks for the reply.
