How do ECE Cloud UI Hot-Warm and Index Management Hot-Warm join up

I've been working through the Elastic documentation, starting here: https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configuring-ece-tag-allocators.html

The guide walks through setting up Hot-Warm allocators and allocator templates.

I've configured a set of allocators, via templates, to be hot, and a set for warm, as per docs.

The intention is that new data/docs/indexes are written to the hot allocators/hosts, and a lifecycle policy will cause the movement of these indexes over to warm allocators/hosts, when certain conditions are met (e.g. older than n-days, or more than n-docs).

After completing the section in the Cloud UI, the guide moves on to configuring the lifecycle policies in Kibana/Index Management.

In Kibana, I've created a lifecycle policy and associated it with a test index I've created, but I can't find anywhere within the index lifecycle settings, or the documentation, any association with the configuration I made for my allocators and allocator templates in the Cloud UI.

Furthermore, I can see the warning:
No node attributes configured in elasticsearch.yml
You can't control shard allocation without node attributes.

and looking into this, it is suggested that I need to add node configuration to elasticsearch.yml - but this additional config (from what I can make out) also doesn't appear to link back to the config I created in the Cloud UI.

I've searched for similar error messages which led me here: https://stackoverflow.com/questions/60061442/elastic-search-no-node-attributes-configured-in-elasticsearch-yml-you-cant-co

This Stack Overflow post suggests additional configuration which looks separate from, but similar in intent to, what I have already configured in the Cloud UI, and which is not mentioned in the documentation.
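For concreteness, what's being suggested there amounts to adding a custom node attribute in elasticsearch.yml, along these lines (the attribute name and value here are arbitrary examples):

```yaml
# elasticsearch.yml - tag this node with a custom attribute
# (name/value are arbitrary, but must match what the ILM policy requires)
node.attr.node_type: hot
```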

How does this all tie up, what are the missing bits?

Are two separate, possibly differing versions, of behaviours getting mixed up?

This is the page that is intended to tie the two together: https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-templates-index-management.html

ie:

  • You tag the allocators
  • You map instance configs to the allocator tags
  • You build a deployment template that uses those instance configs AND ...
  • ... includes the node_type: hot, warm, or cold under "Node Attributes"
    • (this last bit is what ties together ECE allocation rules and ILM rules; ETA: note the key and value are arbitrary but need to match the ILM policy's require, as shown below)

(to be clear, the node_type: warm kv pairs you enter into the "Node Attributes" section of the Deployment Template UI need to match the ILM policy's require section, eg:

//GET _ilm/policy
{
  "my_ilm_policy": {
    "policy": {
      "phases": {
//...
        "warm": {
          "min_age": "1d",
          "actions": {
            "allocate": {
              "include": {},
              "exclude": {},
              "require": {
                "node_type": "warm"
              }
            },
//...

Hi Alex, thanks for the response. I'm quite new to elastic, so you might need to bear with me.
I'm trying to add hot-warm ILM to an existing platform. I've walked through the documents (link in my initial post), but had skipped the deployment template section, as this is an existing platform and I assumed the instance configuration superseded the deployment template, since we already have existing deployments. I've walked through the link you posted and created a deployment template, but as this is an existing platform, I'm feeling it is not relevant - and the doc you posted also leads to the Create Deployment page.

Am I still able to apply the allocator template configuration I've created to ILM, or will this only work with new deployments?

Thanks

What deployment template are you using currently?

If you have a hot-warm deployment template using the legacy "Index Curation" (the ECE precursor to ILM), then the best thing to do is wait a few days for 2.5.0 to come out - since that has a single-button migration

If not, what templates does your existing deployment have?

Hi Alex,

Regarding our current deployments, I can't find anywhere that tells me specifically, but after a quick comparison it looks like the existing deployments were probably built using the 'Default' deployment template.

There is no immediate urgency, so we can wait for 2.5.0 to migrate the data to another deployment.

My immediate task was understanding the configuration process. I created a new deployment template (as per your link), and a new deployment against this. I was able to then create a new index etc. and apply ILM in Kibana, without issue. So thanks for this!

But, I'm still struggling to understand how the "Instance Configuration" relates to the "Deployment Configuration".

That is, the allocator query
i.e. (hot:true AND SSD:true AND CPUs:4 AND instanceType:r5.xlarge)
appears to be a separate function to the ILM config for a deployment template
i.e. (node_type: hot)

Are these separate features?

I'm also looking for a way to see if my indexes have moved across from hot to warm. I have an ILM policy on my test index alias, configured with max docs: 5, and have written 5 docs to an associated index, and another 5 to a new associated index (test-ilm-000001, test-ilm-000002 - alias: test-ilm). I can't find anywhere in the UI to see whether index test-ilm-000002 is on hot and test-ilm-000001 has been moved to warm.

What's the best way to confirm the index locations, is there anywhere on the UI, or an API call to confirm?

Thanks

OK, so default templates don't have any concept of hot/warm nodes, so you can't use ILM to move data around (only to shrink etc)

The easiest way to migrate from default to hot-warm is just to snapshot, and restore from the snapshot - though I believe support will walk people through the sneaky migration process on request.

(Note: I don't think 2.5 includes "one click" migration from "default" to "hot warm", only from "old school hot warm" to "ILM hot warm")
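Roughly, the snapshot/restore route looks like this (the repository and snapshot names are illustrative, and the repository has to be registered on both deployments):

```
//On the old "default" deployment: snapshot the indices
PUT _snapshot/my_repo/migration_snap?wait_for_completion=true

//On the new hot-warm deployment: restore them
POST _snapshot/my_repo/migration_snap/_restore
{
  "indices": "test-ilm-*"
}
```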

But, I'm still struggling to understand how the "Instance Configuration" relates to the "Deployment Configuration".

That is, the allocator query
i.e. (hot:true AND SSD:true AND CPUs:4 AND instanceType:r5.xlarge)
appears to be a separate function to the ILM config for a deployment template
i.e. (node_type: hot)

The link is

  • "ILM node attributes" links an ILM policy to an entry in the deployment template
  • Each entry in a deployment template has a specified instance configuration
  • Each instance configuration maps (via those allocator queries) to the (eg) hot vs warm hardware

Put another way ... if you look under the advanced plan for an ILM deployment you'll see an array called cluster_topology, which is the list of entries specified for the deployment template (with the amount of RAM allocated), and each entry has two relevant attributes:

  • node_attributes which maps to the ILM policy (via the require)
  • instance_configuration_id which maps to the allocator query (via the instance config)

So that gives you a mapping of three things - the ILM policy, the amount of RAM for indices under that ILM policy, and which allocators that RAM lives on
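So, schematically, one such entry might look like the following (the IDs and sizes here are made up, and the exact JSON layout can vary by ECE version):

```
//Advanced plan (trimmed) - one cluster_topology entry per template entry
{
  "cluster_topology": [
    {
      "instance_configuration_id": "hot.data",
      "size": { "value": 4096, "resource": "memory" },
      "node_attributes": { "node_type": "hot" }
    },
//...
```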

Hi Alex,

Thanks for the explanation, sorry, but I'm not able to find where the cluster_topology you mention is - is this from an API call?

Also, would i expect to see a division in the response to this call:
GET /_cat/segments?v

for my index today/now - or maybe tomorrow - I can't see any separation atm - it looks like the warm node is being used for both primary and replica shards for both my test indexes?

(screenshot attached)

Scroll to bottom of page:

(screenshot attached)

for my index today/now - or maybe tomorrow - I can't see any separation atm - it looks like the warm node is being used for both primary and replica shards for both my test indexes?

For a given index, you can look at MY_INDEX/_settings to see what node attributes ILM believes it should have, and then work backwards from there. There's an (ES) "ILM explain" API that can be helpful in understanding what is going on at the index level
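i.e. something like this (using the index names from earlier in the thread):

```
//What node attributes does ILM think this index requires?
GET test-ilm-000001/_settings
//look for "index.routing.allocation.require.node_type" in the response

//What ILM phase/step is the index in?
GET test-ilm-000001/_ilm/explain

//Which nodes actually hold the shards right now?
GET _cat/shards/test-ilm-*?v
```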
