Rollover ELK

Hello. Logs from Docker containers on my server are being shipped to Elasticsearch, which creates indexes for them.

Here is my filebeat.yml

logging.to_files: false
logging.to_syslog: false
logging.to_stderr: true
#logging.level: debug

setup.template:
  enabled: false
  overwrite: false
  # name: 'v.popov-atms_nginx_1'
  # pattern: 'v.popov-atms_nginx_1'

setup.ilm:
  enabled: false
  # rollover_alias: "v.popov"
  # policy_file: "/usr/share/filebeat/ilm_policy2.json"
  # policy_name: "v.popov"
  # pattern: "{now/M{yyyy.MM}}-000001"

output.elasticsearch:
  hosts: ["16.0.1.160:9200"]
  username: "elastic"
  password: "changeme"
  index: "v.popov-%{[container.name]}-%{+MM}-000001"

setup.kibana:
  host: "http://16.0.1.160:5601"
  username: "elastic"
  password: "changeme"

filebeat.inputs:
  - type: container                               # Source type - Docker containers.
    paths:
      - "/var/lib/docker/containers/*/*.log"
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"     # Path to the Docker socket.
      - decode_json_fields:
          fields: ["message"]
          target: "json"
          max_depth: 10
      - drop_event:
          when:
            not:
              contains:
                container.name: "nginx"
      - drop_fields:
          fields:
            - container.labels.com_docker_compose_config-hash
            - container.labels.com_docker_compose_container-number

  - type: container                               # The second data source is Docker containers.
    paths:
      - "/var/lib/docker/containers/*/*.log"      # Paths to container logs.
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"     # Path to the Docker socket.
      - drop_event:
          when:
            not:
              or:
                - contains:
                    container.name: "atms-service"
                - contains:
                    container.name: "fuel-card-service"
                - contains:
                    container.name: "pulse-api-gateway"

As you can see, indexes for nginx, fuel-card-service and the other containers should be created.

Here are my Elasticsearch settings, applied via Dev Tools:

PUT /_template/v.popov-atms_nginx_1
{
  "index_patterns": ["v.popov-atms_nginx*"],
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 0,
    "index.lifecycle.name": "nginx_1",
    "index.lifecycle.rollover_alias": "v.popov-nginx"
  },
  "aliases": {
    "v.popov-nginx": {
      "is_write_index": true
    }
  }
}

PUT _ilm/policy/nginx_1
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "1GB",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "2d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

PUT /v.popov-atms_nginx_1*/_settings
{
  "index.lifecycle.name": "nginx_1"
}

PUT /_cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "5m"
  }
}

POST /_aliases
{
    "actions" : [
        { "add" : { "index" : "v.popov-atms_nginx_1-05-000001", "alias" : "v.popov-nginx" } }
    ]
}

GET _alias/v.popov-nginx
{
  "v.popov-atms_nginx_1-05-000001" : {
    "aliases" : {
      "v.popov-nginx" : { }
    }
  }
}

I get the error: illegal_argument_exception: setting [index.lifecycle.rollover_alias] for index [v.popov-atms_nginx_1-05-000001] is empty or not defined

HELP!!!)

Hi @v.popov

What versions of Filebeat / Elasticsearch are you using?

output.elasticsearch:
  hosts: ["16.0.1.160:9200"]
  username: "elastic"
  password: "changeme"
  index: "v.popov-%{[container.name]}-%{+MM}-000001"  << Not correct. This needs to point to the write alias, and putting the container name in it does not match the rollover alias you configured.

This would be something like

index: "v.popov-%{[container.name]}"

But then you will need a write alias for each container.
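
For a single container, a minimal sketch of what the output could look like (it reuses the v.popov-nginx alias from your template and assumes that alias already exists with is_write_index: true):

output.elasticsearch:
  hosts: ["16.0.1.160:9200"]
  username: "elastic"
  password: "changeme"
  # Write through the rollover (write) alias instead of a concrete index name,
  # so ILM can move the write target when the index rolls over.
  # setup.ilm.enabled: false (as in your config) is needed, otherwise Filebeat's
  # own ILM setup overrides this index setting.
  index: "v.popov-nginx"
  # Note: with a custom index, Filebeat 7.x may also ask for
  # setup.template.name / setup.template.pattern to be configured.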

In short, you have a number of issues:

  • Not correctly understanding the rollover alias
  • Trying to put the container name into the index name
  • You would need a write alias for every container, which is doable in the output if you have 1 or 2, but it does not scale

The correct syntax would be something like:

PUT v.popov-atms_nginx_2024.06.03-000001/
{
  "aliases": {
    "v.popov-nginx": {
      "is_write_index": true
    }
  }
}
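
If you would rather have Elasticsearch fill in the date, the same bootstrap request can use a date-math index name (standard date-math syntax, URI-encoded because Dev Tools requires it; it resolves to something like v.popov-atms_nginx_2024.06.03-000001):

# Equivalent to: PUT /<v.popov-atms_nginx_{now/d}-000001>
PUT /%3Cv.popov-atms_nginx_%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "v.popov-nginx": {
      "is_write_index": true
    }
  }
}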

So what I usually tell people new to Elastic and Beats etc...

Is to start with the defaults...
Do not try all your custom naming until you have it working.
Do not try fancy naming.
There is no reason to put each container in its own index.
It should all work out of the box with just the default settings.

THEN we can talk about all your custom naming conventions

The container names and everything else will be in the documents, and it will be easy to filter, sort, etc.
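
For example, with the default filebeat-* indices, isolating one container's logs is just a query on the standard container.name field (the index pattern and field name are the Filebeat/ECS defaults; the wildcard value here is illustrative):

GET filebeat-*/_search
{
  "query": {
    "wildcard": {
      "container.name": "*nginx*"
    }
  }
}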

That is my recommendation

Hello
Thank you for your recommendations and advice, but I have some questions:

I'm using Filebeat version 7.9 and I am not using Logstash.

  • "Trying to put the container name into the index name." But is there a way to separate the data flow per container? Let me explain what I want: the logs of several containers go into Kibana, and each container should be kept separate so that I can work with it on its own (for example, delete its logs or configure its index individually). Can all of that be done when everything is in one index? If I do the setup via filebeat.yml as-is, all containers are placed into one index and I cannot separate them. This relates to your comment "No reason to put each container".

  • "You would need a write alias for every container, which is hard to do in the output if you have 1 or 2, but does not scale." I didn't quite understand this remark.

  • "It should all work out of the box with just the default settings." Yes, it works, but as I described above, the logs from all the service containers end up in one index. How can I manage the logs of different services through one index?

I need all of the above in order to manage the logs of each container (index) separately and to control disk space in Elasticsearch (Kibana), since right now data has to be deleted manually for each container for certain periods.

Hi @v.popov

If you want to separately manage the logs of every container in its own set of indices, then you will need to create a template for each container, because an index template can only have one write alias.

And in your case you want the write alias to contain your container name...
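
A sketch of what that means for one more container, reusing the fuel-card-service name from your Filebeat config (the template, alias and index names here are illustrative, and the shard/replica counts are copied from your nginx template; you could also create a separate ILM policy per container instead of reusing nginx_1):

PUT /_template/v.popov-fuel-card-service
{
  "index_patterns": ["v.popov-fuel-card-service*"],
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 0,
    "index.lifecycle.name": "nginx_1",
    "index.lifecycle.rollover_alias": "v.popov-fuel-card-service"
  }
}

# Bootstrap the first index and attach the write alias to it.
# Defining the alias here (rather than in the template) avoids the alias being
# re-added with is_write_index: true on every index that rollover creates.
PUT /v.popov-fuel-card-service-000001
{
  "aliases": {
    "v.popov-fuel-card-service": {
      "is_write_index": true
    }
  }
}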

I would still say the vast majority of our customers and users put hundreds of containers into the same index. Even thousands.

You can absolutely do what you want, but you may run into trouble as you scale up the number of containers.

7.9 is very, very old. You should really think about upgrading.

The normal approach with newer versions is data streams, where the same type of data goes into a common data stream. So let's say you have 10 containers running nginx: they would all go into a single data stream, not 10 separate indices with the exact same type of data.
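
In 8.x that is roughly the following (a sketch with illustrative names; with Elastic Agent the templates and data streams are created for you, and "logs" here stands for whichever ILM policy you want the backing indices to use):

PUT _index_template/logs-nginx
{
  "index_patterns": ["logs-nginx-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.lifecycle.name": "logs"
    }
  }
}

# The data stream itself (e.g. logs-nginx-default) is created automatically
# the first time a document is indexed into it.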

Got it. I tried to update to 8.10 and other versions above 8, but I ran into configuration errors and another error (I don't remember which one), and I didn't have time to deal with it. I had to stay on a version as close to 8 as possible, and if I update beyond 7.9 now, something in the services may break, if you know what I mean.

In general it is all quite complicated, but I agree: I would like to use everything new, in this case version 8 and above.

Well, the fact that you want to manage every container separately is what is making it complicated :slight_smile: It is generally not needed and will cause problems at scale.

If you were using 8.x with the Agent and the defaults, it would all be pretty simple. Then you would use ILM to manage the index lifecycle.

I need all of the above in order to manage the logs of each container (index) separately and to control disk space in Elasticsearch (Kibana), since right now data has to be deleted manually for each container for certain periods.

There is probably a better solution using ILM...
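
For example, a single policy along these lines (the policy name and thresholds are illustrative) rolls indices over by size or age and deletes them after a retention period, which replaces the manual per-container cleanup:

PUT _ilm/policy/container-logs
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "5gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "14d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}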

Hello
Okay, thanks for the advice. Can I ask another question?

I managed to get a rollover working on my index v.popov-atms_nginx_1-06-000001; a day later, v.popov-atms_nginx_1-06-000002 appears.

1 - Why don't logs go to v.popov-atms_nginx_1-06-000002? They still continue to arrive in v.popov-atms_nginx_1-06-000001.
2 - How can I make the rollover happen at 00:00, rather than 24 hours after the index is created?

Thank you in advance

Official Elastic Search 8.14 + Docker 4.30 Repo via JARaaS Hybrid RAG - Documentation - code current 6/19/2024 [AI Generated]

Note: Sources at the end of the response

It seems like you're trying to configure Elasticsearch to manage index rollover but are encountering issues with the index lifecycle management (ILM) settings. The error you are seeing, illegal_argument_exception: setting [index.lifecycle.rollover_alias] for index [v.popov-atms_nginx_1-05-000001] is empty or not defined, indicates that the rollover alias is not correctly set for the index.

Here is a step-by-step guide to troubleshoot and resolve the issue:

Step-by-Step Guide

  1. Check ILM Policy and Template Configuration:
    Ensure your ILM policy and template configurations are set correctly for rollover. Your configurations seem mostly correct but require validation.

  2. Define ILM Policy:
    Make sure the ILM policy nginx_1 is correctly defined:

    PUT _ilm/policy/nginx_1
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "1GB",
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "2d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
    
  3. Define Index Template:
    Ensure the index template v.popov-atms_nginx_1 points to the correct rollover alias:

    PUT /_template/v.popov-atms_nginx_1
    {
      "index_patterns": ["v.popov-atms_nginx*"],
      "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 0,
        "index.lifecycle.name": "nginx_1",
        "index.lifecycle.rollover_alias": "v.popov-nginx"
      },
      "aliases": {
        "v.popov-nginx": {
          "is_write_index": true
        }
      }
    }
    
  4. Assign ILM Policy to Existing Index:
    If the indices already exist, assign the ILM policy to the existing indices:

    PUT /v.popov-atms_nginx_1*/_settings
    {
      "index.lifecycle.name": "nginx_1",
      "index.lifecycle.rollover_alias": "v.popov-nginx"
    }
    
  5. Cluster Settings:
    Ensure your cluster settings for ILM are correctly set:

    PUT /_cluster/settings
    {
      "persistent": {
        "indices.lifecycle.poll_interval": "5m"
      }
    }
    
  6. Validate the Alias:
    Verify that the alias is properly set and points to the index:

    POST /_aliases
    {
        "actions" : [
            { "add" : { "index" : "v.popov-atms_nginx_1-05-000001", "alias" : "v.popov-nginx" } }
        ]
    }
    
  7. Check Alias Configuration:
    Confirm the alias configuration:

    GET _alias/v.popov-nginx
    {
      "v.popov-atms_nginx_1-05-000001" : {
        "aliases" : {
          "v.popov-nginx" : { }
        }
      }
    }
    

General Tips

  • Rollover Alias: Make sure the rollover alias set in the template (index.lifecycle.rollover_alias) is the same alias that is actually attached to the current write index.
  • Order of Operations: Typically, the ILM policy and template should be configured before any index creation. If indexes already exist, apply the settings manually to these indices as shown above.

If you follow these steps and still experience issues, make sure to review Elasticsearch logs for detailed error messages that might provide further insight.
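
One useful check is ILM explain on the affected index (the index name here is taken from the error message above); it shows the policy in use, the current step, and any error or missing setting:

GET /v.popov-atms_nginx_1-05-000001/_ilm/explain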

Sources:

  • Docker Images Command Reference: docker_images.yaml (internal document)
  • Docker Platform Documentation: _index.md (internal document)
  • Elasticsearch Index Lifecycle Management: latest.yaml (internal document)