How to send ingest pipeline output to an index alias name

Hi,

I am trying to send ingest pipeline output to an index alias name, so that the index rolls over correctly.

What is the required processor, or how do we do this?

Hi @Kannappan_Somu Welcome to the community.

I think perhaps we need a little more information / clarification before we can help.

What components are you using? Filebeat? Logstash? What versions are you on?

How did you set up ILM? How did you set up the write alias?

In general, the way these work is that you configure the component (Beats or Logstash) to write to the write alias and also configure the ingest pipeline. The ingest pipeline does not normally affect the name of the index the data is written to, since the pipeline runs before the document is actually written to the index.

Tell us a bit more and share your config / information, and perhaps we can help; otherwise we don't have enough information to help.

Please format your code with the </> button

Hi @stephenb,

We are using Filebeat 7.12.1. Below is the Filebeat output configuration, which sends output through the ingest pipeline "test". Note that in the Filebeat config I am not changing the index name, so it uses the default index name filebeat-7.12.1*:

output.elasticsearch:
  hosts: ["xxxx"]
  protocol: "https"
  username: "ksenthil"
  password: "******"
  pipeline: "test"
  ssl.verification_mode: none

I have already created the index template and bootstrap index as below:

PUT _template/ksenthil
{
  "index_patterns": ["ksenthil*"],
  "settings": {
    "index": {
      "lifecycle": {
        "name": "5_mins_retention_prod",
        "rollover_alias": "ksenthil"
      },
      "routing": {
        "allocation": {
          "require": {
            "data": "hot"
          }
        }
      }
    }
  },
  "mappings": {},
  "aliases": {}
}

PUT ksenthil-000001
{
  "aliases": {
    "ksenthil": {
      "is_write_index": true  
    }
  }
}

The Elasticsearch version is 7.10, and the "test" ingest pipeline has a set processor that changes the _index metadata field to "ksenthil" (the index alias name). However, the output still lands in the default filebeat-7.12.1 index instead of the "ksenthil-000001" index behind the "ksenthil" alias.
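For reference, the pipeline is essentially just the following (a minimal sketch, assuming a single set processor on the _index metadata field as described above):

```
PUT _ingest/pipeline/test
{
  "description": "Redirect output to the rollover alias",
  "processors": [
    {
      "set": {
        "field": "_index",
        "value": "ksenthil"
      }
    }
  ]
}
```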

Is there a way we can rename the index in the ingest pipeline and write to the index through its alias name (like we do in Logstash), so that rollover happens automatically?
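For context, this is roughly how we handle it in Logstash today (a sketch using the Logstash Elasticsearch output plugin's ILM options; the host and credentials are placeholders):

```
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "ksenthil"
    password => "******"
    # Write through the rollover alias so ILM rollover works
    ilm_enabled => true
    ilm_rollover_alias => "ksenthil"
    ilm_pattern => "000001"
    ilm_policy => "5_mins_retention_prod"
  }
}
```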

Thanks, my issue has been fixed.

No, it's not fixed @alexsunny123. Can you suggest a workaround for the above?

Hi @Kannappan_Somu

Here is probably the more correct way to do it. I would not suggest doing it the way you are trying; if you are going to write to different indices with Filebeat, go all in and set it up properly.

# Sample Pipeline
PUT _ingest/pipeline/test-discuss2
{
  "description": "Test Pipeline",
  "processors": [
    {
      "set": {
        "field": "event.data",
        "value": "stephen"
      }
    }
  ]
}

# Sample template
PUT _template/stephen
{
  "index_patterns": [
    "stephen-*"
  ],
  "settings": {
    "index": {
      "lifecycle": {
        "name": "stephen",
        "rollover_alias": "stephen"
      }
    }
  },
  "mappings": {},
  "aliases": {}
}

# Sample Policy
PUT _ilm/policy/stephen
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "5gb",
            "max_age": "1d"
          }
        }
      }
    }
  }
}


# Output section filebeat.yml
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "test-discuss2"
  index : "stephen"

setup.ilm.enabled: true
setup.ilm.rollover_alias: "stephen"
setup.ilm.pattern: "000001" 
setup.template.name : "stephen"
setup.template.pattern: "stephen"

Note: if you do this, you will not even need to set up the bootstrap index; Filebeat will do it for you.

I ran

./filebeat -e

It set up the bootstrap index and everything, and ingested the data:

GET _cat/indices/s*?v

health status index          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   stephen-000001 oAMkIXECQM6qoLnnf0T1zQ   1   1         10            0     40.4kb         40.4kb


GET _cat/aliases/stephen?v
alias   index          filter routing.index routing.search is_write_index
stephen stephen-000001 -      -             -              true

And I can see the pipeline ran:

GET stephen-000001/_search

....
          },
          "event" : {
            "data" : "stephen"
          }
        }
....

And the settings look right:

GET stephen-000001

{
  "stephen-000001" : {
    "aliases" : {
      "stephen" : {
        "is_write_index" : true
      }
    },
    "mappings" : 
.......

    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "stephen",
          "rollover_alias" : "stephen"
        },
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        },
        "number_of_shards" : "1",
        "provided_name" : "<stephen-000001>",
        "creation_date" : "1621703920043",
        "number_of_replicas" : "1",
        "uuid" : "oAMkIXECQM6qoLnnf0T1zQ",
        "version" : {
          "created" : "7120199"
        }
      }
    }

Thanks @stephenb, I will try this with the Filebeat configuration. Can you also let me know how to write to different indices, based on the source data from Filebeat, each with its own ILM rollover pattern?

For example, I would need a Filebeat output configuration like the below:

All info-level log messages go to the index **ksenthil-info-000001**
All error-level log messages go to **ksenthil-error-000001**
All debug-level log messages go to **ksenthil-debug-000001**

Hmmm... a little more complex, but you can figure this out.

I think then you are going to take the ILM part out of Filebeat. You will need to set up the bootstrap indices yourself (which you seem to know how to do), then use something like below.

setup.ilm.enabled: false    # This just disables Filebeat's automatic ILM setup
setup.template.name : "stephen"
setup.template.pattern: "stephen"

Then use something like this...

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "stephen"
      when.contains:
        message: "INFO"
    - index: "stephen-warning"
      when.contains:
        message: "WARN"
    - index: "stephen-error"
      when.contains:
        message: "ERR"

BTW / FYI, I noticed you made some really small / short ILM policies... Long story short, they will not work exactly as expected, because ILM runs as a background task: if you expect rollover at exactly 5 minutes or exactly 10MB, it won't do that. It will work as expected with normal settings.

Thanks for the details @stephenb. That was a test ILM policy; I will create a much longer policy (56 days in production).
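For example, a 56-day retention policy could look something like this (a sketch; the policy name and rollover thresholds are illustrative, not from the thread):

```
PUT _ilm/policy/ksenthil_56d
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "56d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```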

I will try all these settings and let you know the outcome.

Also, I have another question: do you suggest we use ingest pipelines or Logstash pipelines? We chose ingest pipelines as they come along with Elasticsearch, whereas managing Logstash becomes an overhead.

Is there any important functionality today that we can't achieve with ingest pipelines, but only with Logstash pipelines?

Ok :slight_smile: @Kannappan_Somu That is a big question!

Yes, there are probably some things you can do in a Logstash pipeline that you cannot do in an ingest pipeline. Logstash has a lot of input plugins, and I suspect there are still some filters, etc., that ingest lacks, but I do not know the list offhand; ingest is getting investment.

It comes down to architecture, manageability, and where you want to do your processing.

I would do your research.

Good Luck!

Thank you @stephenb

Thanks @stephenb, following the above configurations worked perfectly.

So, for Filebeat with ingest pipelines to work with automated rollover, the rollover settings have to be configured in the Filebeat output configuration.

Is there any other way to achieve the same thing using settings in the ingest pipeline itself, etc.?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.