How can we create two indices in Logstash?

Hello,
Has anyone come across the scenario below?

I have JSON as input; I filter the data and then create an index in the output block to push it into Elasticsearch.
I want to split the data into two sets and send them to two respective indices, as in the example below.
I build a query_details object on the event by combining a few fields from the JSON input.
I want to send these query_details fields alone to the first index; the rest of the fields from the input JSON should go to the second index.
All my transactions have query_details, so I can't use an if/else condition in my output block.

Here is my config:

input {
  tcp {
   codec => json_lines { charset => "UTF-8" }
   port => 
  }
}
filter {
  json {
    source => "payload_raw"
    target => "payload"
  }
  ruby {
    code => "
      removed_keys = ['id','operator']
      query_details = {}
      removed_keys.each do |key|
        if event.get('msg').include?(key)
          query_details[key] = event.get('msg')[key]
          event.remove('[msg][' + key + ']')
        end
      end
      event.set('[query_details]', query_details)
    "
  }
  mutate {
    copy => { "[msg][metadata]" => "metadata" }
    remove_field => ["[msg][metadata]"]
  }
}

output {
  if [query_details] {
    elasticsearch {
      hosts => ["hello.net:9200","hello2.net:9200"]
      index => "query-details-%{+YYYY.MM.dd}"
      user => "elas"
      password => "hai"
    }
  } else {
    elasticsearch {
      hosts => ["hello.net:9200","hello2.net:9200"]
      index => "query-default-%{+YYYY.MM.dd}"
      user => "ela"
      password => "hai"
    }
    stdout { codec => rubydebug }
  }
}

Hi @subash_k and welcome to the community!

One way to accomplish this would be to use ingest pipelines along with two separate elasticsearch outputs, sending the same document to two different indices.

Something like this:

output {
  elasticsearch {
    index => "query-details-%{+YYYY.MM.dd}"
    pipeline => "query_details_pipeline"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
  }
  elasticsearch {
    index => "query-default-%{+YYYY.MM.dd}"
    pipeline => "query_default_pipeline"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
  }
}

Then in your Ingest Pipelines - you can drop the [query_details] or any other fields that aren't required in the final document to be put in the index:

PUT _ingest/pipeline/query_default_pipeline
{
  "description": "pipeline for processing query_default documents",
  "processors": [
    {
      "remove": {
        "description": "removing query_details",
        "field": "query_details"
      }
    }
  ]
}
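
For the details index you could do the mirror image: a pipeline that strips everything except [query_details]. A rough sketch only; the field names msg, payload, and metadata are assumptions taken from your config, so adjust the list to whatever your documents actually carry:

PUT _ingest/pipeline/query_details_pipeline
{
  "description": "pipeline for processing query_details documents",
  "processors": [
    {
      "remove": {
        "description": "keep only query_details by removing the other payload fields",
        "field": ["msg", "payload", "metadata"],
        "ignore_missing": true
      }
    }
  ]
}

With that in place both outputs receive the full event, and each index ends up holding only the fields it needs.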

You can use the clone filter to clone your event, then use a conditional to apply the ruby filter only to the cloned event, and use the same conditional in the output.

Something like this:

filter {
  json {
    source => "payload_raw"
    target => "payload"
  }
  clone {
    clones => ["details"]
  }
  if [type] == "details" {
    ruby {
      code => "
        removed_keys = ['id','operator']
        query_details = {}
        removed_keys.each do |key|
          if event.get('msg').include?(key)
            query_details[key] = event.get('msg')[key]
            event.remove('[msg][' + key + ']')
          end
        end
        event.set('[query_details]', query_details)
      "
    }
  }
  mutate {
    copy => { 
      "[msg][metadata]" =>  "metadata" 
    }
    remove_field => ["[msg][metadata]"]
  }
}
output {
  if [type] == "details" {
    elasticsearch {
      hosts => ["hello.net:9200","hello2.net:9200"]
      index => "query-details-%{+YYYY.MM.dd}"
      user => "elas"
      password => "hai"
    }
  } else {
    elasticsearch {
      hosts => ["hello.net:9200","hello2.net:9200"]
      index => "query-default-%{+YYYY.MM.dd}"
      user => "elas"
      password => "hai"
    }
  }
}

You just need to check whether you have pipeline.ecs_compatibility enabled, as this changes the behavior of the clone filter, as explained in the documentation.
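
For example, if ECS compatibility is enabled, the clone filter adds the clone name to [tags] instead of setting [type], so both the conditional around the ruby filter and the one in the output would need to test tags instead. Roughly like this (a sketch only, reusing the hosts and credentials from the config above):

output {
  if "details" in [tags] {
    elasticsearch {
      hosts => ["hello.net:9200","hello2.net:9200"]
      index => "query-details-%{+YYYY.MM.dd}"
      user => "elas"
      password => "hai"
    }
  } else {
    elasticsearch {
      hosts => ["hello.net:9200","hello2.net:9200"]
      index => "query-default-%{+YYYY.MM.dd}"
      user => "elas"
      password => "hai"
    }
  }
}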


Thanks @leandrojmp, but this failed to insert the default set of data. The previous solution (the pipeline logic) worked.

Hey, thanks @eMitch.
It worked :smiley:

I don't think so; if the conditional is correct it will index the cloned documents (the ones with type equal to details) in one index, and the other documents in the other index.

But since the other solution worked for you, there is no need to troubleshoot this further.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.