Logstash Lookup Fields

I wanted to use a lookup table in Logstash to check if an account id exists in the lookup table; if it does, it will grab the output location from the lookup file. e.g.

Lookup File
account_id, output_location, secrets
123, s3, abcdef

I want to apply that in the output, so that it only sends the message to the output if the account exists in the lookup table/file.

You can use a translate filter to do the lookup, then drop {} if the event contains the fallback value.
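A minimal sketch of that approach (the field names and dictionary path here are illustrative, not from the original thread):

```
filter {
    translate {
        source => "[account_id]"
        target => "[output_location]"
        fallback => "not_found"
        dictionary_path => "/path/to/lookup.csv"
    }
    # The fallback value only appears when the account id was
    # not in the dictionary, so dropping on it filters the event.
    if [output_location] == "not_found" { drop {} }
}
```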

Can we use DynamoDB or any other AWS service for the lookup?

You might be able to call the Dynamo API using an http filter, and you can do pretty much anything in a ruby filter. So it is very likely possible, but I have no experience and cannot offer any advice.

Can I define my lookup file like this?

{
    "123456" : {
        "bucket" : "my_bucket",
        "iam_role": "iam_role",
        "region": "us-east-1"
    },
    "789012" : {
        "bucket" : "my_bucket2",
        "iam_role": "iam_role2",
        "region": "us-east-2"
    }
}

I believe so. This blog has an example of using a JSON dictionary.
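What the translate filter does with such a JSON dictionary can be sketched in plain Ruby (the `lookup` helper and the fallback value are illustrative, not part of the filter's API):

```ruby
require 'json'

# Illustrative dictionary, mirroring the JSON file above
dictionary = JSON.parse('{
  "123456": { "bucket": "my_bucket",  "iam_role": "iam_role",  "region": "us-east-1" },
  "789012": { "bucket": "my_bucket2", "iam_role": "iam_role2", "region": "us-east-2" }
}')

# Look up an account id; return a marker hash when it is missing,
# much like the translate filter's fallback option
def lookup(dictionary, acc_id)
  dictionary.fetch(acc_id, { "tenant_id" => "not_found" })
end

puts lookup(dictionary, "123456")["bucket"]    # => my_bucket
puts lookup(dictionary, "000000")["tenant_id"] # => not_found
```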


Thanks for sharing. That was very helpful.
I stored all the values in [data] as

source => "[acc_id]"
target => "[data]"

How can I get those values in the output and assign them to variables?

         "data" => {
        "secret_access_key" => "abc",
            "access_key_id" => "def",
                   "bucket" => "my_bucket",
                   "region" => "us-east-1",
                   "prefix" => "nnew"
    },
    "acc_id" => "1234567890",
    "lookup_id" => "1234",
     "sequence" => 0
}

I'm trying to do it like this and getting an error

access_key_id => "%[data][access_key_id]"

Try "%{[data][access_key_id]}" . You need the {} for a sprintf reference.

Thanks for the response again.
The translate filter enriched the message, which I don't want. I want to look in the lookup file on the basis of the account id; if the account id exists, I want to grab all the secrets defined against that account id and then use those secrets to send the actual event to the output.

If you use target => "[@metadata][secrets]" you can use sprintf references to it but it will not get sent by the output with the rest of the fields on the event.
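For example (a sketch; the dictionary path matches the one used later in this thread, and the secret field names are whatever your dictionary defines):

```
filter {
    translate {
        source => "[acc_id]"
        target => "[@metadata][secrets]"
        dictionary_path => "/home/lookup.json"
    }
}
# Elsewhere you can reference "%{[@metadata][secrets][bucket]}" etc.;
# [@metadata] fields are not serialized with the event by outputs.
```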

I tried it but am still getting the error.

What error message are you getting?

s3 - Uploading failed, retrying (#5 of Infinity) {:exception=>Aws::S3::Errors::InvalidAccessKeyId, :message=>"The AWS Access Key Id you provided does not exist in our records."}

After running /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/lookup_test.conf my message looks like

{
             "data" => {
        "secret_access_key" => "def",
                   "bucket" => "my_bucket",
                   "region" => "us-east-1",
            "access_key_id" => "abc"
    },
        "lookup_id" => "1234",
    "output_fields" => {
        "account_id" => "123456789012"
    },
         "sequence" => 0
}

and my output plugin looks like

output {
        stdout { codec =>  "rubydebug" }
        s3 {
                access_key_id => "%{[data][access_key_id]}"
                other info ....
       }
}

Here is the actual log:

{"output_fields": {"account_id": "123456789012"}, "lookup_id": "1234"}

Adding the translate filter:

translate {
    source => "[output_fields][account_id]"
    target => "[data]"
    fallback => '{"tenant_id":"not_found"}'
    dictionary_path => "/home/lookup.json"
}

Log after the translate filter:

{
             "data" => {
        "secret_access_key" => "def",
                   "bucket" => "my_bucket",
                   "region" => "us-east-1",
            "access_key_id" => "abc"
    },
        "lookup_id" => "1234",
    "output_fields" => {
        "account_id" => "123456789012"
    },
         "sequence" => 0
}

The /home/lookup.json file looks like

{
    "123456789012": {
        "access_key_id": "abc",
        "secret_access_key": "def",
        "bucket": "my_bucket"
    }
}

:open_mouth: The output does not sprintf the access_key_id! It has to be a constant.

Oh,
my use case is to send logs to different AWS accounts on the basis of the account id. I need to change the bucket name and secrets.

You could modify the plugin to make the sprintf call and build it yourself.

If you have a small number of different buckets you could use an if else in the output section to choose which s3 output to route to.
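For example, with two known accounts (the second account id, bucket names, and the elided credential options are placeholders, not values from this thread):

```
output {
    if [output_fields][account_id] == "123456789012" {
        s3 {
            bucket => "my_bucket"
            region => "us-east-1"
            # constant credentials for this account
        }
    } else if [output_fields][account_id] == "210987654321" {
        s3 {
            bucket => "my_bucket2"
            region => "us-east-2"
            # constant credentials for the other account
        }
    }
}
```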

Hey,
I'm adding tags to my log using sprintf, but it seems it is not adding the tag.
This is how I'm passing the tag:

if [data] == '{"acc_id":"not_found"}' {
    drop {}
} else {
    mutate { add_tag => ["%{[data][tag]}"] }
}

But I'm getting tags like this in my output file:

"tags":["%{[data][tag]}"]