Is it acceptable to use the http filter to fetch log data?

Problem: we need to dynamically change the HTTP request URL used to fetch data.

I figured that one cannot simply change an input's URL dynamically; instead I found a few ways to work around it:

  1. the exec input plugin with curl and some scripting (see the sketch right after this list)
  2. dynamically rewriting our .conf file and relying on the config auto-reload option
    (both of the above methods depend on an external shell script)
  3. fetching our parameters from a separate service through the http_poller input and then fetching the actual data through the http filter plugin.
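
For reference, a minimal sketch of option 1. The script path, the interval, and the assumption that the script prints one JSON document to stdout are all mine:

    input {
        exec {
            # hypothetical script that builds the URL (date math etc.) and curls it,
            # printing the JSON response to stdout
            command => "/usr/local/bin/fetch_data.sh"
            # poll once an hour
            interval => 3600
            codec => "json"
        }
    }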

Why do we need to do this? It's similar to this case => "Adding 1 day to the date but can't use ruby because it's used in the input": we need date math in the URL, but a ruby filter can't touch a URL that lives in the input.
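
This is also why option 3 appeals to me: with the http filter, the date math can simply happen in a ruby filter earlier in the filter block. A minimal sketch (the tomorrow field name is just an example):

    filter {
        ruby {
            # compute tomorrow's date so it can be interpolated into the http filter's URL
            code => "event.set('tomorrow', (Time.now + 86400).strftime('%Y-%m-%d'))"
        }
    }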

The config file roughly looks like this:

    input {
        http_poller {
            # will return just a simple JSON document with a few parameters
            urls => {
                parameters => "http://url-to-my-parameters/fetch/parameters"
            }
            # http_poller requires a schedule
            schedule => { every => "10m" }
        }
    }
    filter {
        mutate {
            add_field => {
                "parameter" => "%{[data][parameter]}"
            }
        }
        http {
            # will return thousands of data items in JSON
            url => "http://url-to-actual-data/data?parameter=%{parameter}"
        }
    }

followed by further filters, the output section, and so on.
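
One note on this setup: the http filter puts the whole response on a single event, so if the endpoint really returns thousands of items, a split filter is probably needed afterwards to turn them into individual events. A sketch, assuming the response body ends up in the default body field as an array under items (that field path is an assumption about the response shape):

    filter {
        split {
            # emit one event per element of the response array
            field => "[body][items]"
        }
    }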

This seems to do what I want, but I feel uneasy about the approach, as it feels like an anti-pattern for a log data pipeline. Also, unlike the http input plugin, it doesn't leave the response headers in @metadata.
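
If the headers are the main concern, the http filter does let you choose where the response headers land via its target_headers option (in reasonably recent versions, if I'm not mistaken), so you could put them under @metadata yourself; the subfield name below is arbitrary:

    filter {
        http {
            url => "http://url-to-actual-data/data?parameter=%{parameter}"
            # stash the response headers under @metadata so they never reach the output
            target_headers => "[@metadata][response_headers]"
        }
    }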

Has anybody tried a similar approach in production? Are there better ways or recommendations to get this done?

Thanks in advance.
