Hello,
Currently I have a single file, shown below, which does the following:
Step1: Get a bearer token every minute
Step2: Use the bearer token to get Prometheus data
Step3: Parse the Prometheus data
The single file below is working fine.
#Get bearer token, which is the same for every DC
input {
  http_poller {
    urls => {
      gettoken => {
        url => "http://test.dev.com:40021/oauth/token?grant_type=client_credentials"
        user => "46ad390f-4b8f-8b36-aa20a253cae3"
        password => "8e057748-4dc1-9a9b-c27a3a7114d7"
      }
    }
    keepalive => true
    automatic_retries => 1
    codec => json
    schedule => { cron => "* * * * * UTC" }
    add_field => { "index" => "hc-prometheus" }
    add_field => { "hc_type" => "prometheus-metrics" }
  }
}
#Save bearer token
filter {
  ruby {
    code => "event.set('token', event.get('access_token'))"
  }
}
#Get prometheus dataset
filter {
  http {
    url => "http://test.dev.com:11000/green/service/actuator/prometheus?tId=7667977e-8dcf-578a746b8812"
    verb => "GET"
    headers => {
      "Authorization" => "Bearer %{token}"
    }
    target_body => "prometheusdataset"
  }
}
filter {
  split {
    field => "prometheusdataset"
  }
}
#Prometheus Metrics Parsing Filter
filter {
  mutate { add_tag => "a" }
  if [prometheusdataset] =~ /^#/ { drop { } }
  dissect {
    mapping => {
      "prometheusdataset" => "%{[@metadata][type]}{%{[@metadata][stuff]}} %{[@metadata][value]}"
    }
  }
  kv {
    source => "[@metadata][stuff]"
    target => "[@metadata][data]"
    field_split => ","
    trim_value => '\\"'
  }
  mutate { rename => { "[@metadata][value]" => "[@metadata][data][value]" } }
  mutate { rename => { "[@metadata][data]" => "%{[@metadata][type]}" } }
}
Now I need to split the single file above into multiple conf files, so that for each new environment addition I only have to change the Step2 file, which will then be reloaded automatically (my reload settings are sketched after the list below).
New split requirement:
Step1: Create a new input file named "111-gettoken.conf"
Step2: Create a new filter file named "112-getprometheusdata.conf"
Step3: Create a new filter file named "113-parseprometheusdata.conf"
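For context, automatic config reload is enabled with the standard Logstash reload settings; this is only a sketch of my setup, and the interval value shown is an assumption for illustration:
# logstash.yml (reload settings; the 3s interval is an assumption)
config.reload.automatic: true   # re-read the pipeline *.conf files when they change
config.reload.interval: 3s      # how often Logstash checks for changes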
111-gettoken.conf will have the code below:
#Get bearer token, which is the same for every DC
input {
  http_poller {
    urls => {
      gettoken => {
        url => "http://test.dev.com:40021/oauth/token?grant_type=client_credentials"
        user => "46ad390f-f56a-4b8f-8b36-aa20a253cae3"
        password => "8e057748-a09a-4dc1-9a9b-c27a3a7114d7"
      }
    }
    keepalive => true
    automatic_retries => 1
    codec => json
    schedule => { cron => "* * * * * UTC" }
    add_field => { "index" => "hc-prometheus" }
    add_field => { "hc_type" => "prometheus-metrics" }
  }
}
#Save bearer token
filter {
  ruby {
    code => "event.set('token', event.get('access_token'))"
  }
}
112-getprometheusdata.conf will have the code below:
#Get prometheus dataset
filter {
  http {
    url => "http://in7lin075.dev.e2open.com:11000/green/service/actuator/prometheus?tId=7667977e-6ddd-4788-8dcf-578a746b8812"
    verb => "GET"
    headers => {
      "Authorization" => "Bearer %{token}"
    }
    target_body => "prometheusdataset"
  }
}
113-parseprometheusdata.conf will have the code below:
filter {
  split {
    field => "prometheusdataset"
  }
}
#Prometheus Metrics Parsing Filter
filter {
  mutate { add_tag => "a" }
  if [prometheusdataset] =~ /^#/ { drop { } }
  dissect {
    mapping => {
      "prometheusdataset" => "%{[@metadata][type]}{%{[@metadata][stuff]}} %{[@metadata][value]}"
    }
  }
  kv {
    source => "[@metadata][stuff]"
    target => "[@metadata][data]"
    field_split => ","
    trim_value => '\\"'
  }
  mutate { rename => { "[@metadata][value]" => "[@metadata][data][value]" } }
  mutate { rename => { "[@metadata][data]" => "%{[@metadata][type]}" } }
}
When I split into multiple configs as above, the pipeline does not work. I have a Docker ELK stack that picks up all the conf files, and it works fine for other scenarios where I am not passing variables between the files, using the pipelines.yml shown below. The problem is that the bearer token saved in Step1 is not passed to the Step2 http filter, and likewise the Step2 "prometheusdataset" field is not parsed in the Step3 conf file. Could you let me know what the issue is?
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/*.conf"
  pipeline.workers: 5
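For reference, this is how the split files sit in the container's pipeline directory (the path is taken from path.config above; the annotations are just a sketch of which step each file holds):
# /usr/share/logstash/pipeline/ inside the Logstash container
111-gettoken.conf             # Step1: http_poller input + ruby filter saving the token
112-getprometheusdata.conf    # Step2: http filter fetching the Prometheus data
113-parseprometheusdata.conf  # Step3: split + dissect/kv parsing filters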