Wrong values passed to index

There are multiple config files on the Logstash server, and each one sends data to a different index. However, values from the other config files also end up in a particular index. Please let me know how to make sure values go only to the correct index.
I used a type condition, but it didn't work.

I would like to help, but you need to give us some more information and show us some of your configuration and what you want to achieve.

There are three config files in Logstash:

  1. country.conf

    input {
      s3 {
        id => "act_tmcountry"
        access_key_id => "..."
        secret_access_key => "...."
        bucket => "supplier"
        prefix => "test_n"
        interval => 15
        additional_settings => {
          force_path_style => true
          follow_redirects => false
        }
      }
    }
    filter {
      json { source => "message" }
      split { field => "data" }
      mutate {
        add_field => {
          "country_id" => "%{[data][id]}"
          "vendorcountrycode" => "%{[data][code]}"
          "vendorcountryname" => "%{[data][name]}"
        }
      }
      if ("_split_type_failure" not in [tags]) {
        mutate {
          remove_field => ["success","data","message"]
        }
      }
    }
    output {
      elasticsearch {
        document_id => "%{[country_id]}"
        hosts => [" ..../"]
        index => "act_tm_country"
      }
      stdout { codec => rubydebug }
    }

The data comes in JSON files; all the country detail JSON files are in the test_n folder.

  2. city.conf

    input {
      s3 {
        id => "act_tmcity"
        access_key_id => "...."
        secret_access_key => "..."
        bucket => "supplier"
        prefix => "test_n4"
        additional_settings => {
          force_path_style => true
          follow_redirects => false
        }
      }
    }
    filter {
      json { source => "message" }
      split { field => "data" }
      mutate {
        add_field => {
          "id" => "%{[data][id]}"
          "vendorcitycode" => "%{[data][id]}"
          "vendorcityname" => ""
          "country_id" => "%{[data][country_id]}"
        }
      }
      mutate {
        remove_field => ["success","data","@timestamp","message"]
      }
    }
    output {
      elasticsearch {
        document_id => "%{[id]}"
        hosts => [".../"]
        index => "act_tm_city_test"
      }
      stdout { codec => rubydebug }
    }

All the city detail JSON files are in the test_n4 folder.

  3. tlist.conf

    input {
      s3 {
        id => "act_tmtlist"
        access_key_id => "..."
        secret_access_key => "..."
        bucket => "supplier"
        prefix => "test_n2"
        interval => 15
        additional_settings => {
          force_path_style => true
          follow_redirects => false
        }
      }
    }
    filter {
      json { source => "message" }
      split { field => "data" }
      mutate {
        add_field => {
          "tid" => "%{[data][id]}"
          "imagethumbnailurl" => "%{[data][thumbnail_url]}"
          "vendorcitycode" => "%{[data][city_id]}"
        }
      }
      if ("_split_type_failure" not in [tags]) {
        mutate {
          remove_field => ["success","data","message"]
        }
      }
    }
    output {
      elasticsearch {
        document_id => "%{[tid]}"
        hosts => ["..."]
        index => "act_tm_tlist"
      }
      stdout { codec => rubydebug }
    }

All the detail list JSON files are in the test_n2 folder.

I want to push test_n to act_tm_country, test_n4 to act_tm_city, and test_n2 to act_tm_tlist.

But act_tm_country and act_tm_city also receive documents that belong in act_tm_tlist.
I tried setting a type in the input and checking a [type] condition in the filter and output of all three config files, but it didn't work. I also set an id in the input section, but that isn't working either.

I would appreciate your help.

If you point path.config at a directory, Logstash concatenates all of the configuration files in that directory, reads events from all of the inputs, and sends them to all of the outputs.
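In other words, your three files above behave as if they were one concatenated configuration, schematically:

```
input {
  s3 { ... prefix => "test_n" ... }    # country
  s3 { ... prefix => "test_n4" ... }   # city
  s3 { ... prefix => "test_n2" ... }   # tlist
}
filter {
  # all three filter sections run on every event
}
output {
  elasticsearch { ... index => "act_tm_country" ... }
  elasticsearch { ... index => "act_tm_city_test" ... }
  elasticsearch { ... index => "act_tm_tlist" ... }
}
```

Every event from every S3 prefix passes through all of the filters and is written to all three indexes, which is exactly the cross-contamination you are seeing.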

You could use a different pipeline for each configuration, or you could tag events on the inputs and use conditionals based on the tags.
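For the first option, define one pipeline per file in config/pipelines.yml, so that each input/filter/output set runs in isolation. A minimal sketch, assuming the files live in /etc/logstash/conf.d/ (adjust the paths to your installation):

```
- pipeline.id: country
  path.config: "/etc/logstash/conf.d/country.conf"
- pipeline.id: city
  path.config: "/etc/logstash/conf.d/city.conf"
- pipeline.id: tlist
  path.config: "/etc/logstash/conf.d/tlist.conf"
```

For the second option, add a tag on each input and guard the filters and outputs with a conditional on that tag. A sketch of what country.conf would look like (city.conf and tlist.conf would do the same with their own tags):

```
input {
  s3 {
    ...
    tags => ["country"]
  }
}
filter {
  if "country" in [tags] {
    ...
  }
}
output {
  if "country" in [tags] {
    elasticsearch {
      ...
      index => "act_tm_country"
    }
  }
}
```

With tags, all events still flow through one pipeline, but each filter and output section only acts on the events carrying its own tag.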