Multiple config files and filter application

I want to pass a directory as the -f argument to load several config files, but I'm concerned about filter sequencing. Below is a pair of typical config files. Is my use of conditionals appropriate to ensure that the filters from config 1 process input 1's data only? The same goes for the Elasticsearch indices: I want them named according to the tags I set in the input sections.

# config file 1
input{
    http{
        port => 9001
        tags => ["input1"]
        ...
    }
}
filter{
    if "input1" in [tags]{
        filterA{
            # do stuff to input1 data
        }
        filterB{
            # do stuff to input1 data
        }
    }
}
output{
    if "input1" in [tags]{
        elasticsearch{
            index => "logstash-system1-%{+YYYY.MM.dd}"
            ...
        }
    }
}
}

Second config file in same directory:

# config file 2
input{
    http{
        port => 9002
        tags => ["input2"]
        ...
    }
}
filter{
    if "input2" in [tags]{
        filterA{
            # do stuff to input2 data
        }
        filterB{
            # do stuff to input2 data
        }
    }
}
output{
    if "input2" in [tags]{
        elasticsearch{
            index => "logstash-system2-%{+YYYY.MM.dd}"
            ...
        }
    }
}
}

Also, would it be more appropriate to use the "type" field for this?

Is my usage of conditionals appropriate to have the filters from config 1 process input 1 data only?

Yes.

Also, would it be more appropriate to use the "type" field for this?

That depends on the messages. Can it be argued that the messages arriving on ports 9001 and 9002 are of different kinds?
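
If you did go the type route, a type-based variant of config file 1 might look something like this (a sketch only; the "system1" value is made up, and the port and index name are carried over from the example above):

input{
    http{
        port => 9001
        type => "system1"
        ...
    }
}
filter{
    if [type] == "system1"{
        ...
    }
}
output{
    if [type] == "system1"{
        elasticsearch{
            index => "logstash-system1-%{+YYYY.MM.dd}"
            ...
        }
    }
}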

Thanks Magnus, good to know I'm on track. I think you're right that, in this case, the two inputs' types probably should be the same. I'll keep using tags. BTW, is there a difference? I've read that the type field cannot be overwritten, i.e. cannot be given a new value once set. Is the same true for tags?

A message can have all of its fields, including tags and the type, altered by Logstash filters. Once indexed in Elasticsearch, I don't think you can update the type field; I suspect you have to reindex the document. Well, maybe you can update type, but then type will differ from _type, and that's probably bad.

Ok, good to know. I find this passage on the Jdbc input plugin page (Logstash Reference [8.11]) very confusing then:

If you try to set a type on an event that already has one (for
example when you send an event from a shipper to an indexer) then
a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.

What's meant by that is if you have an input like

lumberjack {
  ...
  type => "foo"
}

to receive messages from logstash-forwarder or another Logstash instance, incoming messages won't be stamped with the "foo" type if the type field is already set. But you can definitely use e.g. a mutate filter to overwrite the type value.
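
For example, a mutate filter along these lines (a sketch; the "bar" value is made up) replaces whatever type the event already carries:

filter{
    mutate{
        # replace sets the field even if it already has a value
        replace => { "type" => "bar" }
    }
}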

The documentation you quoted isn't specific to the jdbc input. The type option is, like add_field and a few others, generic for all inputs.