Logstash S3 ELB config into existing ELK stack

I recently set up an ELK stack in AWS with one Elasticsearch master and one data node. It has been running fine for a few weeks with Filebeat and some basic auth log and syslog filters.

Now I'm trying to add my ELB logs from S3. I have the s3 input plugin configured, and I can see the S3 logs landing in Logstash's tmp dir, so I believe that part is fine. I have two input config files. Here is the one for the S3 bucket:

input {
  s3 {
    type => "elb"
    bucket => "loadbalancer-name-company.net"
    region => "us-east-1"
  }
}

Note that I didn't put any credentials in, because my EC2 instance uses an IAM role for access to S3.
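(For anyone finding this later: the s3 input also supports a couple of optional settings that can help here. This is just a sketch; the prefix and sincedb path are illustrative values, not something from my actual setup.)

input {
  s3 {
    type => "elb"
    bucket => "loadbalancer-name-company.net"
    region => "us-east-1"
    # Optional: only fetch objects whose keys start with this prefix (illustrative value)
    prefix => "AWSLogs/"
    # Optional: remember the last processed file across restarts (illustrative path)
    sincedb_path => "/var/lib/logstash/sincedb-elb"
  }
}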

Next I made a filter. It's one of three filter config files I have; I based it on other examples I found lying around the internet. The Logstash configtest tells me the config is OK, and Logstash also stops/starts fine when this config file is in place. Here it is:

filter {
    if [type] == "elb" {
        grok {
            match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} %{IP:backend_ip}:%{NUMBER:backend_port:int} %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} %{NUMBER:elb_status_code:int} %{NUMBER:backend_status_code:int} %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} %{QS:request}" ]
        }
        date {
            match => [ "timestamp", "ISO8601" ]
        }
        # Add geolocation attributes based on the client IP captured by grok.
        geoip {
            source => "client_ip"
        }
    }
}
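One way to sanity-check the grok pattern: when a match fails, Logstash tags the event with _grokparsefailure rather than dropping it. A temporary output like this (just a sketch) should surface any ELB lines the pattern can't parse:

output {
  # Temporary debug output: print ELB events that failed the grok match
  if [type] == "elb" and "_grokparsefailure" in [tags] {
    stdout { codec => rubydebug }
  }
}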

The part I'm stuck on is whether I need a new output block for this or not. My existing config file with an output block (I only have one) looks like this:

output {
  elasticsearch {
    hosts => ["10.0.1.60:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I don't think that will handle the new ELB logs from S3... or will it? If it won't, do I edit the existing one? Add a new one? My symptom is that I don't see any of these new logs in Kibana. Maybe I need to define the index for the new ELB logs somehow? Sorry if this is a noob question, but I am a bit new to ELK. Thanks!

Given that you won't have [@metadata][beat] in the data from that input, you will probably want to define a different output.
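For example, something along these lines, a sketch that reuses your existing output and routes on the type field. Keep in mind that Logstash concatenates all config files, so every event passes through every output unless you guard them with conditionals:

output {
  if [type] == "elb" {
    # ELB events from S3 get their own daily index
    elasticsearch {
      hosts => ["10.0.1.60:9200"]
      index => "elb-%{+YYYY.MM.dd}"
    }
  } else {
    # Everything else keeps using the Beats-style index
    elasticsearch {
      hosts => ["10.0.1.60:9200"]
      sniffing => true
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}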

Thanks @warkolm, you were right. I had blindly copied the index from another config file, and it wasn't the right one to use. I changed the index to this:

index => "elb-%{+YYYY.MM.dd}"

Then I gave the services a restart, and after a few minutes the data started showing up in the Kibana UI. Now on to Kibana dashboards...