[SOLVED] Filebeat to Logstash best practice

Thanks for the quick reply, Magnus! I appreciate it.

I'm pretty new to ES and still wrapping my head around it, but would the Logstash config look something like this for separating the nginx-access events from everything else?

input {
  beats {
    port => 5010
    host => "0.0.0.0"
  }
}
filter {
  if [type] == "nginx-access" {
    grok {
      # <match rules here>
    }
    mutate {
      # keep the original ingest timestamp before the date filter overwrites @timestamp
      rename => { "@timestamp" => "read_timestamp" }
    }
    date {
      # use the timestamp from the nginx log line as the event's @timestamp
      match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
      remove_field => "[nginx][access][time]"
      target => "@timestamp"
    }
    useragent {
      source => "[nginx][access][agent]"
      target => "user_agent"
    }
    geoip {
      source => "[nginx][access][remote_ip]"
      target => "geoip"
    }
  }
}
output {
  if [type] == "nginx-access" {
    # nginx access events go to their own daily index
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      manage_template => false
      index => "nginx-access-dedicated-%{+YYYY.MM.dd}"
      document_type => "nginx-access-dedicated"
    }
  } else {
    # everything else falls through to the default logstash-* index
    elasticsearch {
      hosts => ["elasticsearch:9200"]
    }
  }
}
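
For what it's worth, here's a rough sketch of what I think the Filebeat side would need. I'm assuming it's document_type in the prospector config (Filebeat 5.x; I gather that option went away in 6.x in favour of custom fields) that sets the "type" field those conditionals match on, and the log path and Logstash hostname below are just placeholders for my setup:

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/nginx/access.log   # placeholder path
    # sets the "type" field on each event, which the
    # [type] == "nginx-access" conditionals above match on
    document_type: nginx-access

output.logstash:
  hosts: ["logstash:5010"]   # same port as the beats input above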

I'm still not clear on how the Logstash filter differentiates between the document types fed to it by Filebeat, or how each type ends up in its own index. Would the above be roughly correct? Any pointers would be appreciated!