Assuming this grok is defined in Logstash, you will want to use %{INT:file-size:int}. I wouldn't worry about integer overflow in Logstash, since it is written in Ruby and Ruby automatically promotes excessively large integers to a Bignum.
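For illustration, a minimal filter sketch; the `size=` prefix and the `message` source field here are assumptions standing in for your actual log format:

```
filter {
  grok {
    # Hypothetical pattern: "size=" and the source field "message" are
    # placeholders for your real log line. The :int suffix tells grok to
    # store the captured value as an integer rather than a string.
    match => { "message" => "size=%{INT:file-size:int}" }
  }
}
```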
With Elasticsearch, if you are using dynamic mapping for this field, it will just work, since the field will be backed by a long. However, if you explicitly map this field, be sure to use the long data type.
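If you do map it explicitly, it would look something like this (the index name is hypothetical, and this is the typeless 7.x+ form, so adjust if your version still uses mapping types):

```
PUT my-index
{
  "mappings": {
    "properties": {
      "file-size": { "type": "long" }
    }
  }
}
```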
If this is grok via the ingest node, what you have should work.
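For reference, a sketch of what that could look like as an ingest pipeline; the pipeline name and pattern are assumptions, but the ingest grok processor supports the same :int conversion suffix:

```
PUT _ingest/pipeline/parse-file-size
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["size=%{INT:file-size:int}"]
      }
    }
  ]
}
```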