Where does Logstash store its data?

I'm using a central Logstash server that gets logs from multiple remote Logstash clients, which use rsyslog to forward logs. Where does Logstash store the logs on the server? Can I set up an NFS mount and tell Logstash to store them there? I want to set up the NFS share using AWS Elastic File System (EFS) so the volume grows automatically. Make sense?

I'm using a central Logstash server that gets logs from multiple remote Logstash clients, which use rsyslog to forward logs. Where does Logstash store the logs on the server?

By default Logstash doesn't store any logs.

Can I set up an NFS mount and tell Logstash to store them there?

Yes.
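
For example, a file output pointing at the mount would do it. This is only a minimal sketch, assuming the NFS/EFS volume is mounted at /mnt/efs on the Logstash server; the path is a placeholder, so adjust it to your setup:

output {
  file {
    # Hypothetical mount point; any directory under the NFS/EFS mount works
    path => "/mnt/efs/logstash/%{host}-%{+YYYY-MM-dd}.log"
  }
}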

Thanks. Let me clarify what I'm after.

Say from my clients, I send this log to my central Logstash server:

2016-08-26 16:15:09,586 INFO :some_thread [com.mycompany.classpath] The message body

If I have 10 clients and each client sends 1,000 of these per hour, I'll have collected 10,000 per hour on my server; if each client sends 1,000,000 per hour, that's 10,000,000 per hour. Storing 10,000,000 logs per hour will obviously take more room than storing 10,000 per hour. What Logstash construct stores this information, and how can I then tell Logstash to use an NFS mount to store it?

My Logstash configuration file has this:

input {
  tcp {
    port => 5514
  }
}

filter {
  grok {
  }
}

output {
  elasticsearch {
    some_es_configuration => some_es_configuration_value
  }
}
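
To illustrate the kind of parsing I do, a grok pattern for the sample line above might look like the following sketch; the host and index name are placeholders rather than my real values:

filter {
  grok {
    # Sketch of a pattern for the sample line above, e.g.
    # 2016-08-26 16:15:09,586 INFO :some_thread [com.mycompany.classpath] The message body
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} :%{DATA:thread} \[%{DATA:logger}\] %{GREEDYDATA:log_message}"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]        # placeholder host
    index => "logstash-%{+YYYY.MM.dd}" # placeholder daily index name
  }
}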

So you're really asking about Elasticsearch rather than Logstash, and whether Elasticsearch can have its storage on NFS? The answer is probably yes, but it's strongly discouraged.

For Elasticsearch I can see why that would be discouraged.

Any documentation (either Logstash or Elasticsearch) on how I can handle this, then? I do want to use AWS Elastic File System so the volume that holds the actual data can grow automatically, but I can't find any info on how to tell either Logstash or Elasticsearch to use this volume.

NFS is discouraged mainly because it's slower than local disk. To configure where Elasticsearch stores its data, look into the path.data configuration option.

https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-dir-layout.html
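
For example, if the EFS volume were mounted at /mnt/efs (an assumed mount point), elasticsearch.yml would just point path.data at a directory on it:

# elasticsearch.yml
# Assumes the NFS/EFS volume is mounted at /mnt/efs on the Elasticsearch node
path.data: /mnt/efs/elasticsearch/data

The directory has to be writable by the user Elasticsearch runs as.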

Thanks again @magnusbaeck.

So what is the recommended way to set up the ELK stack so the volume can grow dynamically? I'm trying to set up AWS Elastic File System, which of course has to be NFS-mounted. Should I start a separate thread?

I'm no AWS expert, but why not use EBS volumes?

@magnusbaeck, I'm exploring both EBS and EFS. Thanks!