I'm testing out the ELK stack on my desktop (i.e. a single node) and thought I'd
start by pulling a flat file, having logstash parse and output it to
Elasticsearch. The setup was easy, but working through the flat file is
painfully slow. The flat file is tab-delimited, about 6 million rows and 10
fields. I've messed around with the refresh_interval, flush_size, and
workers, but the most I've been able to get is about 300 documents a
second, which means 5-6 hours for the full load. I'm having a hard time
believing that that's normal.
In addition to this, logstash stops reading in the file at 579,242
documents every single time (about an hour in), but throws no errors.
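For context, the pipeline config is roughly along these lines (the paths, index name, and column names are placeholders, not my actual values):

```conf
input {
  file {
    path => "/path/to/data.tsv"
    start_position => "beginning"
  }
}
filter {
  csv {
    # separator is a literal tab character
    separator => "	"
    columns => ["col1", "col2", "col3", "col4", "col5",
                "col6", "col7", "col8", "col9", "col10"]
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "myindex"
    template => "/path/to/template.json"
    flush_size => 5000
    workers => 4
  }
}
```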
If I pull out the index field or the mapping template (which mostly
specifies integers, dates, and non-analyzed fields), then I start getting
4-6k documents loading per second.
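The template itself is a sketch like the following (field names here are illustrative, not my real mapping):

```json
{
  "template": "myindex*",
  "settings": {
    "index.refresh_interval": "30s"
  },
  "mappings": {
    "_default_": {
      "properties": {
        "some_id":   { "type": "integer" },
        "some_date": { "type": "date", "format": "yyyy-MM-dd" },
        "some_code": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```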
Any guesses as to what I'm doing wrong?
If it's relevant, my desktop has 10 GB of RAM (with a 4 GB heap for ES)
and 4 cores.