I'm testing out the ELK stack on my desktop (i.e., one node) and thought I'd
start by pulling in a flat file, having Logstash parse it, and outputting it
to Elasticsearch. The setup was easy, but working through the flat file is
painfully slow. The file is tab-delimited, about 6 million rows and 10
fields. I've experimented with the refresh_interval, flush_size, and
workers settings, but the most I've been able to get is about 300 documents
a second, which works out to 5-6 hours for the whole file. I'm having a
hard time believing that's right.
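For reference, the pipeline looks roughly like this (the path, field names,
and tuning values below are illustrative placeholders, not my exact
settings):

input {
  file {
    path => "/data/myfile.tsv"        # placeholder path
    start_position => "beginning"     # read the file from the top
    sincedb_path => "/dev/null"       # don't persist read position between test runs
  }
}

filter {
  csv {
    separator => "	"                 # a literal tab character
    columns => ["id", "ts", "field3", "field4", "field5",
                "field6", "field7", "field8", "field9", "field10"]
  }
}

output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "myindex"                       # the index field mentioned below
    template => "/etc/logstash/template.json" # the mapping template mentioned below
    flush_size => 5000                       # bulk batch size
    workers => 4                             # parallel output workers
  }
}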
In addition to this, Logstash stops reading the file at 579,242
documents every single time (about an hour in), but throws no errors.
If I pull out the index field or the mapping template (which mostly
specifies integers, dates, and non-analyzed fields), then I start getting
4-6k documents loading per second.
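The template is along these lines (the index name, field names, and date
format are placeholders for my actual ones):

{
  "template": "myindex",
  "settings": {
    "index.refresh_interval": "30s"
  },
  "mappings": {
    "_default_": {
      "properties": {
        "id":     { "type": "integer" },
        "ts":     { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" },
        "field3": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}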
Any guesses as to what I'm doing wrong?
If it's relevant, my desktop has 10 GB of RAM (with a 4 GB heap for ES) and
4 cores.
Can you please share your Logstash configuration, some sample data, and
your mappings?
Best regards,
Christian
Turns out I just had the wrong character encoding set. Everything's working
great at 2-3k documents a second now!
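For anyone who runs into the same thing, the fix is the charset option on
the input codec, something like this (the encoding value here is just an
example; use whatever your file actually is):

input {
  file {
    path => "/data/myfile.tsv"                  # placeholder path
    codec => plain { charset => "ISO-8859-1" }  # match this to the file's real encoding
    start_position => "beginning"
  }
}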
Thanks!