ES indexing throughput and scalability

Hi there,

I'm considering using Flume to collect logs from systems running in
multiple datacenters. Flume would fork the data flow to both HDFS and NRT
indexing. The goal is to have a log entry searchable in less than 30
seconds from the time it was generated on the remote server. Basically,
I'd like to build something like Splunk with long-term data storage in
Hadoop.

I'm new to ES & Lucene and would like to verify that ES is the right
tool for indexing a large volume of logs. I'd also like to understand
better how ES scales, how persistence works, and whether it supports
multi-datacenter deployments for DR.

Assume a situation where thousands of servers generate logs at a rate
of, say, 1TB/day. To make the math easy, let's say a single log entry is
1kB on average. This translates into roughly 11,500 log entries per
second on average. A log entry should be searchable for 14 days and
then expire from the index.

Questions:

  • would each 1kB log entry be considered a "document" in ES?
  • 1TB/day x two weeks = 14TB of searchable logs. Where does ES
    persist the documents? Local file system?
  • is it reasonable to expect an ES cluster to be able to index & search
    this rate of logs?
  • how does ES scale? E.g. if the log indexing rate goes from 10/sec to
    100/sec to 1000/sec, where's the bottleneck?
  • the datacenter hosting ES just went down, now what? How would you
    implement DR?

Any comments appreciated,

Thanks,

-raaka

Hi,
...

Questions:

  • would each 1kB log entry be considered a "document" in ES?

Yes, typically that's how it is.

  • 1TB/day x two weeks = 14TB of searchable logs. Where does ES
    persist the documents? Local file system?

Yes. An ES cluster consists of multiple nodes, and the data is stored on
the local file system of each node.

  • is it reasonable to expect an ES cluster to be able to index & search
    this rate of logs?

Yes, it can handle this easily.

  • how does ES scale? E.g. if the log indexing rate goes from 10/sec to
    100/sec to 1000/sec, where's the bottleneck?

ES scales by adding nodes to the cluster. An index is divided into multiple
shards that are distributed across the nodes, so the write rate goes up as
the number of nodes and shards increases.
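
As an illustration, here's a rough sketch of the daily-index pattern (Python with the `requests` library against an assumed node on localhost:9200; the index name, shard count, and 14-day retention are examples, not recommendations):

```python
import datetime
import requests

ES = "http://localhost:9200"   # assumed local ES node
RETENTION_DAYS = 14

today = datetime.date.today()
new_index = "logs-%s" % today.strftime("%Y.%m.%d")
expired = "logs-%s" % (today - datetime.timedelta(days=RETENTION_DAYS)).strftime("%Y.%m.%d")

# Create today's index with an explicit shard/replica layout; more shards
# spread the write load across more nodes, so pick numbers from your own tests.
requests.put("%s/%s" % (ES, new_index), json={
    "settings": {"number_of_shards": 6, "number_of_replicas": 1}
})

# Drop the index that fell out of the retention window; deleting a whole
# index is far cheaper than expiring individual documents.
requests.delete("%s/%s" % (ES, expired))
```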

  • the datacenter hosting ES just went down, now what? How would you
    implement DR?

ES clusters are not supposed to span across data centers, so you need to
replicate the data to the DR site yourself (as of now; this may change in
the future). ES indices are stored on the file system and can be copied over
periodically. If the data center goes down, you can start new instances at
the DR site and continue processing new logs.

Any comments appreciated,

There has been a fair amount of discussion on the list about how to store
time-based data like logs; it would be worth reading those threads. Also,
you may want to take a look at the logstash project, since folks use it
with ES to store log files the way you're describing.

Thanks,

-raaka

Good luck!

Regards,
Berkay Mollamustafaoglu
mberkay on yahoo, google and skype

Berkay, thanks a lot for the reply, much appreciated!

Could you - or anybody else - guesstimate what kind of ES cluster is
needed to support indexing 11,500 1kB documents per sec? This is 1TB/day
of log ingest at 1kB per log entry. Assume the search volume would be
light.

E.g. take a commodity server with two Intel Westmere CPUs (2 x 6
cores), lots of memory if needed (up to 192GB), and SATA (16x 1TB) or
SAS (16x 300GB) disks.

With SATA disks a single server would have 16TB of JBOD storage; with the
faster SAS disks the capacity would be 4.8TB.

The raw disk write rate would be about 11.5MB/sec, so it should not be a
bottleneck even for SATA disks. What kind of disks do you usually use
for a high-volume ES cluster? SATA or SAS?
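
Just to sanity-check that arithmetic, a throwaway Python snippet:

```python
# 1 TB/day of 1 kB log entries: how many docs/sec and MB/sec is that?
bytes_per_day = 1e12        # 1 TB/day
entry_size = 1e3            # 1 kB per log entry
seconds_per_day = 86400

docs_per_sec = bytes_per_day / entry_size / seconds_per_day
mb_per_sec = bytes_per_day / seconds_per_day / 1e6
print("%.0f docs/sec, %.1f MB/sec raw" % (docs_per_sec, mb_per_sec))
# -> about 11574 docs/sec and 11.6 MB/sec of raw log data
```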

As for ES DR, I could probably rebuild the index from the data stored
in HDFS ... assuming it has DR.

Thanks again,

-raaka


You will need to run some capacity planning. First, read the earlier discussions on this list about storing time-based data like logs.

For capacity planning, you can start a simple 2-node cluster and begin indexing into it; that will give you some numbers based on your HW and data. From there, it's just a matter of having enough nodes, and creating the right time-based indices with the relevant number of shards, to handle the load.
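
A minimal sketch of such a test might look like this (Python with `requests` against an assumed test node on localhost:9200; the index name, type name, and batch size are arbitrary, and the explicit _type reflects ES of that vintage):

```python
import json
import time
import requests

ES = "http://localhost:9200"   # assumed test cluster endpoint
INDEX = "logs-test"            # illustrative index name
BATCH = 5000                   # docs per bulk request
ROUNDS = 20                    # 20 x 5000 = 100k docs total

def bulk_body(docs):
    # The bulk API takes alternating action and source lines,
    # newline-delimited, with a trailing newline.
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": INDEX, "_type": "log"}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

docs = [{"message": "sample log line %d" % i, "host": "app01"} for i in range(BATCH)]

start = time.time()
for _ in range(ROUNDS):
    requests.post(ES + "/_bulk", data=bulk_body(docs))
elapsed = time.time() - start

print("indexed %d docs at %.0f docs/sec" % (ROUNDS * BATCH, ROUNDS * BATCH / elapsed))
```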


It would indeed be good to hear what people use for hardware specs. It
would also be good to hear what people do about RAID, etc. I suspect we'd
see quite different specs between folks who are using cloud (Amazon etc.)
and virtual servers vs. physical servers on premises.

The first question you'll need to answer is about availability requirements.
Can you live with downtime in case of a server failure? If you cannot, then
you'll need a replica for each index, which doubles the disk space
requirement.

Also, 1kB is the raw size; the indexed size will be higher. The best thing
to do would be to index some data, see what you get, and extrapolate from
there.

About DR, it may take a while to rebuild the entire index. If you use
time-based indices (say, a new index per day) you can copy each index over
daily once it is no longer written to. This way you'd only have to re-index
the last day's data.
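
A rough sketch of that daily copy (Python; the daily index naming, local data path, and rsync destination are all assumptions for illustration):

```python
import datetime
import subprocess
import requests

ES = "http://localhost:9200"
DATA_DIR = "/var/lib/elasticsearch/data"       # assumed local data path
DR_TARGET = "dr-host:/backups/elasticsearch"   # assumed rsync destination

# Yesterday's index is no longer written to, so it is safe to copy.
yesterday = datetime.date.today() - datetime.timedelta(days=1)
index = "logs-%s" % yesterday.strftime("%Y.%m.%d")

# Flush first so everything in the transaction log is committed to disk.
requests.post("%s/%s/_flush" % (ES, index))

# Ship the on-disk files to the DR site; for simplicity this copies the
# whole data directory rather than picking out the single index.
subprocess.check_call(["rsync", "-a", DATA_DIR + "/", DR_TARGET])
```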

Regards,
Berkay Mollamustafaoglu
mberkay on yahoo, google and skype


Also, there are aspects of indexing you can tune to both reduce the index size and improve indexing speed. In terms of what gets indexed / stored, the important ones are whether you store the _source (the actual document indexed) and, if so, whether it is compressed, and whether you use _all (if you disable it, you get faster indexing and a smaller index size); see the _source and _all sections of the mapping documentation on the Elasticsearch site.

Also, once a time-based index is done with, it's good practice to optimize it down to 2-4 segments, which means fewer resources and a smaller index size. This can be done using the optimize API, setting max_num_segments to 4 (for example).
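
A rough sketch of both settings plus the optimize call (Python with `requests`, against the 0.x-era APIs being discussed here; the index and type names are examples):

```python
import requests

ES = "http://localhost:9200"
INDEX = "logs-2012.02.12"   # example daily index

# Create the index with a mapping that compresses _source and disables _all.
requests.put("%s/%s" % (ES, INDEX), json={
    "mappings": {
        "log": {                             # example type name
            "_source": {"compress": True},   # smaller index on disk
            "_all": {"enabled": False},      # faster indexing, smaller index
        }
    }
})

# ... index into it for the day ...

# Once the index is no longer written to, merge it down to a few segments.
requests.post("%s/%s/_optimize" % (ES, INDEX), params={"max_num_segments": 4})
```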


Hi Raaka,

I'm really interested in how this will turn out for you because I'm
working on a similar project. I'm expecting similar numbers to yours
pretty soon. The only difference is that we're using rsyslog for now,
instead of Flume. By the way, rsyslog has plugins for both
Elasticsearch and HDFS, in case you're interested :)

Here's our basic setup for now:

  • one index per day of logs (we'll see if we need finer granularity),
    so that we can simply delete old indexes at night pretty much
    instantly
  • source compression (as Shay already suggested)
  • the rest of the settings were default (gateway was local file
    system, 5 shards per index)

The first test was on a high-end laptop (i7 @ 2GHz, 8GB RAM, 7200 RPM HDD),
and I could get some 5-7K inserts per second using a Python script
that just read log lines from /var/log/messages, parsed them,
and did bulk inserts.
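
For reference, a sketch of the kind of parsing involved (the regex and field names are guesses at the classic syslog format, not the script actually used); the resulting dicts would then go through the bulk API as sketched earlier in the thread:

```python
import re

# Rough pattern for classic /var/log/messages lines, e.g.
# "Feb 12 06:25:01 app01 CRON[1234]: (root) CMD (run-parts ...)"
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s"
    r"(?P<host>\S+)\s"
    r"(?P<program>[^:\[]+)(?:\[(?P<pid>\d+)\])?:\s"
    r"(?P<message>.*)$"
)

def parse_line(line):
    """Turn one syslog line into a dict suitable for indexing, or None."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else None

with open("/var/log/messages") as f:
    docs = [d for d in (parse_line(l.rstrip("\n")) for l in f) if d]
```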

As for searching, it was pretty much instant up to about 100M documents,
and it was "acceptable" - as in up to 6 seconds - with some 170M
documents. I remember the index size was 49GB, so I was impressed.

On a cluster, I would need to account for one replica per shard (for fault
tolerance), and my calculation was that I would get about 1kB of index space
per log line. I guess that's largely thanks to source compression. Maybe
your log lines are bigger, but this is still before disabling _all (as
suggested by Shay).

We will also need to span multiple datacenters, and the initial idea
was to have a single Elasticsearch cluster spanning all datacenters,
use shard allocation filtering to keep each datacenter's logs on local
nodes, and then search globally from a central interface.
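
A sketch of how that filtering might look (Python with `requests`; the "datacenter" attribute name and values are illustrative, and each node would need a matching attribute such as node.datacenter: dc1 in its elasticsearch.yml):

```python
import requests

ES = "http://localhost:9200"

# Pin the shards of a datacenter-specific daily index to nodes tagged with
# that datacenter, so each DC's logs stay local while the cluster is global.
requests.put("%s/logs-dc1-2012.02.12/_settings" % ES, json={
    "index.routing.allocation.include.datacenter": "dc1"
})
```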

This is all work in progress, so I'll keep an eye on this thread. Any
feedback is greatly appreciated :)

Best regards,
Radu
