Performance problems

Hi everyone.

I'm running elasticsearch-0.9 on a cluster of 5 EC2 instances
(Cluster Compute Quadruple Extra Large) with one of them configured
as a frontend (data: false). The index contains 10M documents in
4 shards with 2 replicas. The total stored size is 23GB and each node
has 23GB of RAM, so it fits in memory without any problems.
Using JMeter as the testing tool I'm getting as little as ~150
requests per second (that's with 4 worker nodes of 8 CPU cores each).
I've checked with iostat that there's no disk activity on the cluster
nodes, and I see CPU utilization of only 5-10% on the worker nodes
during the performance testing. That looks somewhat strange to me.

I measured performance on the same cluster when the index contained
only 5M documents. I got nearly 1500 requests per second and CPU
utilization on the worker nodes was close to 90%. After importing
another million documents, performance started to degrade very
rapidly.

Could anyone help me with this problem? I'm completely out of ideas
now.

My queries are nothing complex: a single keyword search with a bunch
of filters attached, plus faceting on some fields. A typical query
looks like this:

{
    "query": {
        "filtered": {
            "query": {
                "query_string": {
                    "fields": [
                        "keywords.original_keywords^2",
                        "keywords.keywords"
                    ],
                    "query": "bright"
                }
            },
            "filter": {
                "and": {
                    "filters": [
                        { "term": { "content.is_offensive": false } },
                        { "term": { "licenses.extended": true } },
                        { "term": { "image.isolated": false } },
                        { "term": { "content.orientation": "horizontal" } },
                        { "term": { "categories.conceptual.depth2": "793" } }
                    ]
                }
            }
        }
    },
    "sort": "online.rating",
    "facets": {
        "representative_categories": {
            "terms": { "field": "categories.representative.depth2", "size": 100 }
        },
        "conceptual_categories": {
            "terms": { "field": "categories.conceptual.depth2", "size": 100 }
        },
        "licenses": {
            "terms": { "field": "licenses.size", "size": 100 }
        },
        "prices": {
            "histogram": { "field": "prices.min", "interval": 1 }
        }
    }
}

Hi,

Is there a chance that the response you get is really large? It seems
like you are getting large result sets for the facets (I'm not sure
about the histogram facet with an interval of 1 for price; it depends
on the range of it). Can you try starting with a simple query (no
filters, no facets) and slowly adding more to the search request? How
much memory do you assign to each node? It's very strange that moving
from 5M docs to 6M docs suddenly gives such different results, unless
that extra 1M causes the facets to "explode" with the data they
return.
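
For example, a stripped-down request to start from might look like
this (a sketch, assuming the default HTTP port and reusing the fields
from your query):

    curl -XPOST 'http://localhost:9200/_search' -d '{
        "query": {
            "query_string": {
                "fields": ["keywords.original_keywords^2", "keywords.keywords"],
                "query": "bright"
            }
        }
    }'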

-shay.banon


One more thing: I would use the 5th node as a data node as well; it's
a shame for it not to share the load.


A typical 'total' is less than 100K items. Prices range between 1 and
10, so I don't think that can cause much trouble. I'll try removing
the faceting now and see how it goes.
I'm a bit confused by your question about memory. I didn't assign any
memory to the nodes, they just run as they are. This kind of EC2
instance has 23GB of memory, if that's what you mean.
The data I've been uploading to the index is very uniform. In fact it
is randomly generated and shouldn't cause any kind of statistical
explosion. But I've certainly seen this fast degradation between 5M
and 6M. Very strange; it looks like something in my setup is very
broken.

BTW, I'm getting the following messages in the log files:

[14:11:14,211][INFO ][monitor.memory.alpha ] [Sangre] [5]
[Full ] Ran after [2] consecutive clean swipes, memory_to_clean
[155.7mb], lower_memory_threshold [809mb], upper_memory_threshold
[960.6mb], used_memory [964.7mb], total_memory[1011.2mb],
max_memory[1011.2mb]
[14:52:06,191][INFO ][monitor.memory.alpha ] [Sangre] [6]
[Full ] Ran after [2] consecutive clean swipes, memory_to_clean
[179.2mb], lower_memory_threshold [809mb], upper_memory_threshold
[960.6mb], used_memory [988.2mb], total_memory[1011.2mb],
max_memory[1011.2mb]

Is everything OK with that? The "total_memory[1011.2mb],
max_memory[1011.2mb]" part is confusing me: why is it so small?


The EC2 setup is just for testing. We'll run the system on our own
hardware, and the frontend node will be much weaker than the worker
nodes. I thought that was the recommended use case:
http://www.elasticsearch.com/docs/elasticsearch/modules/node/data_node/
Do you think we'll do better with a homogeneous cluster? If so, I'll
try that too.
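
For reference, the frontend node is configured along the lines of that
doc (a sketch, assuming the YAML config format):

    # config/elasticsearch.yml on the frontend node: hold no index
    # data, only coordinate requests.
    node:
        data: false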


Good, that's what I was concerned about. When you run a Java virtual
machine, you assign memory to it and it only consumes as much memory
as you give it. By default, it is set to use 1g max memory. Certainly,
with your machines, you can increase that quite significantly; I would
say go to 10G and see how it goes (you want to leave memory for the
file system cache as well, and too-large heaps can cause the JVM to
hiccup). How to set the max memory is explained here:
http://www.elasticsearch.com/docs/elasticsearch/setup/installation/.
For even better performance, set the minimum and the maximum to the
same value.
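
For example (a sketch; the exact path depends on how elasticsearch was
installed):

    # Fix the heap at 10G before starting the node, leaving the rest
    # of the 23GB of RAM for the operating system's file system cache.
    export ES_MIN_MEM=10g
    export ES_MAX_MEM=10g
    bin/elasticsearch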

One more thing: with your setup, if you use 2 replicas to try and
increase search performance, then 1 replica should do. If you use 2
replicas to increase availability, then that's fine.

One cool thing to check how the JVM is behaving is to use something
like visualvm to hook into it and check the memory consumption and GC
activity. All that information is already exposed in the node stats
API, and once I get around to building a nice management app for
elasticsearch, it will be exposed there through the REST API.
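
If a GUI is not handy on the EC2 nodes, something similar can be done
from the shell (a sketch; <pid> is the elasticsearch process id, and
jstat ships with the JDK):

    # Print heap occupancy and GC counts every 5 seconds.
    jstat -gcutil <pid> 5000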

-shay.banon


It depends on how you index the data. If you do the load balancing
yourself among the nodes (either using the native JVM client, or
building on top of the exposed API like Elasticsearch.pm does), then
most times it makes sense to go with all data nodes.

-shay.banon


Ok, I've set both the ES_MIN_MEM and ES_MAX_MEM variables to 10g.
Performance increased to ~300 requests per second and I don't see any
garbage collection notifications in the logs anymore. CPU load on the
worker nodes is still very low, only 20% at most. Are there any other
parameters that can be tuned? Maybe some cache or buffer sizes? I need
to get to 1000 requests per second before moving to the production
deployment.
I can add more servers to the pool, but I have a feeling that four
quite powerful machines should have enough capacity for it.


I'm intrigued by the choice to have only 4 shards across your 5 machines.

Surely at least one primary shard per core would give a performance increase by distributing the search load?

Andrew


Among my 5 machines only 4 act as data nodes. One node performs
solely as a frontend, as described in
http://www.elasticsearch.com/docs/elasticsearch/modules/node/data_node/.
So I have one primary shard per data node.
Do you suggest that each CPU core should have its own shard? I have
64 cores in total on the data nodes; wouldn't 64 shards be too many?


I'm not recommending anything at this point, but in my experience increasing the shard count helped with performance. This was across 3 large instances on EC2. So long as you've got enough memory configured (and now you do), it's probably worth trying. Of course, with the old gateway format it caused issues with hitting the limit on S3 buckets, but if you're building a local production environment, I'd expect you're going to be using something like NFS for that anyway.
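
For example, recreating the index with a higher shard count would look
something like this (a sketch; the index name and the counts are only
illustrative, and since the shard count is fixed at index creation the
data would have to be reindexed):

    curl -XPUT 'http://localhost:9200/images/' -d '{
        "index": {
            "number_of_shards": 16,
            "number_of_replicas": 1
        }
    }'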

Andrew

On 11/08/2010, at 9:07 AM, rmihael wrote:

Among my 5 machines only 4 acts as data nodes. One node is performing
solely as frontend, as described in
http://www.elasticsearch.com/docs/elasticsearch/modules/node/data_node/.
So I have one primary shard per each data node.
Do you suggest that each CPU core should have it's own shard? I have
64 cores in total on data nodes, wouldn't it be too much to make 64
shards?

On Aug 11, 2:02 am, Andrew Harvey Andrew.Har...@lexer.com.au wrote:

I'm intrigued by the choice to only have 4 shards across your 5 machines.

Surely at least one primary shard per core would see a performance increase in distributing the search load?

Andrew

On 11/08/2010, at 9:00 AM, rmihael wrote:

Ok, I've set both ES_MIN_MEM and ES_MAX_MEM variables to 10g.
Performance increased to ~300 requests per second and I don't see any
garbage collection notifications in logs. CPU load of worker nodes
still very low -- only 20% at most. It there any other parameters that
can be tuned? May be some cache or buffer sizes? I need to get 1000
requests per second before starting move to production deployment.
I can add more servers to the pool but I have a feeling that four
quite powerful machines should have enough capacity for it.

On Aug 11, 1:22 am, Shay Banon shay.ba...@elasticsearch.com wrote:

Good, thats what I was concerned about. When you run a Java virtual machine,
you assign memory to it and it only consumes as much memory as you give it.
By default, it is set to use 1g max memory. Certainly, with your machine,
you can increase that quite significantly, I would say do 10G and see how it
goes (you want to leave memory also for file system cache, and too large
heaps can cause the JVM to hiccup). How to set the max memory is explained
here:http://www.elasticsearch.com/docs/elasticsearch/setup/installation/.
For even better performance, set the minimum and the maximum to the same
value.

One more thing, with your setup, if you use 2 replicas to try and increase
the search performance, then 1 replicas should do. If you use 2 replicas to
increase the availability aspect, then thats fine.

One cool thing to check how the JVM is behaving is to use something like
visualvm to hook into it and check the memory consumption and GC activity.
All that information is already exposed in the node stats API, and once I
get around to build a nice management app for elasticsearch, it will be
exposed there through the REST API.

-shay.banon

On Wed, Aug 11, 2010 at 1:11 AM, rmihael rmih...@gmail.com wrote:

Typical 'total' is less then 100K items. Price ranging between 1 and
10, so I don't think it can cause much problems. I'll try to remove
faceting now and see how it goes.
I'm bit confused with your question about memory. I didn't assigned
any memory to nodes, it just runs as it is. This kind of EC2 instance
have 23GB of memory if you mean it.
Data I've been uploading to index are very uniform. In fact they are
randomly generated and should cause any kind of statistical explosion.
But I've certainly got this fast degradation between 5M and 6M. Very
strange, looks like something in my setup is very broken.

BTW, I'm getting the following messages in log files:

[14:11:14,211][INFO ][monitor.memory.alpha ] [Sangre] [5]
[Full ] Ran after [2] consecutive clean swipes, memory_to_clean
[155.7mb], lower_memory_threshold [809mb], upper_memory_threshold
[960.6mb], used_memory [964.7mb], total_memory[1011.2mb],
max_memory[1011.2mb]
[14:52:06,191][INFO ][monitor.memory.alpha ] [Sangre] [6]
[Full ] Ran after [2] consecutive clean swipes, memory_to_clean
[179.2mb], lower_memory_threshold [809mb], upper_memory_threshold
[960.6mb], used_memory [988.2mb], total_memory[1011.2mb],
max_memory[1011.2mb]

Is everything OK with that? The "total_memory[1011.2mb],
max_memory[1011.2mb]" part is confusing me -- why is it so small?


Can you follow what I suggested before: first start with a simple query,
and then slowly add filters and facets, and see when the
performance degradation shows up?
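
For example, start from just the query part of your original request (no filter, no sort, no facets) and add the pieces back one at a time:

{
    "query": {
        "query_string": {
            "fields": ["keywords.original_keywords^2", "keywords.keywords"],
            "query": "bright"
        }
    }
}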

-shay.banon
