Output of the nodes JVM info (gist: nodes jvm · GitHub):
"jvm" : {
"pid" : 8064,
"version" :
"1.7.0_21",
"vm_name" :
"Java HotSpot(TM) 64-Bit Server VM",
"vm_version"
: "23.21-b01",
"vm_vendor" :
"Oracle Corporation",
"start_time"
: 1370391059981,
"mem" : {
"heap_init" : "32gb",
"heap_init_in_bytes" : 34359738368,
"heap_max"
: "31.8gb",
"heap_max_in_bytes" : 34202714112,
"non_heap_init" : "23.1mb",
"non_heap_init_in_bytes" : 24313856,
"non_heap_max" : "130mb",
"non_heap_max_in_bytes" : 136314880,
"direct_max" : "31.8gb",
"direct_max_in_bytes" : 34202714112
}
}
On Wednesday, June 5, 2013 2:06:14 PM UTC-7, Matt Weber wrote:
OK, that version number looked weird to me. What does the nodes JVM info show?
curl -XGET 'http://localhost:9200/_nodes/jvm?pretty=true'
On Wed, Jun 5, 2013 at 1:57 PM, Han hradus...@gmail.com wrote:
Nope, it's the official Oracle JDK 7.
On Wednesday, June 5, 2013 1:54:24 PM UTC-7, Matt Weber wrote:
Is that OpenJDK? If yes, you should give the latest official Oracle JDK
7 a try. There have been quite a few issues like this popping up and the
common theme seems to be OpenJDK.
On Wed, Jun 5, 2013 at 1:40 PM, Han hradus...@gmail.com wrote:
We are using version 7.0.210.11
Regarding sorting, most of the time we are using the default sorting
provided by ES (which is sorting by score); on very few queries we do have
sorting based on a couple of numeric fields.
On Wednesday, June 5, 2013 1:25:58 PM UTC-7, Martijn v Groningen wrote:
I'm not 100% sure, but in general it is a waste to allocate a big heap
and not use it, while that memory could still be used (by the filesystem
cache) if it were not allocated to ES. I also expect garbage collections
to be faster with smaller JVM heaps. Btw, what Java version are you using?
Are you sorting by score or a field?
On 5 June 2013 22:14, Han hradus...@gmail.com wrote:
Will try that, but do you think it's due to not having enough memory
for the Lucene filesystem cache? We have a total of 64gb of memory on each
node and we have allocated half of it (32gb) to ES_HEAP_SPACE.
We do not have faceting or script but we do have "sorting" in our
queries.
On Wednesday, June 5, 2013 12:59:24 PM UTC-7, Martijn v Groningen
wrote:
The actual heap usage (at most 3.9GB) is way lower than the
allocated heap. I assume you're not using faceting, scripts, or sorting by a
field, right? If that is the case I'd lower ES_HEAP_SPACE to something
like 5GB. This way you give the filesystem cache more space. Lucene (the
underlying search library that ES uses) depends a lot on the filesystem
cache to execute queries. The more space is available in the filesystem
cache, the more Lucene index files end up in it, and this results in
faster queries.
On 5 June 2013 21:19, Han hradus...@gmail.com wrote:
Here is the gist of heap usage:
https://gist.github.com/anonymous/5716142
Also, we have already enabled term vectors on the fields that we
are highlighting.
On Wednesday, June 5, 2013 11:46:05 AM UTC-7, Han wrote:
Thanks Martijn, I will look at the "bool" filter and see if we can
upgrade to 0.90.1; I will keep you posted.
Here is the gist of heap usage:
https://gist.github.com/anonymous/5716142
Let me know if you notice anything weird.
On Wednesday, June 5, 2013 11:40:23 AM UTC-7, Martijn v Groningen
wrote:
- What kind of searches are you executing? If possible, can
you perhaps share examples of your queries via a gist?
Here is the gist of our query; most of our queries are like this
with a little bit of changes.
https://gist.github.com/anonymous/5715994
I see that you use the top level filter. Unless you are also
using facets (which is not the case here), I would recommend putting all
filters in the filtered query. Also, if you upgrade to version 0.90.1, I
would use the bool filter over the and, or and not filters in your
case. This will most likely execute your query in a more efficient manner.
In 0.90.0 there is a bug in the bool filter.
Are you highlighting on large fields? If so I would maybe enable
term vectors ("term_vector" : "with_positions_offsets") for these fields.
This will make your index larger, but highlighting will be much faster.
--
Met vriendelijke groet,
Martijn van Groningen
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.