Elasticsearch with a 90m doc index limit?

So after resolving my issue importing from couchdb I am seeing a new one.
After importing about 90 million docs over 9 hours, Elasticsearch suddenly
stops importing. With TRACE logging for couchdb I can see the river is
still sending data, but my _seq is not changing. We have about 200 million
docs in couchdb, and while I can use a filter to split them out across
different indices, I can see cases where we would have more than 90
million docs per index, so I am betting there is some configuration I am
missing. I am open to ideas on what I need to tweak.

Thanks
Zuhaib

--

After starting over again I get the same result. This time I set logging
to DEBUG and I get the following:

[2012-08-24 01:14:09,490][DEBUG][index.merge.scheduler ] [search-e1]
[history][9] merge [_1ox] done, took [1m]
[2012-08-24 01:14:11,899][DEBUG][index.merge.scheduler ] [search-e1]
[history][5] merge [_1pc] done, took [23.6s]
[2012-08-24 01:14:34,954][DEBUG][index.merge.scheduler ] [search-e1]
[history][7] merge [_1ph] done, took [21.6s]

curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'
{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 20,
"active_shards" : 20,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}

And then it stops indexing as you can see from the screenshot attached.


Hey,

Could you send along any configuration you've done, as well as any
information about your system and how many nodes are in your cluster?
An alternative would be to create an index per 'series' or 'flow' of
data (so if it's monthly data, perhaps an index per month, or
whatever), and then use aliases to create groupings of indices;
without more information on your system or data, it's hard to suggest
alternatives. In the meanwhile, let's work out why it's hitting a
'document limit'.
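As a rough sketch of what I mean (the index and alias names below are just placeholders, and this assumes the standard aliases API):

```shell
# Create one index per month (names are hypothetical)
curl -XPUT 'http://127.0.0.1:9200/history-2012-07'
curl -XPUT 'http://127.0.0.1:9200/history-2012-08'

# Group them under a single alias so searches can still hit one name
curl -XPOST 'http://127.0.0.1:9200/_aliases' -d '{
  "actions" : [
    { "add" : { "index" : "history-2012-07", "alias" : "history" } },
    { "add" : { "index" : "history-2012-08", "alias" : "history" } }
  ]
}'
```

Searching against the 'history' alias then fans out across the monthly indices, while old months can be dropped or closed individually.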

Patrick


patrick eefy net


Patrick,

Sure, so this is an EC2 m2.4xlarge instance. I am using only a single node
with no replication to load the data; the plan is to add more nodes and
replication once I am in sync, or near sync, with couchdb. As for
configuration, this is the elasticsearch.yml:

################################### Cluster
###################################

cluster.name: 'elasticsearch'
node.name: 'search-e1'

#################################### Index
####################################

index.number_of_shards: 10
index.number_of_replicas: 0
action.auto_create_index: true
index.mapper.dynamic: true

#################################### Paths
####################################

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

################################### Memory
####################################

bootstrap.mlockall: false

################################### Varia
#####################################

action.disable_delete_all_indices: true

(Note: path.data is symlinked to an EBS mount; I am changing my chef
configs to point to it directly in the future.) I disabled
refresh_interval, as is recommended when importing a large amount of
data. I am using the couchdb river with the following configuration:

{
  "type" : "couchdb",
  "couchdb" : {
    "host" : "localhost",
    "port" : 5984,
    "db" : "history",
    "filter" : null
  },
  "index" : {
    "index" : "history",
    "type" : "history",
    "bulk_size" : "400",
    "bulk_timeout" : "40ms"
  }
}
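(For completeness, I registered the river by PUTting that JSON as its _meta document, and disabled refresh roughly like this; the commands below are from memory, so details may differ:)

```shell
# Register the couchdb river (the JSON above goes in the request body)
curl -XPUT 'http://127.0.0.1:9200/_river/history/_meta' -d @river-config.json

# Disable refresh on the target index during the bulk import
curl -XPUT 'http://127.0.0.1:9200/history/_settings' -d '{
  "index" : { "refresh_interval" : "-1" }
}'
```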

I did have couchdb on a remote box at one point but still had the same
problem. This is just a snapshot copy of our couchdb data for testing;
when we go live it will update off a read-only replica.

I could add more nodes, but I wonder why that would make a difference when
I am throwing so much horsepower at this one instance. I can do some
filtering, but I was looking for a drop-in replacement for couchdb-lucene;
in small VM testing it works great, it just falls apart on production data.

Thanks
Zuhaib

Hi Zuhaib,

Great! So, I see a few places where I could suggest improvements that
may help with your indexing speed and overall response time, and
perhaps with your document-count problem. The biggest thing that
stands out is the 'kaggillion shard problem' (tm kimchy): you're
battering that server with multiple shards on the same disks, and it's
no doubt having trouble keeping up with the work to all indices at the
same time. In a perfect world you would keep the number of shards
close to the number of nodes in your cluster (depending on workload
and data flow/shape), but because you cannot add shards after index
creation, a lot of people create many shards up front to 'capacity
plan' for the future, and then pay the shard penalty from day 1. This
should be avoided if at all possible, in favour of aliases, routing,
and a flow that makes sense for your data. Have you watched any of the
videos from the last BerlinBuzzwords? Shay did a great talk that may
explain some of this far better than I can in an email.

The first part would be to use the local disks as instance storage for
your Elasticsearch nodes, and use EBS, or even S3, as your 'gateway'.
If you're just doing this for indexing, the faster you can index the
better, and local disks give you faster access without the latency
penalty. This would let you restore your cluster quickly from the
gateway and keep backups, either as snapshots to S3 or direct storage
in S3. You can then shut your cluster down later and restore from the
gateway should you so wish; the gateway ensures persistence.

The second is that you're paying a LOT for that instance right now,
and you're not going to see the same results as you would from more,
smaller instances. Changing to that sort of model should (combined
with the local storage change) produce a significant change in the
speed of the overall solution. Shards need to be spread out to gain
from them, so why don't you try 'large' or even 'extra-large'
instances? Just try to keep the number higher than 1 :wink:

The third part is the 90m document wall. I don't know of a limit
myself that would cap you at 90 million documents, but that said, in
my experience there is more to be gained from creating smaller indices
with fewer shards, and using routing, aliases, and filters to hit the
part of the document tree that makes sense for your workload. To keep
your searches as fast as possible, you want to help ES by hitting as
few shards/indices as possible; if that isn't possible, you can of
course throw more hardware at it.
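As an illustration of the routing idea (the field names and routing value here are made up):

```shell
# Index a document with an explicit routing value, so all docs for one
# conversation land on the same shard
curl -XPUT 'http://127.0.0.1:9200/history/history/1?routing=room-42' -d '{
  "body" : "hello", "to" : "alice", "from" : "bob"
}'

# Search with the same routing value; only the matching shard is queried
curl -XGET 'http://127.0.0.1:9200/history/history/_search?routing=room-42' -d '{
  "query" : { "match_all" : {} }
}'
```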

Can you gist a copy of an example document, perhaps, and describe how
you're importing your documents? Do you plan on using the couchdb
river?

Patrick


patrick eefy net


Patrick,

Let me see if I can get an example doc, but it's pretty simple, as you
can imagine: think what you would need for chat history searching (body,
to, from, etc.) with some extra stuff we do. Nothing fancy like geo data;
the only tweaking I do to the mapping is to multi-type the date, as I had
issues sorting by date at first.

A lot of what you said is what I originally planned. The design is going
to be 4 Elasticsearch boxes, all m1.xlarges, using S3. Going with one
large box and dropping S3 was purely a troubleshooting step, since I ran
into problems with indexing speed pulling from couchdb, and now this
90 million limit it seems to hit. As for EBS, I am using the high-IO
drives and I see them hovering around 800 to 1000 ops/s. From my
screenshot you can see the box was busy doing a lot of work and then
suddenly goes idle, and the log confirms it is doing nothing. If the
system or drives could not keep up, I would expect it to keep trying to
process the data and hitting some sort of wall; instead it is simply
idle. If I put the couchdb river log in TRACE I see it is still pulling
data from couchdb but doing nothing with it: not indexing it or even
batching it up, and that is what bothers me. As far as I understand, the
couchdb river only runs on a single node, and yes, the plan was to stick
with it.

I understand that breaking up the data would be best, and we already have
some logical partitions I can think of to implement; I just wonder what
happens if each partition grows very large. Our current projections say
that by this time next week we should have 2 billion chat history docs to
search over. The plan was to move to this and address the problem later,
but I think I need to do some more planning on it. I will search for
those videos and give them a watch.

Thanks
Zuhaib


I'm just wondering if your problem is related to your document content,
if ES stops each time at the same document...

Or perhaps the _changes API gives a sequence number that is too high for ES. I don't remember offhand which Java type the river uses to hold the sequence (lastseq).

I suggest that you identify the lastseq value in the couchdb river metadata and report it here. Then try to get the document content from couchdb for this sequence number (using the _changes API), and fetch the sequence+1 document as well.
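Something like this should show it (I am writing this from memory, so the exact paths may differ slightly):

```shell
# Read the last sequence the river has checkpointed (stored as the
# _seq document of the river, in the _river index)
curl -XGET 'http://127.0.0.1:9200/_river/history/_seq?pretty=true'

# Then ask CouchDB for the changes just around that point
# (replace LASTSEQ with the value you got back)
curl 'http://localhost:5984/history/_changes?since=LASTSEQ&limit=2&include_docs=true'
```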

Perhaps there's something weird with your document (an encoding problem...).

HTH

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

Le 24 août 2012 à 05:36, Zuhaib Siddique zsiddique@atlassian.com a écrit :

Patrick,

Let me see if I can get a example doc but its pretty simple as you can imagine, think what you would need for chat history searching (body, to, from etc) with some extra stuff we do. Nothing fancy like geo data and etc, the only tweaking I do to the mapping is multitype the date as I had issues sorting by date at first.

A lot of what you said is what i originally planned. The design is going to be 4 elasticsearch boxes all m1.xlarges using S3. I decided to go with one large box and now remove S3 was purely as a troubleshooting method as I ran in problems with index speed pulling from couchdb and now this 90million limit it seems to hit. As for EBS i am using the high IO drives and I see them hovering around 800 to 1000 op/s. You can see from my screenshot you see the box was busy doing a lot of work and then suddenly it goes idle and the log confirms that is doing nothing. I would expect that if the system or drive was not able to keep up the system would try to process the data but hitting some sort of wall, currently its idle. Now if i put the couchdb river log in TRACE i see that its still pulling data from couchdb but its doing nothing with it, its not indexing it or even bulking it up and that is what bothers me. And as far as my understanding with couchdb-river it only runs on a single node and yes the plan was to stick with couchdb-river.

I understand that breaking up the data would be best and we already have some logically partitions I can think of to implement, I just wonder what would happens if each partition grows very large, our current projects says by this time next week we should have 2 billion chat history to search over. The plan was move to this and then address the problem later but I think I need to do some more planing in to this. I will google search for those videos and give them a watch.

Thanks
Zuhaib

On Thu, Aug 23, 2012 at 8:09 PM, Patrick patrick@eefy.net wrote:
Hi Zuhaib,

Great! So, I see a few places where I could suggest improvements
initially that may help with your speed of indexing, as well as your
overall response time currently, and may perhaps help with your number
of document problem. The biggest thing that stands out is the
'kaggillion shard problem' (tm kimchy), which is essentially you're
battering that server with multiple shards on the same disks, and it's
no doubt having problems keeping up with the work to all indicies at
the same time. In a perfect world you would keep the number of shards
to the number of nodes you have in your cluster (depending on
workload, and data flow/shape), but because you cannot add additional
shards post creation, alot of people tend to create a many-shard
approach to attempt to 'capacity plan' in future, and which then pays
the shard penalty from day 1. This should be avoided if at all
possible, and things like aliases, routing, and a flow that makes
sense for your data preferred. Have you watched any of the videos from
the last BerlinBuzzwords? Shay did a great talk that may help explain
some of this far better than I can in an email.

The first part would be to use the local disks as your instance
storage for your elastic search nodes, and use EBS as your 'gateway',
or even S3. If you're just doing this for indexing, the faster you can
do the indexing, the better, and you can get faster access to the
local disks, and not pay a latency penalty. This would allow you to
restore your cluster quickly from your gateway, and keep backups with
either snaps to S3, or direct storage in S3. You can then shut your
cluster down later, and restore from gateway should you so wish, but
the gateway will ensure persistence.

The second is that you're paying a LOT for that instance right now,
and you're not going to see the same results as if you built say
smaller instances, and more of them. Changing to that sort of a model
should (when added to the local storage change) see a significant
change in the speeds of the overall solution. Shards need to be spread
out to gain from them, so why don't you give it a try on 'large'
instances, or even 'extra-large' instances? Just, try keep the number
higher than 1 :wink:

The third part would be the 90m document hit, I don't know of a limit
myself that would keep you to 90 million documents, but that said, it
seems like a bad idea, and in my experience, there is more to be
gained from creating smaller indices with less shards, and using
routing, aliases and filters to hit the document tree that makes sense
for your workload, what you really want to keep your searches as fast
as possible is to help ES by hitting as few shards / indices as
possible, but of course if that isn't possible, you can throw more
hardware at it.

Can you gist a copy of an example document perhaps? and describe how
you're importing your documents? Do you plan on using the couchdb
river?

Patrick

http://about.me/patrick.ancillotti
patrick eefy net

On Thu, Aug 23, 2012 at 8:16 PM, Zuhaib Siddique
zsiddique@atlassian.com wrote:

Patrick,

Sure so this is an EC2 Instance m2.4xlarge. I am using only a single node
currently with no replication to load the data on and plan is to add more
nodes and replication once I am in sync or near sync with couchdb. As for
configuration this this the elasitcsearch.yml:

################################### Cluster
###################################

cluster.name: 'elasticsearch'
node.name: 'search-e1'

#################################### Index
####################################

index.number_of_shards: 10
index.number_of_replicas: 0
action.auto_create_index: true
index.mapper.dynamic: true

#################################### Paths
####################################

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

################################### Memory
####################################

bootstrap.mlockall: false

################################### Varia
#####################################

action.disable_delete_all_indices: true

(note that path.data is symlinked to an EBS mount; I am changing my chef
configs to point to it directly in the future). I disabled refresh_interval
as is recommended when importing a large amount of data. I am using the
couchdb river, and its configuration is the following:

{
  "type" : "couchdb",
  "couchdb" : {
    "host" : "localhost",
    "port" : 5984,
    "db" : "history",
    "filter" : null
  },
  "index" : {
    "index" : "history",
    "type" : "history",
    "bulk_size" : "400",
    "bulk_timeout" : "40ms"
  }
}
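For reference, the refresh_interval change mentioned above is a live, per-index setting; a sketch using the update-settings API, assuming the `history` index from the river config:

```json
{
  "index" : {
    "refresh_interval" : "-1"
  }
}
```

PUT this to `/history/_settings` before the bulk load (`-1` disables refreshing entirely), then set it back to something like `1s` when the import finishes.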

I did have couchdb at one point on a remote box but still had the same
problem. This is just a snapshot copy of our couchdb data for testing; when
we go live I will have it update off a read-only replica.

I could add more nodes, but I wonder why that would make a difference when I
am throwing so much HP at it with this one instance. I can do some filters,
but currently I was looking for a drop-in replacement for couchdb-lucene,
and in small VM testing it works great; it's just when going to production
data that it falls apart.

Thanks
Zuhaib
On Thu, Aug 23, 2012 at 6:34 PM, Patrick patrick@eefy.net wrote:

Hey,

Could you perhaps send along any configuration you've done, as well as
any information you can provide about your system, and how many nodes
are in your cluster? An alternative for you would be to create an
index per 'series' or 'flow' of data (so if it's monthly data,
perhaps an index per month), and then use aliases to create groupings
of indices; but without more information on your system or data, it'd
be hard to suggest alternatives. In the meanwhile, let's work out why
it's hitting a 'document limit'.
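The index-per-month idea can be sketched with the `_aliases` API; all names below are illustrative:

```json
{
  "actions" : [
    { "add" : { "index" : "history-2012-07", "alias" : "history-all" } },
    { "add" : { "index" : "history-2012-08", "alias" : "history-all" } }
  ]
}
```

Searches hit the `history-all` alias and span every grouped month, while indexing can target the current month's index directly.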

Patrick

http://about.me/patrick.ancillotti
patrick eefy net

On Thu, Aug 23, 2012 at 7:26 PM, zuhaib zsiddique@atlassian.com wrote:

After starting over again I get the same result. This time I set logging
to DEBUG and I get the following:

[2012-08-24 01:14:09,490][DEBUG][index.merge.scheduler ] [search-e1]
[history][9] merge [_1ox] done, took [1m]
[2012-08-24 01:14:11,899][DEBUG][index.merge.scheduler ] [search-e1]
[history][5] merge [_1pc] done, took [23.6s]
[2012-08-24 01:14:34,954][DEBUG][index.merge.scheduler ] [search-e1]
[history][7] merge [_1ph] done, took [21.6s]

curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 20,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}

And then it stops indexing as you can see from the screenshot attached.

On Thursday, August 23, 2012 9:39:04 AM UTC-7, zuhaib wrote:

So after resolving my issue with importing from couchdb, I am seeing a
new issue. After importing about 90 million docs over 9 hours,
Elasticsearch suddenly stops importing the information. Using TRACE for
couchdb I see the river is still sending the data but my _seq is not
changing. We have about 200 million docs in couchdb, and while I can use
a filter to shard them out to different indices, I can see a case where
we have more than 90 million docs per index, so I am betting there is
some configuration I am missing and I am open to ideas on what I need
to tweak.

Thanks
Zuhaib

--

Hi Patrick

Great! So, I see a few places where I could suggest improvements that
may help with your indexing speed and your overall response time, and
perhaps with your document-count problem as well. The biggest thing that
stands out is the 'kagillion shard problem' (tm kimchy): essentially,
you're battering that server with multiple shards on the same disks, and
it's no doubt having trouble keeping up with the work to all indices at
the same time.

To be honest, I don't think 20 shards on one box could be considered a
kagillion shards. Sure, it's quite a few shards, but it's still
manageable.

Zuhaib - one thing you don't mention is your heap size. Also, you are
enabling bootstrap.mlockall, but have you also set ulimit -l unlimited
(and checked that it is taking effect)? And while we're at it, what
about ulimit -n?
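A quick way to check both limits from the shell that launches Elasticsearch (the limits.conf lines are an illustration and assume the process runs as user `elasticsearch`):

```shell
# Current limits in this shell:
ulimit -l   # max locked memory; must be 'unlimited' for mlockall to work
ulimit -n   # max open files; Lucene segment files can need tens of thousands

# To raise them persistently, lines like these go in /etc/security/limits.conf:
#   elasticsearch  -  memlock  unlimited
#   elasticsearch  -  nofile   65536
```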

The second is that you're paying a LOT for that instance right now,
and you're not going to see the same results as you would from
building, say, smaller instances, and more of them. Changing to that
sort of model should (when added to the local storage change) produce
a significant change in the speed of the overall solution. Shards need
to be spread out for you to gain from them, so why don't you give it a
try on 'large' instances, or even 'extra-large' instances? Just try to
keep the number higher than 1 ;)

For initial indexing, using a single node is not a bad idea. Before
going into production, I'd add more nodes. At that stage, the ready-made
segment files just get copied over to the new nodes, which is probably
faster than doing all the indexing on multiple nodes from the start.
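The add-replicas-later step is itself just a live settings change; a sketch, again assuming the index is called `history`:

```json
{
  "index" : {
    "number_of_replicas" : 1
  }
}
```

PUT to `/history/_settings` once the new nodes have joined, and the existing segment files are copied out to them in the background.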

The third part would be the 90m document hit, I don't know of a limit
myself that would keep you to 90 million documents,

Yeah, there isn't a limit, although you may be bumping into some other
limit, e.g. memory or file handles; but then I'd expect to see something
in your logs. Set your logging to DEBUG and see if you get any more
information.

How many documents are you expecting?

Are you absolutely sure that all of the IDs are unique? Something I
hear quite often with couchdb is that not all documents appear to have
been indexed, but then it turns out that there are ID clashes, so newer
documents just overwrite older ones.
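One rough way to test the ID-clash theory: export the `_id` values from CouchDB (e.g. via the `_all_docs` view) into a file, one per line, and look for repeats. The `ids.txt` contents below are sample data for illustration:

```shell
# Sample export; in practice this file would come from CouchDB's _all_docs.
printf 'doc-1\ndoc-2\ndoc-3\ndoc-1\n' > ids.txt

# Print every _id that occurs more than once; no output means no clashes.
sort ids.txt | uniq -d
```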

clint

--

David,

I was thinking the same thing, so when it stopped indexing I turned on
TRACE for the river logging and checked the data it pulled first, and it
looks normal, with normal encoding. The seq number is "seq":96967136,
but it doesn't always stop at the same one; that is just the one
I investigated last. Currently it stopped at 96967115.

Looking at the river source code it seems to be a string(?)

private class Slurper implements Runnable {
    @SuppressWarnings({"unchecked"})
    @Override
    public void run() {

        while (true) {
            if (closed) {
                return;
            }

            String lastSeq = null;
            try {

I would expect an error or something to be thrown if it had a bad encoding
or something; but, again, the data is totally searchable currently using
couchdb-lucene.

Clinton,
If you see the screenshot you will see I have a lot of heap memory :)
close to 43GB of heap, thanks to the m2.4xlarge instance. The ulimit
settings are also in that screenshot, and I confirmed them.
I am searching for the document ID and I don't see any conflicts on couchdb.

Zuhaib
On Thu, Aug 23, 2012 at 9:32 PM, David Pilato david@pilato.fr wrote:

I'm just wondering if your problem is related to your document content,
i.e. if ES stops each time at the same document...

Or perhaps the _changes API gives a sequence number which is too high for
ES. I don't remember offhand the Java type used by the river to deal with
the sequence (lastseq).

I suggest that you identify the lastseq value in the couchDb river
metadata and report it here. Then try to get the document content from
couchDb for this sequence number (use the _changes API), and fetch the
sequence+1 document as well.

Perhaps, there's something weird with your document (encoding problem...)

HTH

--
David ;)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 24 August 2012, at 05:36, Zuhaib Siddique zsiddique@atlassian.com
wrote:

Patrick,

Let me see if I can get an example doc, but it's pretty simple, as you
can imagine: think what you would need for chat history searching (body,
to, from, etc.), plus some extra stuff we do. Nothing fancy like geo data;
the only tweaking I do to the mapping is multi-typing the date, as I had
issues sorting by date at first.

A lot of what you said is what I originally planned. The design is going
to be 4 elasticsearch boxes, all m1.xlarges, using S3. Deciding to go with
one large box, and to remove S3, was purely a troubleshooting step, as I
ran into problems with index speed pulling from couchdb, and now this
90 million limit it seems to hit. As for EBS, I am using the high-IO drives
and I see them hovering around 800 to 1000 op/s. You can see from my
screenshot that the box was busy doing a lot of work and then suddenly
went idle, and the log confirms it is doing nothing. I would expect that
if the system or drive could not keep up, the system would still try to
process the data but hit some sort of wall; currently it's idle. Now, if
I put the couchdb river log in TRACE, I see that it's still pulling data
from couchdb but doing nothing with it; it's not indexing it, or even
bulking it up, and that is what bothers me. And as far as I understand,
the couchdb-river only runs on a single node, and yes, the plan was to
stick with the couchdb-river.

I understand that breaking up the data would be best, and we already have
some logical partitions I can think of to implement; I just wonder what
happens if each partition grows very large. Our current projections say
that by this time next week we should have 2 billion chat history docs to
search over. The plan was to move to this and then address the problem
later, but I think I need to do some more planning on it. I will search
Google for those videos and give them a watch.

Thanks
Zuhaib

On Thu, Aug 23, 2012 at 8:09 PM, Patrick patrick@eefy.net wrote:

Hi Zuhaib,

Great! So, I see a few places where I could suggest improvements that
may help with your indexing speed and your overall response time, and
perhaps with your document-count problem as well. The biggest thing that
stands out is the 'kagillion shard problem' (tm kimchy): essentially,
you're battering that server with multiple shards on the same disks, and
it's no doubt having trouble keeping up with the work to all indices at
the same time. In a perfect world you would keep the number of shards
close to the number of nodes in your cluster (depending on workload and
data flow/shape), but because you cannot add shards after index
creation, a lot of people create many shards up front to 'capacity
plan' for the future, and then pay the shard penalty from day 1. This
should be avoided if at all possible, in favour of aliases, routing,
and a flow that makes sense for your data. Have you watched any of the
videos from the last BerlinBuzzwords? Shay did a great talk that may
explain some of this far better than I can in an email.

The first part would be to use the local disks as your instance
storage for your elasticsearch nodes, and use EBS as your 'gateway',
or even S3. If you're just doing this for indexing, the faster you can
do the indexing the better, and you get faster access to the local
disks and don't pay a latency penalty. This would allow you to restore
your cluster quickly from your gateway, and keep backups with either
snapshots to S3 or direct storage in S3. You can then shut your
cluster down later and restore from the gateway should you so wish;
the gateway will ensure persistence.

The second is that you're paying a LOT for that instance right now,
and you're not going to see the same results as you would from
building, say, smaller instances, and more of them. Changing to that
sort of model should (when added to the local storage change) produce
a significant change in the speed of the overall solution. Shards need
to be spread out for you to gain from them, so why don't you give it a
try on 'large' instances, or even 'extra-large' instances? Just try to
keep the number higher than 1 ;)

The third part would be the 90m document hit. I don't know of a limit
myself that would keep you to 90 million documents, but that said, it
seems like a bad idea; in my experience there is more to be gained from
creating smaller indices with fewer shards, and using routing, aliases
and filters to hit the document tree that makes sense for your workload.
What you really want, to keep your searches as fast as possible, is to
help ES by hitting as few shards / indices as possible; but of course,
if that isn't possible, you can throw more hardware at it.

Can you gist a copy of an example document, and describe how you're
importing your documents? Do you plan on using the couchdb river?

Patrick

http://about.me/patrick.ancillotti
patrick eefy net


--


Hi Zuhaib

I was thinking the same thing, so when it stopped indexing I turned on
TRACE for the river logging and checked the data it pulled first, and it
looks normal, with normal encoding. The seq number is "seq":96967136,
but it doesn't always stop at the same one; that is just the one
I investigated last. Currently it stopped at 96967115.

Looking at the river source code it seems to be a string(?)

That's interesting - yes, ES wouldn't be able to handle that.

I would expect an error or something to be thrown if it had a bad
encoding or something; but, again, the data is totally searchable
currently using couchdb-lucene.

Lucene is different: it doesn't have a schema and isn't expecting JSON.

Clinton,
if you see the screenshot you will see I have a lot of heap memory :slight_smile:
close to 43GB of heap thanks to the m2.4xlarge instance. Also ulimit
you see in that screenshot and I confirmed that.

Better to include that info in the text; I don't bother looking at any
screenshots :)

but it sounds like the string doc is your problem.

clint

--

Hi Clint,

Why do you say that ES cannot handle the seq number as a String?
IMHO, I don't see what could bother the ES river. When the river asks
couchDb for new changes, it only appends the lastseq (String) as-is to
the URL (_changes API).

--
David ;)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


--


Yeah, I don't see the seq ID being the problem, and looking at the data
around the point where it stops indexing, it seems valid with no funny
encoding; we are UTF-8 and I assume elasticsearch has no problem with that
(during my testing I saw no problem in my VM).

What I have done now is create two indices, using couchdb filters to split
them up based on public and private chats. I am seeing slower index times,
but I guess that's expected, as couchdbjs is now doing more work to filter
the _changes for each index. I will let it run over the weekend and see
where it goes, but I think elasticsearch and couchdb are not
primetime-ready and I will have to shelve this project.
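For context, the public/private split described above relies on CouchDB filter functions, which are JavaScript stored in a design document; the design doc name and the doc.privacy field below are assumptions for illustration, not taken from the thread:

```javascript
// Hypothetical filters for a design document such as _design/chats; in the
// design doc they would appear as strings under the "filters" key.
const publicChats = function (doc, req) {
  return doc.privacy === "public";
};
const privateChats = function (doc, req) {
  return doc.privacy === "private";
};

// Each river would then point at the same db with a different filter, e.g.
//   "couchdb" : { "db" : "history", "filter" : "chats/public" }
console.log(publicChats({ privacy: "public" }, {}));   // true
console.log(privateChats({ privacy: "public" }, {}));  // false
```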

Zuhaib


--

--

Hi David

Why do you say that ES cannot handle the seq number as a String?
IMHO, I don't see what could bother the ES river. When the river asks
couchDb for new changes, it only appends the lastseq (String) as-is to
the URL (_changes API).

Sorry, I misread that in my haste: I thought Zuhaib was saying that the
document itself was just a string.

clint


--


So this time I split the data into two indices using couchdb filters, and
again at the 98 million seq count it stops indexing. Restarting
elasticsearch will get it to index one or two docs and then it stops.
Logging shows nothing but the river pulling data from couchdb, so I know
that link is working. This is the jstack, so maybe someone with better Java
skills can take a look at it and figure out what's going on:

Deadlock Detection:

No deadlocks found.

Thread 14584: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 14583: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 13373: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10820: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10426: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10425: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10424: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10423: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10422: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10421: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10420: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10419: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10418: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10417: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9789: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9763: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9762: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9745: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9744: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9711: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9710: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9709: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9582: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9563: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9543: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9541: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9513: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9512: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9448: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9443: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9424: (state = IN_NATIVE)

  • java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[],
    int, int, int) @bci=0 (Interpreted frame)
  • java.net.SocketInputStream.read(byte[], int, int) @bci=84, line=146
    (Interpreted frame)
  • java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273
    (Interpreted frame)
  • java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334
    (Interpreted frame)
  • sun.net.www.http.ChunkedInputStream.readAheadBlocking() @bci=38,
    line=543 (Compiled frame)
  • sun.net.www.http.ChunkedInputStream.readAhead(boolean) @bci=36, line=600
    (Compiled frame)
  • sun.net.www.http.ChunkedInputStream.read(byte[], int, int) @bci=80,
    line=687 (Interpreted frame)
  • java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133
    (Interpreted frame)
  • sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[],
    int, int) @bci=4, line=2582 (Interpreted frame)
  • sun.nio.cs.StreamDecoder.readBytes() @bci=130, line=282 (Compiled frame)
  • sun.nio.cs.StreamDecoder.implRead(char[], int, int) @bci=112, line=324
    (Compiled frame)
  • sun.nio.cs.StreamDecoder.read(char[], int, int) @bci=180, line=176
    (Interpreted frame)
  • java.io.InputStreamReader.read(char[], int, int) @bci=7, line=184
    (Interpreted frame)
  • java.io.BufferedReader.fill() @bci=145, line=153 (Interpreted frame)
  • java.io.BufferedReader.readLine(boolean) @bci=44, line=316 (Compiled
    frame)
  • java.io.BufferedReader.readLine() @bci=2, line=379 (Interpreted frame)
  • org.elasticsearch.river.couchdb.CouchdbRiver$Slurper.run() @bci=550,
    line=472 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)
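
Thread 9424 above is the river's Slurper reading CouchDB's continuous _changes feed: a chunked HTTP response wrapped in an InputStreamReader and drained line by line with BufferedReader.readLine(), which blocks in socketRead0 until CouchDB sends the next newline-delimited change. A minimal sketch of that read loop, not the river's actual code, using an in-memory stream in place of the real HTTP connection and made-up change lines:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Sketch of the read loop in Thread 9424's stack. A ByteArrayInputStream
// stands in for the HttpURLConnection input stream; the JSON lines are
// hypothetical examples of _changes feed output.
public class ChangesFeedReader {
    public static int readChanges(InputStream in) throws IOException {
        int count = 0;
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.isEmpty()) {
                    continue; // the feed's heartbeat is an empty line
                }
                count++;      // a real river would parse and enqueue the change here
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        String feed = "{\"seq\":1,\"id\":\"a\"}\n\n{\"seq\":2,\"id\":\"b\"}\n";
        System.out.println(readChanges(
            new ByteArrayInputStream(feed.getBytes(StandardCharsets.UTF_8))));
        // prints 2 (two changes, one heartbeat skipped)
    }
}
```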

Thread 9423: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
    @bci=42, line=2043 (Interpreted frame)
  • java.util.concurrent.ArrayBlockingQueue.take() @bci=20, line=345
    (Interpreted frame)
  • org.elasticsearch.river.couchdb.CouchdbRiver$Indexer.run() @bci=18,
    line=312 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9410: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9409: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9408: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
    @bci=42, line=2043 (Interpreted frame)
  • java.util.concurrent.ArrayBlockingQueue.put(java.lang.Object) @bci=39,
    line=280 (Interpreted frame)
  • org.elasticsearch.river.couchdb.CouchdbRiver$Slurper.run() @bci=678,
    line=484 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)
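
Threads 9408 and 9423 together show the river's internal handoff: the Slurper put()s parsed changes into a bounded ArrayBlockingQueue while the Indexer take()s them for bulk indexing. Because put() blocks on a full queue and take() blocks on an empty one, a stalled Indexer eventually parks the Slurper inside put() — which is exactly the state of Thread 9408 above. A minimal sketch of that bounded handoff (not the river's actual code; class and thread names here are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Bounded producer/consumer handoff matching Thread 9408 (blocked in put)
// and Thread 9423 (blocked in take). Capacity 2 is arbitrary; the point is
// that backpressure propagates from consumer to producer through the queue.
public class BoundedHandoff {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        Thread slurper = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("change-" + i); // parks here if the indexer stalls
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread indexer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.take(); // parks here while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        slurper.start();
        indexer.start();
        slurper.join();
        indexer.join();
        System.out.println("all changes handed off");
    }
}
```

If the indexer thread above never ran, the slurper would park permanently after the third put(), mirroring the dump.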

Thread 9405: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9369: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9366: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9271: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9270: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9269: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9268: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9267: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9265: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9264: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9263: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9080: (state = BLOCKED)

Thread 9247: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() @bci=1, line=838 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(int) @bci=66, line=998 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(int) @bci=24, line=1304 (Interpreted frame)
  • java.util.concurrent.CountDownLatch.await() @bci=5, line=235 (Interpreted frame)
  • org.elasticsearch.bootstrap.Bootstrap$3.run() @bci=3, line=222 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9246: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run() @bci=23, line=229 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9245: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() @bci=42, line=2043 (Interpreted frame)
  • java.util.concurrent.LinkedBlockingQueue.take() @bci=29, line=386 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=156, line=1043 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9244: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9243: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9242: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9241: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9240: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9239: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9238: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9237: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9236: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9235: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9234: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9233: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9232: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run() @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9231: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9230: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
@bci=42, line=2043 (Interpreted frame)

  • java.util.concurrent.LinkedBlockingQueue.take() @bci=29, line=386
    (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=156, line=1043
    (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9172: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9171: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9170: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9169: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9168: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9167: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9166: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9165: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9164: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9163: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9162: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9161: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9160: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9159: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9158: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9157: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9156: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9154: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9150: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run()
@bci=23, line=229 (Interpreted frame)

  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9149: (state = BLOCKED)

  • java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
  • org.elasticsearch.indices.ttl.IndicesTTLService$PurgerThread.run()
    @bci=60, line=135 (Interpreted frame)

Thread 9148: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long)
    @bci=20, line=226 (Compiled frame)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(long)
@bci=68, line=2081 (Compiled frame)

  • java.util.concurrent.DelayQueue.take() @bci=57, line=193 (Compiled frame)
  • java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take()
    @bci=4, line=688 (Compiled frame)
  • java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take()
    @bci=1, line=681 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=156, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9124: (state = BLOCKED)

  • java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
  • org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run()
    @bci=18, line=374 (Interpreted frame)

Thread 9096: (state = BLOCKED)

Thread 9095: (state = BLOCKED)

Thread 9094: (state = BLOCKED)

  • java.lang.Object.wait(long) @bci=0 (Interpreted frame)
  • java.lang.ref.ReferenceQueue.remove(long) @bci=44, line=133 (Interpreted
    frame)
  • java.lang.ref.ReferenceQueue.remove() @bci=2, line=149 (Interpreted
    frame)
  • java.lang.ref.Finalizer$FinalizerThread.run() @bci=3, line=177
    (Interpreted frame)

Thread 9093: (state = BLOCKED)

  • java.lang.Object.wait(long) @bci=0 (Interpreted frame)
  • java.lang.Object.wait() @bci=2, line=502 (Interpreted frame)
  • java.lang.ref.Reference$ReferenceHandler.run() @bci=46, line=133
    (Interpreted frame)
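Notably, the dump shows no deadlock and no thread inside indexing code: everything is parked in epoll or waiting on an empty work queue. When triaging a dump this size, it helps to count threads per reported state first. A minimal, hypothetical sketch (the file name and function are illustrative, not part of any Elasticsearch tooling):

```python
import re
from collections import Counter

def summarize_dump(text):
    """Count jstack threads by reported state, e.g. IN_NATIVE vs BLOCKED."""
    states = re.findall(r"Thread \d+: \(state = (\w+)\)", text)
    return Counter(states)

sample = """Thread 9233: (state = IN_NATIVE)
Thread 9230: (state = BLOCKED)
Thread 9172: (state = IN_NATIVE)
"""
print(summarize_dump(sample))  # Counter({'IN_NATIVE': 2, 'BLOCKED': 1})
```

If every thread turns out to be idle, the stall is more likely upstream (the river not submitting work) than inside Elasticsearch itself.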

On Sat, Aug 25, 2012 at 2:26 AM, Clinton Gormley clint@traveljury.com wrote:

Hi David

Why do you say that ES cannot handle the seq number as a string?
IMHO, I don't see what could bother the ES river. When the river asks
CouchDB for new changes, it only appends the last_seq (a string) as-is
to the URL (_changes API).

Sorry I misread that in my haste - I thought Zuhaib was saying that the
document itself was just a string.

clint

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 24 Aug 2012, at 08:51, Clinton Gormley clint@traveljury.com wrote:

Hi Zuhaib

I was thinking the same thing, so when it stopped indexing I turned on
TRACE for the river logging and checked out the data it pulled first,
and it looks normal, with normal encoding. The seq number is
"seq":96967136, but it's not always the same one it stops at; that is
just the one I investigated last. Currently it stopped at 96967115.

Looking at the river source code it seems to be a string(?)

That's interesting - yes, ES wouldn't be able to handle that.

I would expect an error or something to be thrown if it had a bad
encoding or something, but, again, the data is totally searchable
currently using couchdb-lucene.

lucene is different. it doesn't have a schema and isn't expecting JSON.

Clinton,
if you look at the screenshot you will see I have a lot of heap memory :)
- close to 43GB of heap thanks to the m2.4xlarge instance. The ulimit
settings are also in that screenshot, and I confirmed them.

better to include that info in text - I don't bother looking at any
screenshots :)

but it sounds like the string doc is your problem.

clint


Just out of curiosity, did you guys solve this problem?

On Tuesday, August 28, 2012 9:28:30 PM UTC+3, zuhaib wrote:

So this time I split the data into two indexes using couchdb filters, and
again at around the 98 million seq count it stops indexing. Restarting
elasticsearch will get it to index one or two docs and then it stops.
Logging shows nothing but the river pulling data from couchdb, so I know
that link is working. Here is the jstack output, so maybe someone with
better Java skills can take a look and figure out what's going on:
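(As an aside for anyone wanting to grab a dump like this themselves: output in this format, with thread IDs and "@bci" frames, is what jstack's forced mode produces. Something like the following should work; the PID lookup is just one possible way to find the ES process.)

```shell
# Locate the Elasticsearch JVM - adjust the pattern to your setup.
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.ElasticSearch | head -n 1)
# -F forces a dump even when the JVM is wedged and won't answer a normal jstack.
jstack -F "$ES_PID" > es-threads.txt
```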

Deadlock Detection:

No deadlocks found.

Thread 14584: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 14583: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 13373: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10820: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10426: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10425: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10424: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10423: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10422: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10421: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10420: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10419: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10418: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 10417: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9789: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9763: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9762: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9745: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9744: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9711: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9710: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9709: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9582: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9563: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
boolean, int, long) @bci=286, line=615 (Compiled frame)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)

  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9543: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9541: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
boolean, long) @bci=174, line=453 (Compiled frame)

java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
boolean, long) @bci=102, line=352 (Interpreted frame)

  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9513: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9512: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9448: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9443: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9424: (state = IN_NATIVE)

  • java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[],
    int, int, int) @bci=0 (Interpreted frame)
  • java.net.SocketInputStream.read(byte[], int, int) @bci=84, line=146
    (Interpreted frame)
  • java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273
    (Interpreted frame)
  • java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334
    (Interpreted frame)
  • sun.net.www.http.ChunkedInputStream.readAheadBlocking() @bci=38,
    line=543 (Compiled frame)
  • sun.net.www.http.ChunkedInputStream.readAhead(boolean) @bci=36,
    line=600 (Compiled frame)
  • sun.net.www.http.ChunkedInputStream.read(byte[], int, int) @bci=80,
    line=687 (Interpreted frame)
  • java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133
    (Interpreted frame)
  • sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[],
    int, int) @bci=4, line=2582 (Interpreted frame)
  • sun.nio.cs.StreamDecoder.readBytes() @bci=130, line=282 (Compiled frame)
  • sun.nio.cs.StreamDecoder.implRead(char[], int, int) @bci=112, line=324
    (Compiled frame)
  • sun.nio.cs.StreamDecoder.read(char[], int, int) @bci=180, line=176
    (Interpreted frame)
  • java.io.InputStreamReader.read(char[], int, int) @bci=7, line=184
    (Interpreted frame)
  • java.io.BufferedReader.fill() @bci=145, line=153 (Interpreted frame)
  • java.io.BufferedReader.readLine(boolean) @bci=44, line=316 (Compiled
    frame)
  • java.io.BufferedReader.readLine() @bci=2, line=379 (Interpreted frame)
  • org.elasticsearch.river.couchdb.CouchdbRiver$Slurper.run() @bci=550,
    line=472 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9423: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
    @bci=42, line=2043 (Interpreted frame)
  • java.util.concurrent.ArrayBlockingQueue.take() @bci=20, line=345
    (Interpreted frame)
  • org.elasticsearch.river.couchdb.CouchdbRiver$Indexer.run() @bci=18,
    line=312 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9410: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9409: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(java.util.concurrent.SynchronousQueue$TransferStack$SNode,
    boolean, long) @bci=174, line=453 (Compiled frame)
  • java.util.concurrent.SynchronousQueue$TransferStack.transfer(java.lang.Object,
    boolean, long) @bci=102, line=352 (Interpreted frame)
  • java.util.concurrent.SynchronousQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=11, line=903 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9408: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
    @bci=42, line=2043 (Interpreted frame)
  • java.util.concurrent.ArrayBlockingQueue.put(java.lang.Object) @bci=39,
    line=280 (Interpreted frame)
  • org.elasticsearch.river.couchdb.CouchdbRiver$Slurper.run() @bci=678,
    line=484 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9405: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9369: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9366: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9271: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9270: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9269: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9268: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9267: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue$Node,
    java.lang.Object, boolean, long) @bci=180, line=702 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(java.lang.Object,
    boolean, int, long) @bci=286, line=615 (Compiled frame)
  • org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(long,
    java.util.concurrent.TimeUnit) @bci=9, line=1117 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=141, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9265: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9264: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9263: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9080: (state = BLOCKED)

Thread 9247: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt()
    @bci=1, line=838 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(int)
    @bci=66, line=998 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(int)
    @bci=24, line=1304 (Interpreted frame)
  • java.util.concurrent.CountDownLatch.await() @bci=5, line=235
    (Interpreted frame)
  • org.elasticsearch.bootstrap.Bootstrap$3.run() @bci=3, line=222
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9246: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run()
    @bci=23, line=229 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9245: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
    @bci=42, line=2043 (Interpreted frame)
  • java.util.concurrent.LinkedBlockingQueue.take() @bci=29, line=386
    (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=156, line=1043
    (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9244: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9243: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9242: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9241: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9240: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9239: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9238: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9237: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9236: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9235: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9234: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9233: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9232: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9231: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9230: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Interpreted frame)
  • java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14,
    line=186 (Interpreted frame)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
@bci=42, line=2043 (Interpreted frame)

  • java.util.concurrent.LinkedBlockingQueue.take() @bci=29, line=386
    (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=156, line=1043
    (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=17, line=1103 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9172: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9171: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9170: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9169: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9168: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9167: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9166: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9165: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9164: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9163: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9162: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9161: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9160: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
@bci=4, line=52 (Compiled frame)

org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
@bci=57, line=223 (Compiled frame)

  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)

org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
@bci=14, line=42 (Interpreted frame)

java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
@bci=46, line=1110 (Interpreted frame)

  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9159: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9158: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9157: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9156: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9154: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector)
    @bci=4, line=52 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run()
    @bci=57, line=223 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run()
    @bci=1, line=35 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9150: (state = IN_NATIVE)

  • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0
    (Compiled frame; information may be imprecise)
  • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=228 (Compiled
    frame)
  • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=83 (Compiled
    frame)
  • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=87
    (Compiled frame)
  • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=98 (Compiled frame)
  • org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run()
    @bci=23, line=229 (Interpreted frame)
  • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run()
    @bci=55, line=102 (Interpreted frame)
  • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run()
    @bci=14, line=42 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=46, line=1110 (Interpreted frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9149: (state = BLOCKED)

  • java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
  • org.elasticsearch.indices.ttl.IndicesTTLService$PurgerThread.run()
    @bci=60, line=135 (Interpreted frame)

Thread 9148: (state = BLOCKED)

  • sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information
    may be imprecise)
  • java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object,
    long) @bci=20, line=226 (Compiled frame)
  • java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(long)
    @bci=68, line=2081 (Compiled frame)
  • java.util.concurrent.DelayQueue.take() @bci=57, line=193 (Compiled
    frame)
  • java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take()
    @bci=4, line=688 (Compiled frame)
  • java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take()
    @bci=1, line=681 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.getTask() @bci=156, line=1043
    (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
    @bci=17, line=1103 (Compiled frame)
  • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=603
    (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=679 (Interpreted frame)

Thread 9124: (state = BLOCKED)

  • java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
  • org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run()
    @bci=18, line=374 (Interpreted frame)

Thread 9096: (state = BLOCKED)

Thread 9095: (state = BLOCKED)

Thread 9094: (state = BLOCKED)

  • java.lang.Object.wait(long) @bci=0 (Interpreted frame)
  • java.lang.ref.ReferenceQueue.remove(long) @bci=44, line=133
    (Interpreted frame)
  • java.lang.ref.ReferenceQueue.remove() @bci=2, line=149 (Interpreted
    frame)
  • java.lang.ref.Finalizer$FinalizerThread.run() @bci=3, line=177
    (Interpreted frame)

Thread 9093: (state = BLOCKED)

  • java.lang.Object.wait(long) @bci=0 (Interpreted frame)
  • java.lang.Object.wait() @bci=2, line=502 (Interpreted frame)
  • java.lang.ref.Reference$ReferenceHandler.run() @bci=46, line=133
    (Interpreted frame)

On Sat, Aug 25, 2012 at 2:26 AM, Clinton Gormley <cl...@traveljury.com> wrote:

Hi David

Why do you say that ES can not handle the seq number as a String?
IMHO, I don't see what could bother the ES river. When the river asks
CouchDB for new changes, it only appends the lastseq (a String) as-is
to the URL (the _changes API).

Sorry I misread that in my haste - I thought Zuhaib was saying that the
document itself was just a string.

clint

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 24 August 2012 at 08:51, Clinton Gormley <cl...@traveljury.com>
wrote:

Hi Zuhaib

I was thinking the same thing, so when it stopped indexing I turned on
TRACE for the river logging and checked the data it pulled first, and it
looks normal, with normal encoding. The seq number is "seq":96967136,
but it doesn't always stop at the same one; that is just the one I
investigated last. Currently it has stopped at 96967115.

Looking at the river source code it seems to be a string(?)

That's interesting - yes, ES wouldn't be able to handle that.

I would expect an error or something thrown if it had a bad encoding or
something, but again, the data is currently fully searchable using
couchdb-lucene.

Lucene is different: it doesn't have a schema and isn't expecting JSON.

Clinton,
if you look at the screenshot you will see I have a lot of heap memory
:slight_smile: close to 43GB of heap, thanks to the m2.4xlarge instance.
The ulimit is also visible in that screenshot, and I confirmed it.

Better to include that info as text - I don't bother looking at
screenshots :slight_smile:

but it sounds like the string doc is your problem.

clint

--

ES is doing well, it seems; there is something going on in the couchdb data.

Jörg

On Sunday, November 11, 2012 7:37:15 PM UTC+1, Juraj Vitko wrote:

Just out of curiosity, did you guys solve this problem?

--

Juraj,

So it turned out the issue was on our side (couchdb), but the way
elasticsearch/river handled it was not ideal. We had committed some data
as Int in couchdb that should have been stored as String, because the
values were very large. Precision would get mangled when they were sent
as JSON (JavaScript has a limit on the largest integer it can represent
exactly, beyond which precision goes to hell; you can google it for more
detail as I am no JS expert), and elasticsearch would just refuse to
index the data, yet the river kept on pulling it. The lack of any error
or anything tipping us off really set me back, and it was not until I
looked very closely at the data it would stop on that we figured out the
problem. If I had more time I would try to dig deeper into the river
code to see if it could catch an issue like this in the future, but we
went ahead and corrected our app, fixed the old data, and it imported
fine after that.
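For anyone hitting the same thing: the precision cliff is easy to demonstrate. A minimal sketch (the specific ID value below is illustrative, not one of our documents):

```python
import json

# JSON itself places no limit on integer size, but JavaScript (and any
# consumer that maps JSON numbers to IEEE-754 doubles) is only exact up
# to 2**53. Simulate such a consumer by parsing integers as floats.
big_id = 9007199254740993  # 2**53 + 1: no longer exactly representable

roundtripped = int(json.loads(json.dumps(big_id), parse_int=float))
print(roundtripped)            # 9007199254740992 -- off by one, silently
print(roundtripped == big_id)  # False
```

Note there is no error anywhere in that round trip, which matches what we saw: the value just quietly changes, so storing such IDs as strings is the safe choice.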

Zuhaib

On Mon, Nov 12, 2012 at 12:01 AM, Jörg Prante <joergprante@gmail.com> wrote:

ES is doing well, it seems; there is something going on in the couchdb
data.

Jörg

On Sunday, November 11, 2012 7:37:15 PM UTC+1, Juraj Vitko wrote:

Just out of curiosity, did you guys solve this problem?

--
