Too many open files

Hi,

How can I prevent the following error?
Failed to accept a connection.
java.io.IOException: Too many open files

I have the maximum number of open file descriptors set to 65535. I currently have 640 indices, each with 3 document types.

Thanks for your help.

Best regards.

Hello!

Increase the maximum number of open files allowed. 640 indices is quite a big number, considering that each of them can have multiple shards and multiple replicas too.
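If it helps, here is a minimal sketch for checking how many descriptors the Elasticsearch JVM actually has open compared to its limit. It assumes a Sun/Oracle JVM on a Unix-like system, where the OperatingSystemMXBean can be cast to UnixOperatingSystemMXBean:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            // Open vs. maximum file descriptors for this JVM process.
            System.out.println("open descriptors: " + unixOs.getOpenFileDescriptorCount());
            System.out.println("max descriptors:  " + unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("UnixOperatingSystemMXBean is not available on this platform.");
        }
    }
}

Remember that the limit has to be raised for the user that actually runs Elasticsearch (ulimit -n, or /etc/security/limits.conf on most Linux distributions), and the node has to be restarted to pick it up. If the open count still keeps climbing towards the limit, the number of indices, shards and replicas is what to look at next.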

--

Regards,

Rafał Kuć

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

Hi, thanks for the answer. Now I am considering swapping indices and document types. Currently I have a growing number of indices, each containing the same 3 document types. Would it be better to use only 3 indices containing a growing number of document types (up to e.g. 1 million document types)?

Best regards.

Hello!

I think I would go for three indices with multiple types of documents, but that also depends on your data structure and what you are trying to achieve. Having three indices has its pros and cons, but having a million indices in the cluster (if I understand you correctly, that could happen in the future, right?) can lead to resource allocation problems. With three indices, you can use routing based on the document type and in that case place documents of the same type in the same shard. This can also be helpful in some situations.
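To make the routing idea concrete, here is a rough sketch with the Java TransportClient as it looked around that time. The index and type names ("logs", "event") and the localhost address are just placeholders, and the exact client setup may differ depending on your Elasticsearch version. The point is simply that the document type is passed as the routing value both when indexing and when searching:

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilders;

public class TypeRoutingSketch {
    public static void main(String[] args) {
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

        // Index a document and use its type as the routing value, so all
        // documents of that type end up in the same shard.
        client.prepareIndex("logs", "event", "1")
                .setSource("{\"message\":\"hello\"}")
                .setRouting("event")
                .execute().actionGet();

        // Search with the same routing value, so only the shard that holds
        // the "event" documents is queried instead of all shards.
        SearchResponse response = client.prepareSearch("logs")
                .setTypes("event")
                .setRouting("event")
                .setQuery(QueryBuilders.matchAllQuery())
                .execute().actionGet();
        System.out.println("hits: " + response.getHits().getTotalHits());

        client.close();
    }
}

The trade-off is that one routing value maps to exactly one shard, so a very large type will concentrate in a single shard; whether that is acceptable again depends on your data.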

--

Regards,

Rafał Kuć

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

Thank you, Rafał, for all your help.

Best regards.

Is there any other way to use routing based on the document type than passing the '_routing' value in each query URL?

Best regards.
Marcin Dojwa

Hello!

Take a look at the routing field, Marcin: http://www.elasticsearch.org/guide/reference/mapping/routing-field.html
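As far as I understand that page, the relevant part is that the _routing mapping can be marked as required and can point at a field inside the document (the path option), so the routing value is extracted from the document itself at index time instead of being passed with every index request. A hypothetical mapping for an "event" type with a "type_name" field, created through the same kind of Java client as above, could look roughly like this:

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class RoutingMappingSketch {
    public static void main(String[] args) {
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

        // Hypothetical mapping: routing is required and is taken from the
        // "type_name" field of each indexed document.
        String mapping =
                "{\"event\": {"
              + "  \"_routing\": {\"required\": true, \"path\": \"type_name\"},"
              + "  \"properties\": {\"type_name\": {\"type\": \"string\"}}"
              + "}}";

        client.admin().indices().prepareCreate("logs")
                .addMapping("event", mapping)
                .execute().actionGet();

        client.close();
    }
}

Note that this only covers the indexing side; for searches you would still pass the routing value yourself (for example with setRouting, or the routing parameter in the URL), otherwise the query simply goes to all shards.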

--

Regards,

Rafał Kuć

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

Thank you, Rafał.

Best regards.
