Elasticsearch with mongo-connector - ConnectionTimeout

Hi

I am using MongoDB and Elasticsearch with mongo-connector.
The problem is that mongo-connector very often gets the following error:

2016-02-03 12:10:31,415 [WARNING] elasticsearch:82 - POST http://localhost:9200/_bulk [status:N/A request:10.011s]
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/elasticsearch/connection/http_urllib3.py", line 78, in perform_request
    response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
  File "build/bdist.linux-x86_64/egg/urllib3/connectionpool.py", line 608, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "build/bdist.linux-x86_64/egg/urllib3/util/retry.py", line 224, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "build/bdist.linux-x86_64/egg/urllib3/connectionpool.py", line 558, in urlopen
    body=body, headers=headers)
  File "build/bdist.linux-x86_64/egg/urllib3/connectionpool.py", line 380, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "build/bdist.linux-x86_64/egg/urllib3/connectionpool.py", line 308, in _raise_timeout
    raise ReadTimeoutError(self, url, "Read timed out. (read timeout=%s)" % timeout_value)
ReadTimeoutError: HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10)
2016-02-03 12:10:31,417 [CRITICAL] mongo_connector.oplog_manager:543 - Exception during collection dump
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.2.dev0-py2.7.egg/mongo_connector/oplog_manager.py", line 495, in do_dump
    upsert_all(dm)
  File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.2.dev0-py2.7.egg/mongo_connector/oplog_manager.py", line 479, in upsert_all
    dm.bulk_upsert(docs_to_dump(namespace), mapped_ns, long_ts)
  File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.2.dev0-py2.7.egg/mongo_connector/util.py", line 32, in wrapped
    return f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.2.dev0-py2.7.egg/mongo_connector/doc_managers/elastic_doc_manager.py", line 190, in bulk_upsert
    for ok, resp in responses:
  File "build/bdist.linux-x86_64/egg/elasticsearch/helpers/__init__.py", line 160, in streaming_bulk
    for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs):
  File "build/bdist.linux-x86_64/egg/elasticsearch/helpers/__init__.py", line 89, in _process_bulk_chunk
    raise e
ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10))
2016-02-03 12:10:31,418 [ERROR] mongo_connector.oplog_manager:551 - OplogThread: Failed during dump collection cannot recover! Collection(Database(MongoClient(u'localhost', 27017), u'local'), u'oplog.rs')
2016-02-03 12:10:32,174 [ERROR] __main__:302 - MongoConnector: OplogThread <OplogThread(Thread-2, started 140086062610176)> unexpectedly stopped! Shutting down

So it seems that the Elasticsearch instance occasionally fails to respond within the timeout while mongo-connector is inserting a bulk request. I don't know why this happens. Can you give me some hints on what I could check to solve this issue?
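For context, the traceback shows the client giving up after a fixed 10-second read timeout with no retry, so a single slow `_bulk` response kills the whole collection dump. One generic mitigation is to retry the bulk call with backoff instead of failing on the first timeout. A minimal sketch of that idea in plain Python (`send_bulk` is a hypothetical stand-in for whatever actually submits the bulk request, e.g. a bound `elasticsearch.helpers` call; the `ConnectionTimeout` class here is a local stand-in for the exception in the traceback):

```python
import time


class ConnectionTimeout(Exception):
    """Local stand-in for elasticsearch.exceptions.ConnectionTimeout."""


def bulk_with_retry(send_bulk, actions, max_retries=3, base_delay=2.0):
    """Call send_bulk(actions), retrying with exponential backoff on timeouts.

    send_bulk : callable taking a list of bulk actions (hypothetical
                stand-in for the real bulk helper bound to a client).
    """
    for attempt in range(max_retries + 1):
        try:
            return send_bulk(actions)
        except ConnectionTimeout:
            if attempt == max_retries:
                raise  # out of retries; propagate like the original code
            # Back off so an overloaded node can catch up before we retry.
            time.sleep(base_delay * (2 ** attempt))
```

This is only an illustration of the pattern, not mongo-connector's actual code; in practice the equivalent effect comes from raising the client's timeout and retry settings rather than patching the doc manager.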

My Elasticsearch configuration looks like this:

cluster.name: myclustername
node.name: "Midas"
index.number_of_replicas: 0
path.work: /elastic_tmp
bootstrap.mlockall: true

FYI, here is my mongo-connector config as well:

{
    "__comment__": "Configuration options starting with '__' are disabled",
    "__comment__": "To enable them, remove the preceding '__'",
    "mainAddress": "localhost:27017",
    "oplogFile": "/var/log/mongo-connector/oplog.timestamp",
    "noDump": false,
    "batchSize": -1,
    "verbosity": 1,
    "continueOnError": false,

    "logging": {
        "type": "file",
        "filename": "/var/log/mongo-connector/mongo-connector.log"
    },

    "docManagers": [
        {
            "docManager": "elastic_doc_manager",
            "targetURL": "localhost:9200",
            "__bulkSize": 1000,
            "__uniqueKey": "_id",
            "__autoCommitInterval": null
        }
    ]
}
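One thing that stands out in this config: with "batchSize" at -1 and "__bulkSize" disabled, the initial collection dump is streamed in large bulk chunks, and a big chunk can easily take longer than the client's 10-second read timeout. Following the file's own convention of removing the "__" prefix to enable an option, a smaller bulk size could be tried; this is an illustrative value, not a tuned one:

```
"docManagers": [
    {
        "docManager": "elastic_doc_manager",
        "targetURL": "localhost:9200",
        "bulkSize": 500,
        "__uniqueKey": "_id",
        "__autoCommitInterval": null
    }
]
```

Smaller bulks mean more round trips, but each `_bulk` request is far more likely to complete within the timeout.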