Bunch of errors when running the Curator test suite command


(anthony) #1

When I run `python setup.py test`, I get the output below. Does anyone get the same errors?

-----
No handlers could be found for logger "elasticsearch.trace"
test_add_and_remove (test.integration.test_alias.TestCLIAlias) ... FAIL
ERROR
test_add_only (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_add_only_skip_closed (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_add_only_with_extra_settings (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_add_with_empty_list (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_add_with_empty_remove (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_alias_remove_only (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_extra_options (test.integration.test_alias.TestCLIAlias) ... ERROR
test_no_add_remove (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_no_alias (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_remove_index_not_in_alias (test.integration.test_alias.TestCLIAlias) ... FAIL
ERROR
test_remove_with_empty_add (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_remove_with_empty_list (test.integration.test_alias.TestCLIAlias) ... ERROR
ERROR
test_exclude (test.integration.test_allocation.TestCLIAllocation) ... ERROR
ERROR
more-----
======================================================================
ERROR: test_add_only (test.integration.test_alias.TestCLIAlias)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/var/opt/synced/test/integration/test_alias.py", line 26, in test_add_only
    self.create_index('my_index')
  File "/var/opt/synced/test/integration/__init__.py", line 121, in create_index
    body={'settings': {'number_of_shards': shards, 'number_of_replicas': 0}}
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/indices.py", line 110, in create
    params=params, body=body)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 327, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 110, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/base.py", line 114, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
RequestError: TransportError(400, u'index_already_exists_exception', u'already exists')
-------------------- >> begin captured logging << --------------------
DEBUG     urllib3.util.retry               from_int:200  Converted retries value: False -> Retry(total=False, connect=None, read=None, redirect=0, status=None)
DEBUG urllib3.connectionpool          _make_request:396  http://localhost:9280 "PUT /my_index HTTP/1.1" 400 211
WARNING          elasticsearch       log_request_fail:88   PUT /my_index [status:400 request:0.001s]
DEBUG          elasticsearch       log_request_fail:96   > {"settings": {"number_of_replicas": 0, "number_of_shards": 1}}
DEBUG          elasticsearch       log_request_fail:99   < {"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already exists","index":"my_index"}],"type":"index_already_exists_exception","reason":"already exists","index":"my_index"},"status":400}
--------------------- >> end captured logging << ---------------------

--- more----

(Aaron Mildenstein) #2

Please encapsulate pasted text in `</>` code tags, or within triple back-ticks, like this:

```
PASTED TEXT
```

It's too hard to read errors without this consistency.


(Aaron Mildenstein) #3

If the Elasticsearch instance you're running against has any index in it, you will get some failures.

How are you launching it? I assume you set the appropriate environment variables to change the default http://localhost:9200 that it expects, as I see http://localhost:9280 in your output.

Is there something else at http://localhost:9280?
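For reference, here is a quick way to confirm what is answering on that port, and to point the suite at it. Note that `TEST_ES_SERVER` is an assumption here; check `test/integration/__init__.py` on your branch for the exact variable name the suite reads:

```shell
# Check what is listening on the non-default port; an Elasticsearch node
# answers with a JSON banner that includes its version number.
curl localhost:9280

# List every index; a clean test cluster should report none.
curl localhost:9280/_cat/indices

# Hypothetical: TEST_ES_SERVER is an assumed variable name -- verify it
# in test/integration/__init__.py before relying on it.
export TEST_ES_SERVER=localhost:9280
python setup.py test
```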

Currently, Curator is running tests using Travis CI, as defined in this file. It represents a mash-up of each of these versions:

```
python:
  - "2.7"
  - "3.4"
  - "3.5"
  - "3.6"

env:
  - ES_VERSION=5.0.2
  - ES_VERSION=5.1.2
  - ES_VERSION=5.2.2
  - ES_VERSION=5.3.3
  - ES_VERSION=5.4.3
  - ES_VERSION=5.5.2
```
The master branch has added 5.6.2 as yet another ES_VERSION. Every commit results in each Python version defined being tested against each ES_VERSION. It's a lot of tests, and they're all passing.

Note: The current branch of Curator will not run against Elasticsearch 6.x


(anthony) #4

When I run `curl localhost:9280/_cat/indices`, I only see `.kibana`, if that's what you're referring to. The Elasticsearch version I'm currently running is 2.4.4 (I got that from `curl localhost:9280`), and I'm using the Curator 4.x branch.


(Aaron Mildenstein) #5

That will do it every time. You cannot have a Kibana instance connected to this Elasticsearch; it must be standalone, with nothing else connected. These integration tests are failing because the `.kibana` index is not expected. When a test completes (failed or passed), it cleans up all indices in the cluster in preparation for the next test. The problem is that Kibana will automatically create another `.kibana` index if the original was deleted.
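To illustrate the race described above (a rough sketch, not the harness's actual code): the between-test cleanup wipes all indices, and an attached Kibana immediately recreates its own.

```shell
# Sketch of the between-test cleanup: delete every index so the next
# test starts against an empty cluster (Elasticsearch 2.x syntax).
curl -XDELETE 'localhost:9280/_all'

# With Kibana still connected, it recreates .kibana shortly afterwards,
# so the next test again finds an index it does not expect.
curl 'localhost:9280/_cat/indices'
```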


(anthony) #6

I'm having trouble detaching Kibana. I tried just stopping the Kibana service, which should keep it from recreating `.kibana`, but I still get the same errors. Should I keep trying to detach it?


(Aaron Mildenstein) #7

You should not be using an Elasticsearch instance for testing that is used for anything else. If you're having a hard time detaching, you should instead spin up another isolated instance of Elasticsearch and run tests against that.
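One way to get such a throwaway instance, assuming Docker is available (the image tag matches the 2.4.4 node mentioned earlier; adjust to whatever version you need, and note the container name and port here are arbitrary choices):

```shell
# Hypothetical: start a disposable Elasticsearch 2.4.4 node on host port
# 9281, with nothing (no Kibana) connected to it.
docker run -d --name curator-test-es -p 9281:9200 elasticsearch:2.4.4

# Confirm it is empty before running the test suite against it.
curl localhost:9281/_cat/indices
```

Using a separate port (9281) avoids colliding with the existing node on 9280; when the tests are done, the container can simply be removed.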


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.