Offline install of curator failing

I'm trying to install Curator on an offline CentOS 7 server. I am running version 5.6 of ELK with X-Pack installed. This CentOS server does not have any ELK components installed on it, but it is on the same subnet as my cluster. For Curator, I have installed the components listed in the following link:

I've tried both "pip install" and "python setup.py install".

I first tried "python setup.py install".

I was successful with each of the listed components, including elasticsearch-py and urllib3. But when I tried to install Curator, it got to this part:

Processing dependencies for elasticsearch-curator==5.3.0b1
Searching for elasticsearch>=5.4.0,<6.0.0
Download error on [Errno 104] Connection reset by peer -- Some packages may not be found!
Couldn't retrieve index page for 'elasticsearch'
Scanning index of all packages (this may take a while)

I hit control-C at that point because there was no sense in letting it keep trying to reach a site it can't get to, being offline and all. Remember, I had already installed (with seeming success) elasticsearch-py and urllib3.

So I tried a different tack and used pip. With "pip install" of curator-master I got the following:

Processing ./
Requirement already satisfied (use --upgrade to upgrade):  elasticsearch-curator==5.3.0b1 from file:///opt/ in /usr/lib/python2.7/site-packages/elasticsearch_curator-5.3.0b1-py2.7.egg
Collecting elasticsearch<6.0.0,>=5.4.0 (from elasticsearch-curator==5.3.0b1)
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbbc479d550>, 'Connection to timed out. (connect timeout=15)')': /simple/elasticsearch
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbbc479d550>, 'Connection to timed out. (connect timeout=15)')': /simple/elasticsearch

at which point I hit control-C

The weird thing is that just before that, I installed elasticsearch-py using 'pip install' and the results were this:

Processing ./
Requirement already satisfied (use --upgrade to upgrade):  elasticsearch==6.2.0 from file:///opt/ in /usr/lib/python2.7/site-packages/elasticsearch-6.2.0-py2.7.egg
Requirement already satisfied (use --upgrade to upgrade): urllib3<1.23,>=1.21.1 in /usr/lib/python2.7/site-packages/urllib3-1.22-py2.7.egg (from elasticsearch==6.2.0)

So it appears that what is hanging up the install of Curator is something that is already installed? Any suggestions for getting this thing to realize elasticsearch-py is installed?
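One thing worth noting from the logs above: the installed package is elasticsearch==6.2.0, but Curator 5.3.0b1 asks for elasticsearch>=5.4.0,<6.0.0, so the resolver treats the installed copy as unsatisfactory and goes to PyPI for a compatible one. A minimal sketch of that comparison (the version numbers come from the logs; the helper functions are hypothetical, not pip internals):

```python
# Sketch: why the installer still reaches for PyPI even though
# elasticsearch-py is installed. Versions are taken from the logs above.

def parse(v):
    """Turn a dotted version string like '6.2.0' into a tuple (6, 2, 0)."""
    return tuple(int(x) for x in v.split("."))

def satisfies(installed, lower, upper):
    """True if lower <= installed < upper (the >=5.4.0,<6.0.0 pattern)."""
    return parse(lower) <= parse(installed) < parse(upper)

print(satisfies("6.2.0", "5.4.0", "6.0.0"))  # False: 6.2.0 is too new
print(satisfies("5.5.3", "5.4.0", "6.0.0"))  # True: a 5.x release would fit
```

If that's what is happening here, no amount of re-installing elasticsearch 6.2.0 will help; the resolver needs a 5.x release of elasticsearch-py available locally.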

Do I need to have a node of elasticsearch running on this server to get this to work?


It appears you are using Python 2.7. It may be that your setup does not work with the recently upgraded PyPI URLs (hence the unreachable message).

Also, if you are using that package and it says it is version 5.3.0b1, that's in error. You should be using the package at instead, as that's the latest release.

Lastly, the system version of Python in CentOS 7 is old (2.7.6 or so), and Curator wants 2.7.9+ to work properly. I recommend using the RPM provided here, as it ships a usable binary with a bundled/frozen Python 3.6 runtime and libraries, so you don't have to worry about them.
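That 2.7.9+ floor can be checked up front before attempting an install. A small pre-flight sketch (the function name and the exact CentOS version in the example are illustrative, based on the "2.7.6 or so" figure above):

```python
import sys

# Curator reportedly wants Python 2.7.9+ (any 3.x also passes this check);
# the stock CentOS 7 interpreter is older, per the post above.
MINIMUM = (2, 7, 9)

def python_ok(version_info=None):
    """True if the interpreter meets the 2.7.9+ floor."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:3]) >= MINIMUM

print(python_ok((2, 7, 6)))  # False: roughly the CentOS 7 system Python
print(python_ok((3, 6, 4)))  # True: the bundled Python 3.6 would be fine
```

Running a check like this first turns a confusing mid-install failure into a clear "wrong interpreter" message.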

Thanks for the tips. Just as I was leaving work we got it working, and I was going to post an update tomorrow.

But here is what we did: first we retrieved Curator 5.0.4 and tried to install that, and got the same error. Then we made sure there were no remains of elasticsearch anywhere with 'yum remove elasticsearch' and 'pip uninstall elasticsearch'. Then we retried 'pip install' of elasticsearch-curator-5.0.4, and that did finally install without errors.

But ultimately, I think the real culprit turned up when I tried to exit my SSH session: I got a notice saying there were stopped jobs. Apparently one of the installs was hung in the background. I don't remember putting it there, but I have a feeling that may have been the cause all along, or at least contributed to it.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.