Hi All,
I am going to upgrade Elasticsearch from 2.3.3 to 5.2.0 in my organization. But I have noticed a bug, which was already present in Elasticsearch 2: if I run a script that continuously pushes data to Elasticsearch without any sleep in between, then within about 3 minutes I start seeing:
curl: (7) Failed to connect to ::1: Cannot assign requested address
curl: (7) Failed to connect to ::1: Cannot assign requested address
curl: (7) Failed to connect to ::1: Cannot assign requested address
curl: (7) Failed to connect to ::1: Cannot assign requested address
curl: (7) Failed to connect to ::1: Cannot assign requested address
messages. After a second or two, data is pushed successfully again, but anything sent during that window is lost. Can someone tell me why this is happening?
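For context, the load generator is essentially a tight loop of single-document curl calls, something like this (the index, type, and payload here are simplified placeholders; the point is that every iteration opens a brand-new TCP connection with no pause):

    while true; do
      # One new TCP connection per document, no sleep between requests.
      curl -s -XPOST 'http://localhost:9200/test-index/doc' \
           -d '{"field":"value"}' > /dev/null
    done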
Hi David,
At that time the root logger was in debug mode, and everything in it looked normal. As I said, the "Cannot assign requested address" situation persists for at least a second or two, and then inserting values succeeds again.
If you need the logs, which ones do you want: normal-mode or debug-mode? I will reproduce this again and send them to you.
I also wonder if you could use X-Pack to monitor a bit of what is happening in your cluster.
Maybe you are running out of some resources?
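For example, you could check on the client side whether you are exhausting local ports, which is one common cause of "Cannot assign requested address" when every request opens a fresh connection (commands assume Linux):

    # Sockets left in TIME_WAIT by closed curl connections; when this count
    # approaches the size of the ephemeral port range, new connections fail.
    ss -tan state time-wait | wc -l

    # The ephemeral port range available for outgoing connections:
    cat /proc/sys/net/ipv4/ip_local_port_range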
Note that the right way to send a lot of data to Elasticsearch is to use the bulk API, but I guess you are deliberately trying to overload Elasticsearch here to run some "performance tests".
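If you do switch to bulk, a single request can index many documents over one connection. A minimal example (index, type, and field names are placeholders):

    # requests.json is newline-delimited JSON: one action line, then one
    # source line, per document; the file must end with a newline.
    cat > requests.json <<'EOF'
    {"index":{"_index":"test-index","_type":"doc"}}
    {"field":"value 1"}
    {"index":{"_index":"test-index","_type":"doc"}}
    {"field":"value 2"}
    EOF

    # One HTTP connection indexes everything in the file.
    curl -s -XPOST 'http://localhost:9200/_bulk' --data-binary @requests.json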
Hi David,
You were right. When I gave it more heap space, the situation stopped occurring; I watched it for at least 7 minutes. Is there a direct impact of heap space on this?
And yes, I was doing some performance testing; I need to know how it behaves under heavy load.
I think X-Pack is not free, so we are not using it. Thanks, David, for your input.
On another server I set the heap to 16 GB, and the same problem, "Failed to connect to ::1: Cannot assign requested address", occurs there too. Maybe you are right and bulk is the procedure I should go for.
But in the meantime, please look into what causes this.
I am testing on a single-server cluster (just for testing). It has 32 processor cores and 132 GB of RAM, and I set the heap to 16 GB. If you can point me in the right direction, it will be of great use.
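For reference, this is how I set the heap, in config/jvm.options (the Elasticsearch 5.x way), with minimum and maximum kept equal:

    # config/jvm.options
    -Xms16g
    -Xmx16g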