I have a single Elasticsearch VM running and receiving data from Logstash. To see about improving performance, and to learn how to use clustering, I created a second Elasticsearch VM and told it to use the same cluster.name as the first one.
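For reference, the relevant change on each node is just this line in elasticsearch.yml (the cluster name itself is only a placeholder here):

cluster.name: logstash-cluster   # identical on both VMs; everything else is stock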
I think that's all I'm supposed to do in order to get clustering working,
right?
So, why do my elasticsearch nodes not see each other?
My first one sees itself and the Logstash indexer. The second one only sees itself. (I don't have Logstash configured to send data to it...)
Both VMs are in the same DMZ, and neither has its own firewall turned on. According to my firewall person, the firewall doesn't block ports between two servers in the same DMZ.
Running nmap against the first VM from the second VM shows that ports 9200 and 9300 are open, but every other port I test in the 9200-9400 range is closed.
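For the record, the scan was something like this (the hostname is a placeholder):

nmap -p 9200-9400 first-es-vm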
Is there another step I need to take in order to get clustering working? Maybe something to tell Elasticsearch to listen on all the ports it's supposed to listen to?
Are the two Elasticsearch servers in the same subnet? Can you telnet from each one to the other on ports 9200 and 9300?

Sometimes it has helped me to adjust the following config properties in elasticsearch.yml:

discovery.zen.ping.timeout: 10s   # default is 3s; maybe you have a slow network?

On your non-master node, set:

discovery.zen.ping.unicast.hosts: ["hostname.masternode"]   # or an IP address if you have no DNS

Make sure that UDP, not only TCP, is unblocked by the firewalls. You can check this with nmap.
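For the UDP side: zen multicast discovery in this Elasticsearch generation defaults to group 224.2.2.4 on UDP port 54328, so a quick check from one node against the other could look like this (run as root; hostname is a placeholder):

sudo nmap -sU -p 54328 hostname.masternode
# UDP scans often report open|filtered; an outright 'closed' is the real red flag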
KR
Hendrik
Try using a monitoring plugin to give you a more complete view of things; the API is nice, but it can be tedious to have to curl every time you want to check something.

By the looks of that, you have two nodes in the cluster - log-indexer-01 and log-elasticsearch-01 - running on 10.255.0.82 and 10.255.0.84 respectively.
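As a quick check without a plugin, the cluster health API reports how many nodes have actually joined (IP taken from the output above):

curl -s 'http://10.255.0.82:9200/_cluster/health?pretty'
# watch the "number_of_nodes" field - it should go up once the second VM joins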
I hope we have provided all the answers you need to make it work. Let me sum it up:
1st: multicast.
Make sure it can work between your VMs at the network level (UDP, and the port described in the docs).
2nd: unicast. If you can't do multicast (which is the case in cloud environments, for example), switch to unicast using the two lines I wrote.
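The exact lines aren't quoted in this excerpt, but for this Elasticsearch generation the unicast switch is typically these two settings in elasticsearch.yml (the host list here is a placeholder):

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-vm-1", "es-vm-2"]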
Also, note that you don't need to run VMs if you just want to test Elasticsearch clustering. Just start from the same dir:
bin/elasticsearch -f
bin/elasticsearch -f
bin/elasticsearch -f
bin/elasticsearch -f
And you're done.
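(Each extra node started this way should bind the next free ports, 9201/9301 for the second one and so on, and they discover each other locally.)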
I can't see any other information I can add here. Maybe others have new ideas?
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
These VMs are on a VMware cloud, so I'm wondering if that's why multicast won't work. I've been searching around trying to confirm that before I test unicast.
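One way to confirm it at the network level, assuming default zen multicast settings (group 224.2.2.4, UDP port 54328), is to watch for discovery pings while restarting the other node (the interface name is a guess):

sudo tcpdump -n -i eth0 udp port 54328
# silence here while the other node starts up means multicast isn't getting through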
--David Reagan