Should http.enabled be true or false on a master-only node?

I was reading over this page:

Node

And it doesn't answer this question. I was wondering what the correct setting for http.enabled is on a master-only node. I am setting up dedicated instances for client-only, data-only, and master-only nodes.

Thanks for the help!
Chris


If you disable HTTP on a node, then Marvel (or any other HTTP client) obviously won't be able to talk to that node directly.

There is no "correct" setting for this; it depends on your requirements.
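For example, if you never want clients hitting a dedicated master directly, you can close its HTTP port. A minimal sketch of the relevant elasticsearch.yml lines for an ES 1.x-era dedicated master (matching the setting names used elsewhere in this thread) might look like:

```yaml
# Sketch: dedicated master-only node (Elasticsearch 1.x settings).
# Disabling HTTP means Marvel, Kibana, and curl cannot query this node
# directly; internal transport traffic (port 9300) is unaffected.
node.master: true
node.data: false
http.enabled: false
```

With this in place, REST traffic would go through your client or data nodes instead.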

Thank you Mark!

I was hoping this would fix an issue I'm having with Marvel, but it did not. Perhaps this belongs in the Marvel group, but on the off chance you might know the answer, I'll start here first. :smile:

I now have http.enabled = true on my 3 nodes in the cluster.

1 client node:

{"node.master": "false",
 "node.data": "false",
 "discovery.zen.ping.multicast.enabled": "false",
 "discovery.zen.ping.unicast.hosts": "n5,n6,n9",
 "discovery.zen.minimum_master_nodes": "1",
 "gateway.recover_after_data_nodes": "1",
 "gateway.recover_after_master_nodes": "1",
 "gateway.expected_nodes": "2",
 "cluster.name": "elasticsearch-dev",
 "script.disable_dynamic": "false",
 "action.disable_delete_all_indices": "true",
 "index.refresh_interval": "60s",
 "bootstrap.mlockall": "true",
 "marvel.agent.exporter.es.hosts": "n8:9200",
 "http.enabled": "true",
 "http.cors.enabled": "true",
 "http.cors.allow-origin": "/.*/",
 "http.cors.allow-credentials": "true",
 "threadpool.bulk.queue_size": "500",
 "threadpool.bulk.size": "32",
 "threadpool.index.queue_size": "500",
 "threadpool.index.size": "32",
 "threadpool.search.queue_size": "2000",
 "index.search.slowlog.threshold.query.warn": "10s",
 "index.search.slowlog.threshold.query.info": "5s",
 "index.search.slowlog.threshold.fetch.warn": "2s",
 "index.search.slowlog.threshold.fetch.info": "1s",
 "index.search.slowlog.threshold.index.warn": "10s",
 "index.search.slowlog.threshold.index.info": "5s",
 "index.codec.bloom.load": "false" }

1 master node:

{"node.master":"true",
 "node.data": "false",
 "discovery.zen.ping.multicast.enabled": "false",
 "discovery.zen.ping.unicast.hosts": "n5,n6,n9",
 "discovery.zen.minimum_master_nodes": "1",
 "gateway.recover_after_data_nodes": "1",
 "gateway.recover_after_master_nodes": "1",
 "gateway.expected_nodes": "2",
 "cluster.name": "elasticsearch-dev",
 "script.disable_dynamic": "false",
 "action.disable_delete_all_indices": "true",
 "index.refresh_interval": "60s",
 "bootstrap.mlockall": "true",
 "marvel.agent.exporter.es.hosts": "n8:9200",
 "http.enabled": "true",
 "http.cors.enabled": "true",
 "http.cors.allow-origin": "/.*/",
 "http.cors.allow-credentials": "true" }

1 data node:

{"node.master": "false",
 "node.data": "true",
 "discovery.zen.ping.multicast.enabled": "false",
 "discovery.zen.ping.unicast.hosts": "n5,n6,n9",
 "discovery.zen.minimum_master_nodes": "1",
 "gateway.recover_after_data_nodes": "1",
 "gateway.recover_after_master_nodes": "1",
 "gateway.expected_nodes": "2",
 "cluster.name": "elasticsearch-dev",
 "script.disable_dynamic": "false",
 "action.disable_delete_all_indices": "true",
 "index.refresh_interval": "60s",
 "bootstrap.mlockall": "true",
 "marvel.agent.exporter.es.hosts": "n8:9200",
 "http.enabled": "true",
 "http.cors.enabled": "true",
 "http.cors.allow-origin": "/.*/",
 "http.cors.allow-credentials": "true",
 "indices.fielddata.cache.size": "30%",
 "indices.breaker.total.limit": "70%",
 "indices.breaker.request.limit": "30%",
 "indices.breaker.fielddata.limit": "35%",
 "indices.memory.index_buffer_size": "20%",
 "index.compound_on_flush": "false",
 "threadpool.bulk.queue_size": "500",
 "threadpool.bulk.size": "32",
 "threadpool.index.queue_size": "500",
 "threadpool.index.size": "32",
 "threadpool.search.queue_size": "2000",
 "index.search.slowlog.threshold.query.warn": "10s",
 "index.search.slowlog.threshold.query.info": "5s",
 "index.search.slowlog.threshold.fetch.warn": "2s",
 "index.search.slowlog.threshold.fetch.info": "1s",
 "index.search.slowlog.threshold.index.warn": "10s",
 "index.search.slowlog.threshold.index.info": "5s",
 "index.codec.bloom.load": "false",
 "index.merge.scheduler.max_thread_count": "1",
 "index.merge.policy.type": "tiered",
 "index.merge.policy.max_merged_segment": "5gb",
 "index.merge.policy.segments_per_tier": "10",
 "index.merge.policy.max_merge_at_once": "10",
 "index.merge.policy.max_merge_at_once_explicit": "10",
 "indices.store.throttle.type": "none",
 "index.translog.flush_threshold_size": "1GB",
 "cluster.routing.allocation.disk.watermark.low": "95%",
 "cluster.routing.allocation.disk.watermark.high": "98%",
 "cluster.routing.allocation.cluster_concurrent_rebalance": "2" }

And I have an external cluster for Marvel by itself, with just one node (in development):

{"discovery.zen.minimum_master_nodes": "1",
 "discovery.zen.ping.multicast.enabled": "false",
 "discovery.zen.ping.unicast.hosts": "n8",
 "gateway.recover_after_nodes": "1",
 "cluster.name": "elasticsearch-dev-marvel",
 "index.number_of_replicas": "0",
 "index.number_of_shards": "1",
 "script.disable_dynamic": "false",
 "action.disable_delete_all_indices": "true",
 "index.refresh_interval": "5s",
 "http.enabled": "true",
 "marvel.agent.enabled": "false"}

In Marvel, the only node that shows up in the "NODES" section is the master, and it shows no updates received for the last 6 hours, which is longer than the cluster has existed.

However, in the "CLUSTER SUMMARY" section, the number of nodes is correctly listed as "3". So, something is getting to Marvel correctly! I've been experimenting with various options and have not been able to get the NODES section to show properly. Do you happen to see anything in my configurations above that looks suspicious?

Thank you VERY MUCH for all your help.
Chris

Do you have time synced across the nodes (i.e. via NTP)?
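A quick way to eyeball clock skew is to compare each node's clock against the machine you run this from. This is only a sketch: it assumes passwordless SSH to the hostnames used earlier in this thread (n5, n6, n9), which are specific to this setup.

```shell
# Sketch: report each node's clock skew relative to the local machine.
# Assumes SSH access to the hosts named in this thread (n5, n6, n9).
for host in n5 n6 n9; do
  remote=$(ssh "$host" date -u +%s)   # remote clock, epoch seconds
  local_ts=$(date -u +%s)             # local clock, epoch seconds
  echo "$host skew: $(( remote - local_ts ))s"
done
```

Skew of more than a second or two between nodes is enough to confuse time-bucketed monitoring data; enabling ntpd (or chrony) on every VM fixes it.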


Grumble.
Yes, that was it.
It was a new VM environment and NTP was not enabled.
Sorry about that.

Thank you for your time.

This thread saved my bacon - thanks guys!