Setting up a cluster within the same physical server, divided into four VMs

Hi,

I have a 64 GB physical server that I have divided into 4 VMs. How do I create a cluster with 1 master node and 2 data nodes? The local IP will be the same for all the servers, and I don't want to allow access to the data from outside the server, so I can't put a public IP in the elasticsearch.yml file.

Many Thanks

Maybe if you give some more information someone could help you. How did you convert one server into 4? What kind of virtualization software do you use? Could you create an internal network and use it for the cluster traffic? Then let the ingest node have an external IP.
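To make the internal-network idea concrete, here is a minimal sketch of what elasticsearch.yml could look like on the dedicated master VM. The cluster name, node names, and the 192.168.56.0/24 host-only network addresses are all made-up assumptions for illustration; the exact discovery setting also depends on your Elasticsearch version:

```yaml
# elasticsearch.yml for the dedicated master VM (illustrative values only)
cluster.name: my-internal-cluster
node.name: master-1
node.master: true
node.data: false

# Bind only to the hypervisor's internal (host-only) network,
# so nothing is reachable from outside the physical server.
network.host: 192.168.56.11

# Let the nodes find each other over the internal network.
# On 7.x; on 6.x the equivalent is discovery.zen.ping.unicast.hosts.
discovery.seed_hosts: ["192.168.56.11", "192.168.56.12", "192.168.56.13"]
cluster.initial_master_nodes: ["master-1"]
```

On the two data VMs you would flip node.master/node.data and change node.name and network.host to that VM's own internal address.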

I have a Linux server; I installed a hypervisor on top of it and created 4 VMs. I now want to set up an Elasticsearch cluster across these VMs.

Sorry for being so naive. I was wondering if you could tell me directly what questions and details I should post here.

Why have you decided to split up the server and run multiple small nodes on it? Why not run a single large node instead?

My data volume has increased and one node is not able to perform well.

Adding nodes does not necessarily help if you are not also scaling out or up by adding hardware at the same time.

OK. I was going for a cluster setup so that I could run multiple nodes, each with 8 GB of RAM. That way, during garbage collection and cleanup, every node would have dedicated CPU power and a smaller heap, which might make queries faster. This is all my assumption; I might be completely wrong here.

At present I have one node with 32 GB of RAM. Total server RAM is 64 GB.

Have you identified what is limiting performance? Is it CPU? Is it disk I/O?

I have a Linux server, and running the top command shows that only 2 of the 32 CPU cores are utilized, while RAM utilization goes up to 80%. I have enterprise-grade SSDs installed in my server, so I/O should be good.

What does heap usage look like?

What does a grep command that looks for a running Tomcat process have to do with Elasticsearch? And that output shows your PID and PPID, which have nothing to do with heap usage.

Are you looking for this information?

"jvm": {
                    "timestamp": 1555397450160,
                    "uptime_in_millis": 487637353,
                    "mem": {
                        "heap_used_in_bytes": 9819971360,
                        "heap_used_percent": 57,
                        "heap_committed_in_bytes": 16979263488,
                        "heap_max_in_bytes": 16979263488,
                        "non_heap_used_in_bytes": 168745848,
                        "non_heap_committed_in_bytes": 179281920,
                        "pools": {
                            "young": {
                                "used_in_bytes": 711078912,
                                "max_in_bytes": 1605304320,
                                "peak_used_in_bytes": 1605304320,
                                "peak_max_in_bytes": 1605304320
                            },
                            "survivor": {
                                "used_in_bytes": 323440,
                                "max_in_bytes": 200605696,
                                "peak_used_in_bytes": 200605696,
                                "peak_max_in_bytes": 200605696
                            },
                            "old": {
                                "used_in_bytes": 9108569008,
                                "max_in_bytes": 15173353472,
                                "peak_used_in_bytes": 12201948216,
                                "peak_max_in_bytes": 15173353472
                            }
                        }
                    },
                    "threads": {
                        "count": 304,
                        "peak_count": 305
                    },
                    "gc": {
                        "collectors": {
                            "young": {
                                "collection_count": 1156,
                                "collection_time_in_millis": 54552
                            },
                            "old": {
                                "collection_count": 26,
                                "collection_time_in_millis": 12392
                            }
                        }
                    },
                    "buffer_pools": {
                        "direct": {
                            "count": 193,
                            "used_in_bytes": 1078637781,
                            "total_capacity_in_bytes": 1078637780
                        },
                        "mapped": {
                            "count": 12105,
                            "used_in_bytes": 1090471247872,
                            "total_capacity_in_bytes": 1090471247872
                        }
                    },
                    "classes": {
                        "current_loaded_count": 16635,
                        "total_loaded_count": 17166,
                        "total_unloaded_count": 531
                    }
                },
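Yes, that is the right block; it comes from the nodes stats API (GET _nodes/stats/jvm). The heap_used_percent field is simply heap used divided by heap max. A quick sketch that recomputes it from the numbers posted above:

```python
# Recompute heap_used_percent from the nodes-stats excerpt above
# (byte values copied from the posted output).
mem = {
    "heap_used_in_bytes": 9819971360,
    "heap_max_in_bytes": 16979263488,
}

heap_used_percent = 100 * mem["heap_used_in_bytes"] / mem["heap_max_in_bytes"]
print(int(heap_used_percent))  # 57, matching "heap_used_percent" in the output
```

At 57% of a roughly 16 GB heap, this by itself does not look like heap pressure, which fits the earlier observation that CPU cores are mostly idle.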
