What are the requirements?

Hello,
I want to set up 200 nodes, but I don't have time to configure each one, so I cloned an existing node's environment.

When I clone the environment where one of the Elasticsearch nodes is located, I run into problems with the certificate.

Is there a solution to this?
Or do I have to create each node separately?

You cannot just clone nodes and expect it to work. In a cluster of that size, make sure you set up 3 dedicated master nodes, and do not make all nodes master eligible.
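For reference, a dedicated master node is declared through `node.roles` in `elasticsearch.yml`. A minimal sketch, assuming Elasticsearch 7.9+ and placeholder hostnames (`master-1.example` etc. are not from the thread):

```yaml
# elasticsearch.yml for a dedicated (master-only) node — hostnames are placeholders
cluster.name: my-cluster
node.name: master-1                # must be unique per node
node.roles: [ master ]             # master-eligible only, no data role
discovery.seed_hosts:
  - master-1.example
  - master-2.example
  - master-3.example
cluster.initial_master_nodes:      # only needed for the very first cluster bootstrap
  - master-1
  - master-2
  - master-3
```

The other two dedicated masters would use the same file with only `node.name` changed.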

Yes, I did.

I mean,
I am creating virtual nodes on a server with the following specifications:
RAM: 64 GB
CPU: 8 cores
SSD: 100 GB
HDD: 5 TB

Setting up 225 nodes one by one would take quite a long time, so I cloned the environment.
But when I connected each cloned node to the other nodes, there appeared to be a conflict somewhere, and I think the cause of the problem is the certificate.

Question
Is it possible to use this method (cloning), or do I have to create each environment from scratch?

Even though all data nodes will have the same discovery settings (pointing at the 3 dedicated master nodes), each one requires a unique name, so a straight clone will not work. I have never tried to clone nodes, so I cannot help much with any issues around that.
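One likely source of the certificate conflict: when transport security is enabled, a node's certificate typically carries that node's hostname/IP in its subject alternative names, so a cloned certificate will not match the new machine. A hedged sketch of regenerating a per-node certificate with Elasticsearch's `elasticsearch-certutil` (the node name, DNS name, IP, and CA file below are illustrative placeholders, not values from this thread):

```shell
# Regenerate a transport certificate for one cloned node, signed by your existing CA
bin/elasticsearch-certutil cert \
  --ca elastic-stack-ca.p12 \
  --name data-node-7 \
  --dns data-node-7.example \
  --ip 10.0.0.17 \
  --out data-node-7.p12
```

Each cloned node would then point `xpack.security.transport.ssl.keystore.path` at its own certificate instead of the copied one.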

This sounds like a potentially reasonable specification for warm-tier nodes (depending on your query latency requirements), but if you are going to index a lot of data into them, I suspect you will run into performance problems quite quickly.

Also:
I currently have a server with high specifications that I divide into virtual environments to build a cluster. But I have been wondering whether I should instead install the system directly on the server and run it as a single high-spec node.

Question
What is the correct and practical way: should the server run several nodes, or a single node that uses the full capacity of the server?

I also gave each node a unique name, but it didn't work. Each node started up as if it were in a different cluster.

If you have very large servers, it is most common to set up multiple nodes per host using containers or virtualisation. Running a single very large node per host is, in my experience, likely to be suboptimal.

Did you make each data node a dedicated data node (i.e. not master eligible), and provide it with the dedicated master nodes as seed hosts?
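Concretely, a dedicated data node's `elasticsearch.yml` would look something like the following sketch (cluster name, node name, and master hostnames are placeholders consistent with the earlier example, not settings from this thread):

```yaml
# elasticsearch.yml for a dedicated data node — hostnames are placeholders
cluster.name: my-cluster           # must match the masters' cluster.name,
                                   # otherwise nodes form separate clusters
node.name: data-7                  # unique per node; clones must change this
node.roles: [ data ]               # data only, not master eligible
discovery.seed_hosts:              # point discovery at the 3 dedicated masters
  - master-1.example
  - master-2.example
  - master-3.example
```

A mismatched `cluster.name` (or a reused data directory from a clone) is a common reason nodes start up "as if in a different group".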

Yes. I think I will still face some challenges with this approach, so I thought of running a single node and letting it use the full capacity of the server.

But I don't know if this method is correct or not

I have only one server, so either I divide it into virtual environments or I run a single node that uses all the server's resources. Either way, whether I run one node or several, in the end it is still one server.

Yes, I did.

What do you recommend?

Should I run one node on the entire server, divide it into virtual environments and run more than one node, or replace the server with several smaller servers so that the nodes are physically separate?

No.

Yes. Also make sure to not overprovision CPU and/or RAM as Elasticsearch assumes it has complete access to any assigned resources.
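As an illustration of not overprovisioning, here is a hedged docker-compose sketch for two data nodes on one host, with CPU and memory pinned so each Elasticsearch process owns its assigned resources (image version, node names, and resource values are illustrative assumptions, not from this thread):

```yaml
# docker-compose.yml sketch — two ES data nodes on one host, resources pinned
services:
  es-data-1:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=es-data-1
      - ES_JAVA_OPTS=-Xms16g -Xmx16g   # fixed heap, well under the container limit
    mem_limit: 32g                     # hard memory cap for this node
    cpus: 4                            # dedicated CPU share, no oversubscription
  es-data-2:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=es-data-2
      - ES_JAVA_OPTS=-Xms16g -Xmx16g
    mem_limit: 32g
    cpus: 4
```

The key point is that the per-container limits sum to no more than the host actually has, so Elasticsearch never competes for resources it believes it owns.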

Deploying across multiple servers will generally improve resiliency as one host can fail without taking down the whole cluster, so that is also an option.