What is the best practice to follow?

Hi,

I have a server with high specifications, and I want to use Elasticsearch to store data from 3,100 devices. What is the best practice? These are the options I am considering:
1. Install Linux directly on the server and run a single Elasticsearch node, so that the node gets the full power and resources of the server and is therefore as strong as possible.

2. Divide the server into several virtual environments and run one node in each of them.

3. Use three smaller servers instead of the one large server, install Linux on each, and run one Elasticsearch node per server, so that each node gets the full resources of its machine, with the nodes linked over a physical network.

Which of these is the best practice for fault tolerance and speed with large data volumes, and why?
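To make option 3 concrete: as a sketch (the cluster name, node names, and hostnames are just placeholders I made up), I imagine each of the three servers would run one node with an elasticsearch.yml along these lines:

```yaml
# elasticsearch.yml on server 1 (repeat on the other servers, changing node.name)
cluster.name: devices-cluster          # placeholder cluster name
node.name: es-node-1                   # es-node-2 / es-node-3 on the other servers
network.host: 0.0.0.0                  # listen on the physical network
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]   # placeholder hostnames
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]
```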

Is this the same question that was answered here?

Almost, yes, but there are additional details, and I need an answer I can rely on, because the requirements have now been raised and I want the reasoning behind it. Can you explain it to me?

I know that was a suitable answer, but I still have a question. Logically, when I run one node on the entire server, all of the server's resources belong to that node, so the hardware is fully utilized.

When I run several virtual nodes, I have to install more than one operating system and partition the storage, so some of the server's resources are consumed by the operating systems themselves, and so on.
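To illustrate the extra complexity: as far as I understand, each additional node on the same machine needs its own ports and its own data path, roughly like this sketch of a second node's elasticsearch.yml (the values are only examples):

```yaml
# elasticsearch.yml for a second node on the same machine (illustrative values)
cluster.name: devices-cluster
node.name: es-node-2
http.port: 9201                            # the first node keeps the default 9200
transport.port: 9301                       # the first node keeps the default 9300
path.data: /var/lib/elasticsearch/node-2   # each node needs its own data directory
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301"]
```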

Can you guide me on what is appropriate here?

Can you give a little more context about what you mean by virtual nodes?

Are you talking about using VMs vs an entire bare metal server?

What I'm comparing is: a server with one node that uses all of the server's resources, versus the same server divided into virtual environments, with one node in each environment, all connected together on one network.

For example, in the single-node case the network link reaches 100 Gb/s, because the node connects directly to the switch over a physical connection.

But in the multi-node case it drops to about 10 Gb/s, because the connections between the nodes are virtual.

Which one is best?

Yes.

What are the specifications? Do you have local SSD storage?

In my experience, Elasticsearch does not scale vertically well beyond a certain point. There are use cases where a very large page cache can justify nodes with more than 64 GB of RAM, but as you can see on Elastic Cloud (which follows best practices), they use virtualization/containers to scale out once nodes reach around 64 GB of RAM.
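As a rough sketch of what that scale-out pattern can look like on a single large host (assuming Docker Compose is available; the image tag, names, and heap sizes are illustrative, with the heap kept below ~32 GB so compressed object pointers stay enabled, and security disabled only to keep the example short):

```yaml
# docker-compose.yml: two containerized nodes sharing one large host (sketch only)
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.0
    environment:
      - node.name=es01
      - cluster.name=devices-cluster
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - xpack.security.enabled=false     # for a quick local test only
      - ES_JAVA_OPTS=-Xms31g -Xmx31g     # heap below ~32 GB (compressed oops)
    volumes:
      - esdata01:/usr/share/elasticsearch/data
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.0
    environment:
      - node.name=es02
      - cluster.name=devices-cluster
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms31g -Xmx31g
    volumes:
      - esdata02:/usr/share/elasticsearch/data
volumes:
  esdata01:
  esdata02:
```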

Given the per-device data volume estimates you provided in the other thread, the large number of devices, and the limited amount of hardware, this sounds like a proof-of-concept or test environment. Is that what you are setting up?