Big data on one server without a cluster

Hello everyone, I am currently evaluating Elasticsearch and plan to move my search over to it. While calculating how much space my data will take up, I realized that I will need several disks: the database will be approximately 10-12 terabytes. At the moment I am using my database's built-in search, but I am no longer satisfied with the quality of the results and the search settings. I read that Elasticsearch used to support multiple disks via path.data, but this is now marked as deprecated. I have a very nice server with 1 TB of RAM and about 100 TB of storage, made up of disks of 8 TB each. The question is: how can I spread my indexes across several disks without using Docker or other virtualization and clustering tools? Can I do this if I only have 1 Elasticsearch node?
A small clarification: everything I use runs on Windows, and I plan to install Elasticsearch on this system too, without Docker and without an Elasticsearch cluster.

Hi @habajol675

Well, I think you are kind of answering your own question.

Most of us with such a nice server would run several nodes, say 3 nodes with separate data paths... A single node would probably not be as performant, as flexible, etc.

But that said, per the docs here:

As an alternative to multiple data paths, you can create a filesystem which spans multiple disks with a hardware virtualisation layer such as RAID, or a software virtualisation layer such as Logical Volume Manager (LVM) on Linux or Storage Spaces on Windows. If you wish to use multiple data paths on a single machine then you must run one node for each data path.
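If you stay with a single node on Windows, Storage Spaces is the route the docs are pointing at. A minimal sketch of what that looks like in PowerShell (the pool/volume names and the E: drive letter are made up for illustration; note that Simple resiliency stripes across disks with no redundancy, so pick Mirror or Parity if you want the volume to survive a disk failure):

```powershell
# Run as Administrator. Pools the spare, un-partitioned disks into one big
# NTFS volume that a single Elasticsearch data path can then live on.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "EsPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "EsPool" -FriendlyName "EsData" `
    -ResiliencySettingName Simple -UseMaximumSize -ProvisioningType Fixed
Get-VirtualDisk -FriendlyName "EsData" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "EsData"
# Then in elasticsearch.yml:  path.data: E:\elasticsearch\data
```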

Curious why you don't want to use virtualization / Docker; with 1 compose file you could have 3 nodes running very easily... Something like this (a sketch only, not production-ready: security is disabled for brevity, and the D:/E:/F: host paths are hypothetical stand-ins for three of your physical disks):
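```yaml
# docker-compose.yml - minimal 3-node sketch; one data path per node,
# each on its own physical disk.
version: "3.8"
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - node.name=es01
      - cluster.name=es-single-host
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - xpack.security.enabled=false          # demo only, enable in production
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
    volumes:
      - D:/esdata/es01:/usr/share/elasticsearch/data   # disk 1
    ports:
      - "9200:9200"
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - node.name=es02
      - cluster.name=es-single-host
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
    volumes:
      - E:/esdata/es02:/usr/share/elasticsearch/data   # disk 2
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - node.name=es03
      - cluster.name=es-single-host
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
    volumes:
      - F:/esdata/es03:/usr/share/elasticsearch/data   # disk 3
```

Each node gets exactly one data path on its own disk, which is the "one node per data path" model the docs describe.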

You could also probably run multiple nodes on bare metal, but that would require more work for sure.
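Roughly, the bare-metal version on Windows would be: one Elasticsearch install, one config directory per node, each elasticsearch.yml pointing path.data at a different disk, and each node started with its own ES_PATH_CONF. A sketch with hypothetical paths:

```yaml
# config-node1\elasticsearch.yml - repeat for node-2 / node-3 with their own
# disk, node.name, and ports (e.g. 9201/9301, 9202/9302).
cluster.name: es-single-host
node.name: node-1
path.data: D:\elasticsearch\node-1\data    # a different physical disk per node
path.logs: D:\elasticsearch\node-1\logs
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301", "127.0.0.1:9302"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```

Then start each node from its own PowerShell session, e.g. `$env:ES_PATH_CONF = "C:\elasticsearch\config-node1"; .\bin\elasticsearch.bat`.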

