I am running Ubuntu Server 20.04 LTS in a VM hosted on a Synology NAS. The configuration is:
- Compute: Intel Celeron J3455 (4 cores allocated to the VM), 1.5 GHz base / 2.3 GHz burst
- Memory: 4 GB DDR3.
- Storage: 7200 RPM drive.
Following is the usage:
I am planning to move this from the NAS to a workstation running:
- VMware ESXi (free)
- Compute: Intel Xeon W-1290, 10 cores (20 threads), 3.20 GHz base / 5.20 GHz Turbo
- Memory: 64 GB DDR4 RAM @ 2933 MHz
- Storage: 7200 RPM drive + PCIe M.2 SSD (Dell Class 40)
I am a self-funded student, and I need this system for my final-year project while pursuing a master's in software and systems security. I would like help calculating system requirements, assuming the load remains consistent. I am running the Elastic Stack on this system and am facing the following issues:
- System load is consistently above 4.0. Is this due to an IOPS bottleneck? If so, will adding RAM during the migration help, or does it require faster storage such as an SSD?
- Will the following configuration relieve the load and improve performance without over-provisioning?
  - Compute: 3 virtual sockets x 2 cores per socket = 6 vCPUs
  - Memory: 16 GB RAM
  - Storage: local SATA, 7200 RPM NAS-grade drive
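To decide between RAM and faster storage, I have been distinguishing CPU pressure from IO pressure with a small check: load average above the core count combined with high iowait points at storage. A minimal sketch of that check, reading the standard Linux `/proc` files (the 1-second sample interval is just a convenient default):

```python
import os
import time

def load_per_core():
    """1-minute load average divided by core count; > 1.0 means oversubscribed."""
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])
    return load1 / os.cpu_count()

def iowait_fraction(interval=1.0):
    """Share of CPU time spent in iowait over the sample interval."""
    def cpu_times():
        with open("/proc/stat") as f:
            # First line: "cpu user nice system idle iowait irq softirq ..."
            return [int(x) for x in f.readline().split()[1:]]
    a = cpu_times()
    time.sleep(interval)
    b = cpu_times()
    deltas = [y - x for x, y in zip(a, b)]
    total = sum(deltas)
    return deltas[4] / total if total else 0.0  # index 4 = iowait

if __name__ == "__main__":
    print(f"load per core: {load_per_core():.2f}")
    print(f"iowait share:  {iowait_fraction():.1%}")
```

On my box, a load per core well above 1.0 with double-digit iowait percentages is what makes me suspect the 7200 RPM drive rather than the CPU.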
The underlying operating system will continue to be Ubuntu 20.04 LTS. I want to learn the Elastic Stack comprehensively, so I am hoping to deploy multiple nodes to test node roles, query scheduling, etc., with the goal of using the Elastic Stack as an enterprise SIEM.
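For the multi-node experiments, the layout I have in mind is per-node role assignment via `node.roles` in `elasticsearch.yml` (node names here are illustrative, not from my current setup):

```yaml
# node 1: dedicated master-eligible node
node.name: es-master-1
node.roles: [ master ]

# node 2: hot data + ingest
node.name: es-data-1
node.roles: [ data_hot, ingest ]
```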
The system's current primary purpose is to ingest logs from 50 honeypots deployed around the world in AWS and Azure, averaging 500 EPS (events per second). These events go through a Logstash pipeline running on a Raspberry Pi before being sent to the Elastic Stack.
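For sizing, my back-of-envelope arithmetic from the 500 EPS figure is below. The per-event size (1 KB), retention window (30 days), and replica count (1) are assumptions I picked for illustration, not measured values:

```python
EPS = 500                 # average events per second (measured)
BYTES_PER_EVENT = 1_000   # assumed average event size
RETENTION_DAYS = 30       # assumed retention window
REPLICAS = 1              # assumed one replica per primary shard

events_per_day = EPS * 86_400
raw_gb_per_day = events_per_day * BYTES_PER_EVENT / 1e9
total_gb = raw_gb_per_day * RETENTION_DAYS * (1 + REPLICAS)

print(f"events/day: {events_per_day:,}")    # 43,200,000
print(f"GB/day raw: {raw_gb_per_day:.1f}")  # 43.2
print(f"GB stored:  {total_gb:.0f}")        # 2592
```

Roughly 43 GB/day raw, so around 2.6 TB on disk over 30 days with one replica, before compression; is that the right way to reason about disk, or am I missing a factor?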
What steps should I take to ensure I am not over-provisioning and can still use the system for other projects too?