## Preamble

[[PVE]] is a type-1 [[hypervisor]] which provides the infrastructure glue, plus a [[CLI]] and [[webUI]], to utilise all available Linux virtualisation technologies. It should be viewed as a productivity tool for managing Linux virtualisation through a [[webUI]]. It is based on [[Debian]] 11 and is therefore GNU/Linux-standards-based, with plenty of power at both the frontend and the backend.

[[PVE]] offers two types of virtualisation:

- [[KVM]] / [[QEMU]]
- [[LXC]]

If you have the time and like tinkering, you can achieve a similar level of functionality by using the Linux-provided tools to manage the virtualisation technologies [[KVM]], [[LXC]] and [[QEMU]] directly.

> [!info]- useful links - dive deeper into Linux virtualisation
>
> https://github.com/lxc/lxc
> https://www.linux-kvm.org/page/Management_Tools

I highly recommend [[PVE]] over any other hypervisor for the following reasons:

- Based on open systems and technologies
- Utilises Linux standard technologies
- Based on [[Debian]] as the underlying OS
- Open source
- Actively developed
- [[FOC]] for anyone who cannot afford a license, without software limits and restrictions
- Reasonable licensing cost for any size of setup (good tiered system); [[IMHO]] virtually anyone in employment can afford it

## Definitions

A ***[[PVE]]-host*** (also referred to as a ***[[PVE]]-node*** and coded as [[PVEn]]) represents a physical computer system where [[PVE]] has been [[BM]] installed. Within the [[PVEn]], numerous [[KVM]] and [[LXC]] machines each host a guest-[[OS]].

When we refer to a ***host*** we mean a [[PVEn]], and when we refer to a ***guest*** we mean a [[KVM]] or [[LXC]]. A ***workload*** represents a [[KVM]] or [[LXC]].

## Noteworthy [[PVE]] features

- Dark theme available as of PVE 7.4-3 ![[pve-dark-theme.png]]
- [[KVM]]s and [[LXC]]s share [[CPU]] resources; you can therefore have multiple [[KVM]]s/[[LXC]]s whose total number of cores exceeds the physically available cores; however, no single [[KVM]]/[[LXC]] can use more cores than physically exist
- [[KVM]] fixed memory allocation (Max Memory = Min Memory)
- [[KVM]] elastic memory allocation (Max Memory > Min Memory); [[PVE]] will ensure allocated memory never falls below the [[KVM]]-minimum and never exceeds the [[KVM]]-maximum, and the difference between max and min is taken from/released to the [[PVEn]] as [[KVM]] utilisation requirements change over time (this change is driven by the resource utilisation of the guest-[[OS]] running inside the [[KVM]])
- For [[KVM]]s the best ***[[CPU]]-type*** to choose is `host`, as it passes all host [[CPU]]-flags to the [[KVM]] kernel; the only drawback is that live migration of a `host`-based [[KVM]] to a dissimilar [[PVEn]] (different [[CPU]] type) might not be possible - the migration process might fail, depending on which [[CPU]]-flags are used by the [[KVM]] and how many [[CPU]]-flags differ between the [[PVEn]]s (see the configuration sketch after this list)
- **Root File System**: Use [[ZFS]] over any other file/storage system (e.g. [[EXT]], [[RAID]]) on a [[PVEn]] with adequate [[CPU]] & [[RAM]] resources, and ideally in a #RAID1 arrangement for redundancy. [[ZFS]] can be used on featherweight [[PVE]] nodes, but the file system will be slower than if using [[EXT]]; e.g. on a [[PVEn]] with 4 cores, 8GB [[RAM]] and single-drive [[eMMC]] storage, avoid [[ZFS]] as it will be slower than [[EXT]]. Read [SWAP on ZFS](https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#zfs_swap) prior to making a decision.
  As SWAP is set on a per-[[KVM]]/[[LXC]] basis, you can disable SWAP or set it to a low value. On a [[PVEn]] with fast cores, adequate [[RAM]] (e.g. upwards of 32GB) and a fast [[NVMe]] [[SSD]], SWAP is rarely used in [[KVM]]s/[[LXC]]s, and even when used it presents no real performance issues, unless the [[PVEn]] is memory starved. It is best practice to set up SWAP on an [[EXT]] volume or not use it at all. If SWAP is not used, the [[PVEn]] must have enough [[RAM]] to serve all workloads without running into memory starvation.
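To make the memory, [[CPU]]-type and SWAP settings above concrete, here is a minimal [[CLI]] sketch using the standard `qm` and `pct` tools; the VMID `100` and CTID `101` are hypothetical - substitute your own, and adjust the sizes to your workloads.

```bash
# Hypothetical IDs: KVM 100, LXC 101 - substitute your own.

# Elastic memory allocation for a KVM: max 8192 MB, min (balloon) 4096 MB.
# PVE keeps the allocated memory between the two as guest utilisation changes.
qm set 100 --memory 8192 --balloon 4096

# Fixed memory allocation: set balloon equal to memory (or 0 to disable ballooning).
qm set 100 --memory 8192 --balloon 8192

# Pass all host CPU-flags to the KVM kernel (best performance, but live
# migration to a dissimilar PVEn might fail).
qm set 100 --cpu host

# Disable SWAP for an LXC (or set a low value, in MB).
pct set 101 --swap 0
```

The same settings are also exposed in the [[webUI]]: under a guest's **Hardware** tab for a [[KVM]] and **Resources** tab for an [[LXC]].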
## Storage Considerations

Although [[ZFS]] is superior to [[EXT]], it comes with some **very important** considerations, especially if you decide to use it as the root-[[FS]] (boot-[[FS]]):

1. [[ZFS]] loves [[RAM]]; consider 1GB [[RAM]] per 1TB of disk storage
2. [[ZFS]] loves fast storage interfaces; consider only [[NVMe]], maybe [[SATA]], and definitely no [[USB]]
3. [[ZFS]] loves fast storage systems; for the root-[[FS]] consider only [[SSD]] and definitely no [[SD]], [[eMMC]] or [[HDD]]
4. [[ZFS]] loves wearing down your [[SSD]] faster than [[EXT]] does; [[SSD]] wear on [[ZFS]] is anything between 2 and 8 times faster in comparison to [[EXT]], depending on system resource capability and utilisation; therefore, if your [[SSD]] is very basic it will fail faster with [[ZFS]] than with [[EXT]]. This is a short list of [[SSD]] cell technologies, ordered by durability from best to worst: **SLC -> MLC -> TLC -> QLC. Most consumer [[SSD]]s are QLC based.** One more consideration for [[SSD]]s is whether they have integrated technology to maintain cell data integrity in the case of a sudden power loss, made available in the form of a capacitor or battery (power-loss protection).

The above factors need to weigh into your decision to use [[ZFS]] as your root-[[FS]] or play it safe using [[EXT]]. One last thing: use [[ECC]] [[RAM]] if possible to guarantee the integrity of the [[ZFS]] data structures held in [[RAM]]. This is true for any [[PVE]] deployment, but more relevant to [[ZFS]] because it uses [[RAM]] to hold its most important data structures. Everything else being equal, having a good quality [[SSD]], preferably in a #RAID1 or #RAID-Z1 arrangement, is more important than having [[ECC]] [[RAM]]. However, adopting both is very desirable.

[[ZFS]] is superior to [[EXT]] in every way, if your system can support it. If you purchase [[NVMe]] [[SSD]]s from reputable manufacturers you will usually face no issues, but the lifespan of your [[SSD]]s will be shorter than if you used [[EXT]]. To protect the [[PVE]] boot drive, at a minimum use [[ZFS]] in mirrored mode (known as #RAID1), protecting your data from a single drive failure (i.e. you can still boot [[PVE]] even if one drive fails). A few health-check commands are sketched at the end of this note.

> ref: [Proxmox ZFS on Linux - Installation as Root File System](https://pve.proxmox.com/wiki/ZFS_on_Linux#_installation_as_root_file_system)

## Useful Resources

[@ElectronicsWizardry](https://www.youtube.com/@ElectronicsWizardry/videos) - Proxmox-focused YouTube channel

[Proxmox 7.1 Guide: From blank system to Hypervisor](https://www.youtube.com/watch?v=Ce0uwBxbVRQ) - The most complete and useful video guide I am aware of.
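Following up on the storage considerations above, a minimal sketch of the commands I use to keep an eye on a [[ZFS]] mirror, [[RAM]] usage and [[SSD]] wear. It assumes the default `rpool` name created by the [[PVE]] installer, that the first [[NVMe]] drive is `/dev/nvme0n1`, and that `smartmontools` is installed (`apt install smartmontools`); adjust to your hardware.

```bash
# Mirror health: both drives should show ONLINE with no read/write/cksum errors.
zpool status rpool

# Capacity and fragmentation overview.
zpool list rpool

# ZFS ARC usage - relevant to the "1GB RAM per 1TB storage" rule of thumb.
arc_summary | head -n 40

# Optionally cap the ARC at runtime (here: 8 GiB, value in bytes) on a
# RAM-constrained PVEn; make it persistent via /etc/modprobe.d/zfs.conf
# (options zfs zfs_arc_max=8589934592).
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# NVMe SSD wear indicator ("Percentage Used" in the SMART health log).
smartctl -a /dev/nvme0n1 | grep -i 'percentage used'
```

If `zpool status` ever reports the mirror as degraded, the node still boots from the surviving drive - which is exactly the protection the #RAID1 arrangement buys you; replace the failed drive promptly.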