Proxmox Configuration

Disk setup

LVM Configuration of SSD

At install time I told the installer to only use the first 50GB of my 1TB install drive, and it auto-partitioned using LVM. I did this because I needed some temporary disk space to move some backups off the 6TB disks so they were ready for a new ZFS setup, and I wanted to keep the root LVM small to start with. Proxmox will happily run on a 10GB / partition.

root@proxmox:/# lsblk
sda                 8:0    0 931.5G  0 disk
├─sda1              8:1    0  1007K  0 part
├─sda2              8:2    0   512M  0 part /boot/efi
└─sda3              8:3    0  49.5G  0 part
  ├─pve-swap      253:0    0   6.1G  0 lvm  [SWAP]
  ├─pve-root      253:1    0  12.3G  0 lvm  /
  ├─pve-data_tmeta 253:2   0     1G  0 lvm
  │ └─pve-data    253:4    0    23G  0 lvm
  └─pve-data_tdata 253:3   0    23G  0 lvm
    └─pve-data    253:4    0    23G  0 lvm
sdb                 8:16   0   5.5T  0 disk
sdc                 8:32   0   5.5T  0 disk
sdd                 8:48   1   3.6G  0 disk
sr0                11:0    1  1024M  0 rom

Using fdisk I added a partition to sda; accepting the default values creates a partition spanning the remaining disk space.

fdisk /dev/sda
n [enter] # accept all default values to create a partition with the remaining disk space
w [enter] # write changes to disk
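If you prefer a non-interactive approach, sgdisk (from the gdisk package) can do the same in one command. This is a sketch, assuming partition number 4 is free and /dev/sda is your boot disk - double-check before running:

```shell
# Create partition 4 spanning the largest free block, typed as Linux LVM (8e00)
sgdisk -n 4:0:0 -t 4:8e00 /dev/sda
partprobe /dev/sda   # ask the kernel to re-read the partition table
```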

I recommend a reboot here to ensure that the changes to the boot disk are registered. Afterwards, lsblk will show the remaining space as a new partition.

└─sda4              8:4    0 881.5G  0 part

I formatted and used this space to hold some backups while I worked on the ZFS disks, and removed the filesystem afterwards.

When finished, I created an LVM PV (physical volume) with:

pvcreate /dev/sda4

Then I added it to the existing LVM VG (volume group) named pve:

vgextend pve /dev/sda4

Now I can grow the data LV (logical volume) to use the rest of the disk:

lvextend -l +100%FREE /dev/pve/data
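Each step can be confirmed with the standard LVM reporting commands, which are read-only and safe to run at any point:

```shell
pvs       # sda4 should now appear as a PV in VG pve
vgs pve   # VFree should drop to (near) zero after the lvextend
lvs pve   # the data LV should span the added space
```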

ZFS Configuration of HDDs with mirroring

Since I wanted to use the machine for backups and wanted high fault tolerance, I decided to use a ZFS mirror of the 2x 6TB disks for 6TB of redundant storage. As an added bonus, mirroring increases read speeds significantly (~50%).

The drives need no preparation beyond removing all partitions - use fdisk if needed (option d deletes partitions, w writes the changes).

Once the drives are ready, make a mountpoint on your filesystem:

mkdir /mnt/ZFSMIRROR/

Create the ZFS pool, making sure to specify the correct two drives:

zpool create -m /mnt/ZFSMIRROR/ zfsmirror mirror /dev/sdb /dev/sdc

Note that zfsmirror is the name that I have chosen, mirror is the vdev type, followed by the two drives you want to include, and -m specifies the mountpoint.
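Before putting data on the pool it is worth checking its health and layout:

```shell
zpool status zfsmirror   # should show a mirror-0 vdev containing sdb and sdc
zpool list zfsmirror     # shows size, allocation and overall health
```

If the sdX names on your system move around between boots, the stable /dev/disk/by-id paths can be used instead when creating the pool.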

Now I want to create separate datasets for backups, ISOs and disk images so I can manage them separately.

zfs create zfsmirror/backups
zfs create zfsmirror/ISOs
zfs create zfsmirror/images

Datasets can be managed independently, somewhat like partitions, each with its own properties.
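Per-dataset properties are what make the split useful. For example (the lz4 compression and the quota value here are my own choices for illustration, not something the setup requires):

```shell
zfs set compression=lz4 zfsmirror/backups       # transparent compression for backup data
zfs set quota=500G zfsmirror/ISOs               # cap the space ISOs can consume
zfs list -o name,used,avail,compression,quota   # inspect the datasets
```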

Disks after configuration:

root@proxmox:~# lsblk
sda                 8:0    0 931.5G  0 disk
├─sda1              8:1    0  1007K  0 part
├─sda2              8:2    0   512M  0 part /boot/efi
├─sda3              8:3    0  49.5G  0 part
│ ├─pve-swap      253:0    0   6.1G  0 lvm  [SWAP]
│ ├─pve-root      253:1    0  12.3G  0 lvm  /
│ ├─pve-data_tmeta 253:2   0     1G  0 lvm
│ │ └─pve-data    253:4    0 910.6G  0 lvm
│ └─pve-data_tdata 253:3   0 910.6G  0 lvm
│   └─pve-data    253:4    0 910.6G  0 lvm
└─sda4              8:4    0 881.5G  0 part
  └─pve-data_tdata 253:3   0 910.6G  0 lvm
    └─pve-data    253:4    0 910.6G  0 lvm
sdb                 8:16   0   5.5T  0 disk
├─sdb1              8:17   0   5.5T  0 part
└─sdb9              8:25   0     8M  0 part
sdc                 8:32   0   5.5T  0 disk
├─sdc1              8:33   0   5.5T  0 part
└─sdc9              8:41   0     8M  0 part
sdd                 8:48   1   3.6G  0 disk
sr0                11:0    1  1024M  0 rom

Configure Proxmox APT repository

Make sure to point apt at the no-subscription repository if you are using the free version. /etc/apt/sources.list.d/pve-enterprise.list should contain the line deb http://download.proxmox.com/debian/pve stretch pve-no-subscription and nothing else. Run apt update to refresh the package lists and apt upgrade to install the latest updates.
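On my install that boiled down to the following (the URL is the standard Proxmox no-subscription repository for the stretch-based releases):

```shell
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-enterprise.list
apt update
apt upgrade
```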


Network setup

If you are using a server motherboard like me, you probably have more than one Ethernet interface to play with. My Supermicro board has 4, but unfortunately Proxmox seemed to default to a strange, unpredictable naming convention, switching the names around on each boot. This can be avoided by using predictable network interface names - a feature that should be the default, but for some reason wasn't for me. After adding net.ifnames=1 to the grub boot options AND updating the firmware to version 3.3, the interfaces now reliably get the same name on every boot.

When predictable network interface names are enabled, interfaces are named based on their physical characteristics, such as which PCI(e) slot or bus they are connected to, what type they are, and sometimes just what MAC address they have. These names can be harder to remember, but will never change and can be predicted. Without this feature enabled, network interfaces are simply labeled something like eth0 or wlan1, and the names can change if multiple interfaces of the same type are found.
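Enabling the kernel parameter goes through GRUB. A sketch of the steps; the exact flag plus the firmware update is what worked on my board:

```shell
# In /etc/default/grub, append net.ifnames=1 to the kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet net.ifnames=1"
update-grub   # regenerate /boot/grub/grub.cfg
reboot
```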

Since I have 3 spare network interfaces, I'll connect one of them to my (virtual) internal network behind the firewall VM, using a bridge (vmbr1) that the LAN port of the firewall will be attached to.

All VMs that are created will be attached to the vmbr1 bridge interface.

The physical server is connected on the physical port assigned the eno3 name. A bridge - vmbr0 - is defined in /etc/network/interfaces with Proxmox's external IP, bridged to that physical port.
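A minimal /etc/network/interfaces along these lines does the job - the addresses below are placeholders for my external IP and gateway, substitute your own:

```text
auto lo
iface lo inet loopback

iface eno3 inet manual

# External bridge carrying the Proxmox management IP
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10        # placeholder external IP
    netmask 255.255.255.0
    gateway 192.0.2.1         # placeholder gateway
    bridge_ports eno3
    bridge_stp off
    bridge_fd 0

# Inside network - no physical ports, VMs attach here
auto vmbr1
iface vmbr1 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
```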

Although the following can all be configured easily enough in /etc/network/interfaces, it is highly recommended to use the Proxmox WebUI at Datacenter->(Node)->System->Network.

Create the "Inside network" Linux Bridge interfaces (ports/slaves are not needed) and reboot Proxmox to apply the changes. Note the DMZ network that I have added for later use.



Firewall

Proxmox really isn't meant to be internet-facing, so either configure the built-in firewall, OR just enable ufw from the Debian repositories. Take care not to screw this up, as you'll be locked out.

apt install ufw
ufw allow 22/tcp
ufw allow 443/tcp
ufw allow 8006/tcp
ufw enable
systemctl start ufw

Also disable SSH password login (set up public-key login beforehand).
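In /etc/ssh/sshd_config that means the following - confirm key-based login works in a second session before restarting the daemon, or you risk locking yourself out:

```shell
# In /etc/ssh/sshd_config, set:
#   PasswordAuthentication no
#   ChallengeResponseAuthentication no
#   PermitRootLogin prohibit-password   # root still allowed, but only with a key
systemctl restart ssh   # the sshd service is named "ssh" on Debian
```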