How to: Change IP address for Proxmox VE (PVE)

1 Log in to the Proxmox VE web gui

2 Navigate to Datacenter -> node name/cluster name -> System -> Network

3 Find the interface with the IP address we are currently connected to, e.g. eth0, vmbr0, etc.

4 Change the IP address, e.g. from 10.0.0.100 to 10.0.0.200

5 Log in to the terminal, via SSH, via web gui -> Shell, or directly from the host

6 Use the following command to update the IP address in the hosts file

nano /etc/hosts

Change the IP address from 10.0.0.100 to 10.0.0.200 if it has not been updated already.
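
For reference, the relevant line in /etc/hosts should end up looking something like this (a minimal sketch; the hostname pve and the domain are examples, keep whatever names the file already uses):

10.0.0.200 pve.localdomain pve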

Press Ctrl + X, then Y, then Enter to save the file and exit nano

7 Restart the Proxmox VE host from the web gui or using the following command

reboot

8 If you are using a self-signed SSL/TLS certificate, use the following command from the terminal to regenerate the certificate so that it includes the new IP address

pvecm updatecerts --force

9 Now we can navigate to Datacenter -> node name/cluster name -> System -> Certificates

Check if the new IP address is shown under “Subject Alternative Names”
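
We can also verify from the terminal (a quick sketch; /etc/pve/local/pve-ssl.pem is the default node certificate path):

openssl x509 -in /etc/pve/local/pve-ssl.pem -noout -text | grep -A1 "Subject Alternative Name"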


How to: Create an internal only/isolated network for guest OS/Virtual Machines (VM) on Proxmox VE (PVE) like in VMware Workstation (Host-Only network but different)

The Issue

In VMware Workstation we have a host-only network by default, which connects the VMs and the host. We can also create an internal/VM-only/guest-OS-only network, meaning VMs cannot reach the internet or our real LAN, but VMs on the same virtual network can still talk to each other, which is ideal for special use cases like testing environments.

In Proxmox VE (PVE) we only have a bridged network by default, which means virtual machines are always in the same LAN as the host.

We can easily create a VM-only internal network manually; here is how.

The Fix

1 Log in to the PVE web gui

2 Navigate to Datacenter -> node name/cluster name -> System -> Network

3 Click on the “Create” button, then click on “Linux Bridge”

Proxmox VE – Create Linux Bridge

4 We only need to enter the desired IPv4/CIDR and Comment information.

IPv4/CIDR: Desired IP range for VMs to use for internal only network
Comment: For ourselves to distinguish among different interfaces/networks
Gateway: Make sure to leave this field empty!

Proxmox VE – Create VM Internal Only network

5 Click on the “Create” button to create the VM-only internal network

Note: If we see the line “Pending changes (Either reboot or use ‘Apply Configuration’ (needs ifupdown2) to activate)”, try clicking on the “Apply Configuration” button first. If that doesn’t work, log on to the Proxmox terminal directly, via the “Shell” button from the PVE web gui, or via SSH, install ifupdown2 with “apt install ifupdown2”, then click on “Apply Configuration”
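
For reference, the resulting entry in /etc/network/interfaces should look roughly like this (a sketch; the bridge name vmbr1 and the 192.168.100.1/24 range are assumptions for this example):

auto vmbr1
iface vmbr1 inet static
        address 192.168.100.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0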

Bonus

  • Make sure we assign the correct network interface (the new bridge) to VMs in order to utilize this VM-only internal network
  • Make sure we assign IP addresses manually on each VM within the internal network. Since there is no DHCP server, VMs will not be able to talk to each other by magic; we need to assign static IP addresses for them and make sure they are within the “IPv4/CIDR” range which we configured on the “Linux Bridge” in step 4 (see the sketch below)
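
A minimal static IP configuration on a Debian/Ubuntu guest might look like this (a sketch only; the interface name ens18 and the 192.168.100.x addresses are assumptions, adjust them to the guest and to the range chosen in step 4):

# /etc/network/interfaces on the guest VM
auto ens18
iface ens18 inet static
        address 192.168.100.10/24
        # No gateway: this network is internal only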

How to: Create LXC container on Proxmox VE (PVE)

1 Log in to the Proxmox VE terminal via the web gui, SSH, or from the host directly

2 Use the following command to update the template list

pveam update

3 When we see the message “update successful” returned from the last command, open the Proxmox web gui

4 Navigate to the storage which we want to use to store templates

5 Click on “Templates” button

6 Select the LXC container template we want to download and click on the “Download” button (e.g. TurnKey WordPress)
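
Alternatively, templates can be listed and downloaded from the terminal (a sketch; the storage name “local” and the template file name are examples, use the output of “pveam available” for the real names):

# List templates available for download
pveam available
# Download a template to the storage named "local"
pveam download local debian-10-standard_10.7-1_amd64.tar.gz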

7 Once the download is finished, click on the “Create CT” button in the Proxmox VE web gui

8 The rest is very similar to creating a virtual machine: assign disk space, CPU, RAM, etc.


How to: Backup and Restore Proxmox VE host

Warning: This guide is for backing up the Proxmox VE host, not virtual machines; Proxmox VE has a built-in VM backup mechanism which can be configured easily via its web gui.

There are many ways to back up a Proxmox VE host, e.g. using xfsdump or various third-party backup programs like rsync.

For this guide we will be using UrBackup

Pre-requirements

  • A working Proxmox VE host
  • A spare server for running as UrBackup server with big enough storage
  • The Proxmox VE host and the UrBackup server are connected and on the same LAN

1 Install UrBackup Server

The server can be Windows or Linux (Debian/Ubuntu etc.)

1.1 For Windows Server

1.1.1 Download the exe or msi file from here: UrBackup Windows Server [1]

1.1.2 Execute the downloaded binary file, install the UrBackup server

1.1.3 Open any browser on the server and navigate to “localhost:55414” or “127.0.0.1:55414”

Note 1: The web interface is available at port 55414. To restrict the access you have to create an admin account in Settings->Users. Without this account everyone can access all backups using the web interface.

Note 2: This web interface is accessible from other devices within the same LAN

1.2 For Linux (Ubuntu) Server

1.2.1 We can simply use the following commands to install it

sudo add-apt-repository ppa:uroni/urbackup
sudo apt update
sudo apt install urbackup-server

or

wget https://hndl.urbackup.org/Server/2.4.12/urbackup-server_2.4.12_amd64.deb
sudo dpkg -i urbackup-server_2.4.12_amd64.deb
sudo apt install -f

Note 1: It has some dependencies which you can automatically resolve by running apt-get -f install. If it does not work you probably chose the wrong packet from stable/testing/unstable. [2]

Note 2: The web interface is available at port 55414. To restrict the access you have to create an admin account in Settings->Users. Without this account everyone can access all backups using the web interface. [2]

Note 3: This web interface is accessible from other devices within the same LAN
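
To confirm the server is running after installation, check the service status (a quick sketch; the unit name urbackupsrv is what the Debian/Ubuntu package typically uses, adjust if yours differs):

sudo systemctl status urbackupsrv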

2 Install UrBackup Client on Proxmox VE host

2.1 Log in to the Proxmox VE terminal directly, via SSH, or via web gui -> Shell

2.2 Execute the following command to install the UrBackup client

TF=$(mktemp) && wget "https://hndl.urbackup.org/Client/2.4.10/UrBackup%20Client%20Linux%202.4.10.sh" -O $TF && sh $TF; rm -f $TF

Note: Unless you are sure you want to use snapshots, choose not to use a snapshot mechanism here, even if the installer says dattobd and LVM can be used: dattobd requires manual installation, and LVM snapshots require free space equal to the size of the data we are backing up.

3 Configure the UrBackup Server (and Proxmox host)

3.1 Before we start to configure the UrBackup server, we need to make sure the Proxmox VE host’s firewall is disabled, or we will have to open the ports for UrBackup, which are listed in the following tables

The server binds to the following default ports:

Port    Usage                           Incoming/Outgoing   Protocol
55413   FastCGI for web interface       Incoming            TCP
55414   HTTP web interface              Incoming            TCP
55415   Internet clients                Incoming            TCP
35623   UDP broadcasts for discovery    Outgoing            UDP

[3]

The client binds to the following default ports (all incoming):

Port    Usage                                             Protocol
35621   Sending files during file backups (file server)   TCP
35622   UDP broadcasts for discovery                      UDP
35623   Commands and image backups                        TCP

[3]

3.2 Log in to the UrBackup web interface

3.3 Check if the Proxmox VE host appears under the “Status” tab; if not, wait 30 – 60 minutes. Once the Proxmox VE host is displayed under the Status tab, proceed to the next step

3.4 Click on “Settings”

3.5 Click on “Clients”, then click on the name of the Proxmox VE host

3.6 Check “Separate settings for this client” and make the necessary changes for “File Backups”; there is no need to worry about “Image Backups” since image backup is not supported on Linux. (Do not forget to click on the “Save” button)

Important: For File Backups, make sure we exclude the ZFS pool if we are backing up the root “/”

e.g. Backing up the root file system “/”

Excluded files (with wildcards): /rpool/*;/mnt/proxmox-hostname/*;/tmp/*
Default directories to backup: /

Note: You probably want to exclude “/mnt/data/*” as well.

If we are not making any modifications to system files, we can just back up the following folders

/etc

What’s inside

# Note: replace <proxmox-hostname> with your node name
 
# Virtual machine .conf, LXC .conf, SSL certificates, storage related configuration, system settings etc.
/etc/pve/nodes/<proxmox-hostname>/
  
# Virtual machine .conf files
/etc/pve/nodes/<proxmox-hostname>/qemu-server/
 
# LXC .conf files
/etc/pve/nodes/<proxmox-hostname>/lxc/
 
# Mounting information
/etc/fstab
 
/root
 
/usr/local
 
/var

3.7 Click on “Client” and configure a value for “Soft client quota” if you have very limited storage; you should probably modify the “File Backup” frequency and other numbers too if storage is very limited.

3.8 If everything is set, we can now start to backup.

(Note: We can use the “urbackupclientctl status” command from the Proxmox VE host to check UrBackup status)

4 Restore

To restore the Proxmox VE host, we simply perform the following steps

4.1 Install the same Proxmox VE version on the new hard drive

4.2 Install the UrBackup client again on the Proxmox VE host (make sure the hostname is the same), and make sure the client connects to the server

Note: If we are restoring to different hardware, we need to make sure the hostname is the same and we need the tokens in “/usr/local/var/urbackup/tokens” [4], or we can use the “Download client for Linux” button from the UrBackup web gui, in which case the token will be included

4.3 There are two ways to restore files and folders: one is through the server web gui, the other is via the client command “urbackupclientctl …”

  • If we are using the client command on the client, use “urbackupclientctl --help” to check available options, “urbackupclientctl browse” to browse backups, and “urbackupclientctl restore-start” to restore

References

[1] “UrBackup – Download UrBackup for Windows, GNU/Linux or FreeBSD”, Urbackup.org, 2020. [Online]. Available: https://www.urbackup.org/download.html

[2] “UrBackup – Install UrBackup on Debian/Ubuntu”, Urbackup.org, 2020. [Online]. Available: https://www.urbackup.org/debianserverinstall.html

[3] “UrBackup – Server administration manual”, Urbackup.org, 2020. [Online]. Available: https://www.urbackup.org/administration_manual.html#x1-9000010.3

[4] “Restoring files to a different VM / PC”, UrBackup – Discourse, 2016. [Online]. Available: https://forums.urbackup.org/t/restoring-files-to-a-different-vm-pc/2221


How to: Shrink/Reclaim free virtual disk space from Virtual Machines on Proxmox VE (PVE) (Windows/Linux/Debian/Ubuntu/Kali Linux/RHEL/CentOS/Fedora etc.)

Pre-requirements

The following method only works for virtual machines (VMs) that satisfy these pre-requirements:

  • Thin-provisioned backing storage (qcow2 disk, thin-lvm, zfs, …)
  • Virtio-SCSI controller configured on guest.
  • Guest scsi disks with the discard option enabled [1]

Note: While changing the provisioning type or the Virtio-SCSI driver is not easy with existing virtual machines, changing a VM SCSI disk’s discard option is simple. That means if we have an existing VM that is using thin-provisioned backing storage and Virtio-SCSI but the “Discard” option is not enabled/checked, we can simply find that VM and check that option, then follow the rest of this guide.

The Issue

When we are using sparse (thin-provisioned) virtual disks such as qcow2, we can reclaim free disk space which is not used by the virtual machine. But how do we trigger the VM/guest operating system to reclaim it for us?

The Fix

1 Login to Proxmox VE web gui

2 Find the VM we want to reclaim the unused disk space for and click on it

3 Click on Hardware

4 Double click on the virtual hard’s virtual hard drive we want to reclaim unused space for

5 Make sure the “Discard” is checked

Proxmox VE – Discard option

6 Start the VM

Once the VM is fully booted

6a For Linux/Debian/Ubuntu/Kali Linux/CentOS/RHEL/Fedora etc.

6a.1 Use the following command to reclaim the unused disk space from a terminal

sudo fstrim -av

Once it’s done, we should be able to see the reclaimed disk space from the Proxmox VE host (only if there was unused space; if there was none, we will not see any change in the Proxmox VE host’s disk usage)

6a.2 We can also enable automatic fstrim in the VM, so we do not need to do it manually every time. Use the following command to enable this feature

sudo systemctl enable fstrim.timer
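
We can confirm the timer is active with a quick check:

sudo systemctl status fstrim.timer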

6b For Windows

Usually TRIM is enabled by default on Windows (Windows 7/2008 R2 and up), so we should not need to modify anything.

We can check whether TRIM is enabled by using the following command

fsutil behavior query DisableDeleteNotify

The output should be 0; otherwise, we can set it manually

fsutil behavior set DisableDeleteNotify 0

We can also trigger it manually, here is how.

First, we need to shutdown the Windows VM.

Then, from the Proxmox VE web gui, find the Windows VM, navigate to “Hardware”, double click on the virtual hard drive that we want to reclaim unused space from, and make sure “Discard” and “SSD emulation” are both checked. Now start the Windows VM.

Proxmox VE – Discard and SSD emulation checked

When Windows has booted, type “defrag” in the Start menu to search for the “Defragment and Optimize Drives” program.

Windows 10 – Defragment and Optimize Drives

Click on it to launch it, then select the drive we want to reclaim unused space from and click on the “Optimize” button.

We have now manually reclaimed unused space from the Windows VM.
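
Alternatively, the same retrim can be triggered from an elevated PowerShell prompt (a sketch; the drive letter C is an assumption, change it to the drive we want to trim):

Optimize-Volume -DriveLetter C -ReTrim -Verbose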

References

[1] “Shrink Qcow2 Disk Files – Proxmox VE”, Pve.proxmox.com, 2019. [Online]. Available: https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files


How to: Upload ISO files to Proxmox VE (PVE)

So that we can install operating systems 😉

1 Log in to the Proxmox VE web gui

2 Find the storage with “ISO image” listed in “Content”

Proxmox VE Storage View
Proxmox VE Storage View – Summary – Content – ISO Image

Note: If “ISO image” is not listed, that means we cannot upload ISOs to that storage location (usually we can create a directory on that storage; make sure “ISO image” is selected during Directory creation, then we can upload ISO files to that storage location)

3 Click on “Content”

4 Click on “Upload” button at the top

5 Make sure “ISO image” is selected for “Content”

6 Now we can upload ISO files to Proxmox VE

7 The rest is easy: click on the “Select File…” button, select the ISO file we want to upload, and click on the “Upload” button to begin uploading. Do not close the page until it’s finished.

8 Now we can start using it to install operating systems.
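
Alternatively, ISO files can be copied to the host over SSH (a sketch, assuming the default “local” storage, whose ISO directory is /var/lib/vz/template/iso/; the file name and host IP are examples):

scp debian-10.6.0-amd64-netinst.iso root@10.0.0.100:/var/lib/vz/template/iso/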


How to: Fix Proxmox VE 6.2.1 Installation error: unknown file systems

The Error

error: unknown file systems
error: unknown file systems
error: unknown file systems
error: unknown file systems

Entering rescue mode…

grub rescue>

Usually this appears when we boot the Proxmox installation USB on a PC/desktop that uses UEFI and the USB was created with Rufus on Windows.

The Fix

1 Download balenaEtcher (Alternative to Rufus)

2 Use balenaEtcher to load the Proxmox VE ISO, make sure only the desired USB flash drive is selected, then create the bootable media.

3 Use the Proxmox USB the same way as we did last time

4 It should load the Proxmox installer correctly
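
If another Linux machine is available, writing the ISO with dd also produces a bootable UEFI stick (a sketch; the ISO file name is an example and /dev/sdX is a placeholder for the USB device, double check it because dd will overwrite that device):

dd if=proxmox-ve_6.2-1.iso of=/dev/sdX bs=1M status=progress
sync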


How to: Fix Proxmox VE/ZFS Pool extremely slow write performance issue

The Issue

This tends to happen if we just create the ZFS pool from the Proxmox gui and then start to use it (especially with HDDs).

e.g. We write large datasets continuously.

Sooner or later (depending on the ZFS pool usage), we will find that writes are around 1-10 MB/s, which is extremely slow.

(Note: If we test a newly created pool with and without a ZIL/SLOG device, there probably won’t be much difference, or it may even be slower with the ZIL/SLOG device attached. Once the ZFS pool has data in it, the pool with a dedicated ZIL/SLOG device will perform better than the pool without one.)

The Fix

This can happen because the ZFS Intent Log (ZIL) is being written to the same ZFS data pool where all our data is stored (there is no Separate ZFS Intent Log, SLOG), which effectively causes a “double write” issue.

Fixing this issue is easy. The best way to fix it properly is to grab an SSD; worst case, if we do not have one for now, we can even grab a 5400 RPM or 7200 RPM HDD and use it via HBA/SATA/SCSI or even USB 3.0 (USB is not suitable for this purpose long term, but can work fine as a temporary fix).

Once the disk is attached to the Proxmox host, note down the device name, e.g. /dev/sde, /dev/sdf, etc.

Log in to the terminal from the Proxmox host, via SSH, or via Shell from the web gui.

Use the following commands to add a dedicated HDD/SSD as the ZIL/SLOG device

# For single HDD/SSD
zpool add -f [pool name] log [device name]
# e.g.
zpool add -f rpool log /dev/sdd
 
# For mirrored ZIL/SLOG
zpool add -f [pool name] log mirror [device 1 name] [device 2 name]
# e.g.
zpool add -f rpool log mirror /dev/sdd /dev/sde

Now if we have a look at the write performance, it will have improved.

Note: The best device for ZIL/SLOG is a datacenter-grade SSD attached via HBA, so that we get the best performance

To check the pool status and see whether the ZIL/SLOG device has been added, use the following command

zpool status

Bonus

Read/Write buffer/cache for ZFS pool

With ZFS pool

  • SLOG is used to cache synchronous ZIL data (Write) before flushing to disk (Write performance related)
  • Adaptive Replacement Cache (ARC) and Second level adaptive replacement cache (L2ARC) are used to cache reads (Read performance related)

How big should the ZIL/SLOG device/HDD/SSD be?

Usually the ZIL flushes every 5 seconds by default, or when it reaches capacity, which means a SLOG that holds 5-10 seconds’ worth of the pool’s maximum throughput will be fine, unless we are doing something extreme/special.
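
For example, a pool that is fed over a single 10 Gbit/s link can ingest at most about 1.25 GB/s, so 5-10 seconds of that is roughly 6-12 GB; even a small SSD is more than large enough for the SLOG in that case.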

How to Remove/Delete/Replace ZIL/SLOG device/HDD/SSD?

# To remove the ZIL/SLOG device
zpool remove [pool name] [device name]
# e.g.
zpool remove rpool /dev/sdd
# or
zpool remove rpool sdd
# or, by using the real device name
zpool remove rpool ata-xxxx_xxxx_Xxxx_x.....

To replace, simply remove the current HDDs/SSDs, then add the new disks again.

How to find out real device name from Proxmox?

Refer to this guide: How to: Find drive name (real name) for /dev/sdb /dev/sdc from Proxmox (PVE)
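
As a quick sketch, the persistent device names can also be listed directly from the terminal:

ls -l /dev/disk/by-id/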

How to add read cache disks?

It is very similar to adding ZIL/SLOG drives

zpool add -f [pool name] cache [device name]
# e.g.
zpool add -f rpool cache /dev/sde
 
# Remove cache disk
zpool remove [pool name] [device name]
zpool remove rpool /dev/sde
# or
zpool remove rpool sde

If the above remove command does not work, try removing the ZIL/SLOG device first.


How to: Fix “CT is locked (rollback)” on Proxmox VE (PVE) (How to: Unlock Container/CT on Proxmox VE)

The Error

CT is locked (rollback)

The Fix

1 Log in to the Proxmox VE terminal directly, via SSH, or via the Proxmox web gui

2 Execute the following command

pct unlock <containerID>

e.g.

pct unlock 100

3 Now we should be able to do what we wanted to do previously without the error


pct command help

(pct – Command line Tool to manage Linux Containers (LXC) on Proxmox VE [1])

USAGE: pct <COMMAND> [ARGS] [OPTIONS]
       pct clone <vmid> <newid> [OPTIONS]
       pct create <vmid> <ostemplate> [OPTIONS]
       pct destroy <vmid> [OPTIONS]
       pct list
       pct migrate <vmid> <target> [OPTIONS]
       pct move_volume <vmid> <volume> <storage> [OPTIONS]
       pct pending <vmid>
       pct resize <vmid> <disk> <size> [OPTIONS]
       pct restore <vmid> <ostemplate> [OPTIONS]
       pct template <vmid>
       pct config <vmid> [OPTIONS]
       pct set <vmid> [OPTIONS]
       pct delsnapshot <vmid> <snapname> [OPTIONS]
       pct listsnapshot <vmid>
       pct rollback <vmid> <snapname>
       pct snapshot <vmid> <snapname> [OPTIONS]
       pct reboot <vmid> [OPTIONS]
       pct resume <vmid>
       pct shutdown <vmid> [OPTIONS]
       pct start <vmid> [OPTIONS]
       pct stop <vmid> [OPTIONS]
       pct suspend <vmid>
       pct console <vmid> [OPTIONS]
       pct cpusets
       pct df <vmid>
       pct enter <vmid>
       pct exec <vmid> [<extra-args>]
       pct fsck <vmid> [OPTIONS]
       pct fstrim <vmid>
       pct mount <vmid>
       pct pull <vmid> <path> <destination> [OPTIONS]
       pct push <vmid> <file> <destination> [OPTIONS]
       pct rescan  [OPTIONS]
       pct status <vmid> [OPTIONS]
       pct unlock <vmid>
       pct unmount <vmid>
       pct help [<extra-args>] [OPTIONS]

References

[1] “pct(1)”, Pve.proxmox.com, 2020. [Online]. Available: https://pve.proxmox.com/pve-docs/pct.1.html