In VMware Workstation we have a host-only network by default, which connects the VM and the host. We can also create an internal (virtual machine/guest OS only) network, which means the VMs cannot reach the internet or our real LAN, but VMs on the same virtual network can still talk to each other. This is ideal for special use cases such as testing environments.
In Proxmox VE (PVE) we only have a bridged network by default, which means virtual machines are always in the same LAN as the host.
We can easily create a VM-only internal network manually; here is how.
1 Log in to the PVE web gui
2 Navigate to Datacenter -> node name/cluster name -> System -> Network
3 Click on the "Create" button, then click on "Linux Bridge"
4 We only need to enter the desired IPv4/CIDR and Comment information.
IPv4/CIDR: Desired IP range for the VMs to use on the internal-only network
Comment: For ourselves to distinguish among different interfaces/networks
Gateway: Make sure to leave this field empty!
5 Click on the "Create" button to create the VM-only internal network
Note: If we see the line "Pending changes (Either reboot or use 'Apply Configuration' (needs ifupdown2) to activate)", try clicking on the "Apply Configuration" button first. If that doesn't work, log on to the Proxmox terminal directly, via the "Shell" button in the PVE web gui, or via SSH, install ifupdown2 with "apt install ifupdown2", then click on "Apply Configuration"
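Once the configuration is applied, the new bridge ends up in /etc/network/interfaces on the host. A minimal sketch of what the stanza typically looks like, assuming we named the bridge vmbr1 and entered 10.10.10.1/24 as the IPv4/CIDR:

# /etc/network/interfaces (excerpt) - internal-only bridge: no gateway, no physical bridge-ports
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0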
Make sure we assign the correct network interface (bridge) to each VM in order to utilize this VM-internal-only network
Make sure we assign an IP address manually on each VM within the internal network. Since there is no DHCP server, the VMs will not be able to talk to each other by magic; we need to assign static IP addresses for them, and make sure each one is within the "IPv4/CIDR" range which we configured on the "Linux Bridge" in step 4, as in the sketch below.
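For example, a minimal static configuration inside a Debian/Ubuntu guest could look like the following (the interface name ens19 and the address are assumptions; adjust them to your VM and to the range you picked):

# /etc/network/interfaces inside the guest - static address on the internal-only network, no gateway
auto ens19
iface ens19 inet static
        address 10.10.10.11/24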
1.1.2 Execute the downloaded binary file to install the UrBackup server
1.1.3 Open any browser on the server and navigate to "localhost:55414" or "127.0.0.1:55414"
Note 1: The web interface is available at port 55414. To restrict the access you have to create an admin account in Settings->Users. Without this account everyone can access all backups using the web interface.
Note 2: This web interface is accessible from other devices within the same LAN
1.2 For Linux (Ubuntu) Server
1.2.1 We can simply use the following commands to install it
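A hedged sketch of an Ubuntu install using the UrBackup PPA (the repository name is the one published on the UrBackup download page; verify the current instructions at urbackup.org before running):

# Add the UrBackup PPA and install the server package
sudo add-apt-repository ppa:uroni/urbackup
sudo apt update
sudo apt install urbackup-server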
Note 1: It has some dependencies, which you can automatically resolve by running apt-get -f install. If that does not work, you probably chose the wrong package from stable/testing/unstable.
Note 2: The web interface is available at port 55414. To restrict the access you have to create an admin account in Settings->Users. Without this account everyone can access all backups using the web interface. 
Note 3: This web interface is accessible from other devices within the same LAN
2 Install UrBackup Client on Proxmox VE host
2.1 Login to Proxmox VE terminal directly or via SSH or via web gui -> Shell
2.2 Execute the following command to install the UrBackup client
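The UrBackup download page provides a one-line installer script for the Linux client. The sketch below shows the pattern, but the URL and version number are only illustrative and change over time, so copy the current command from urbackup.org rather than this example:

# Download and run the UrBackup Linux client installer script (URL/version are illustrative)
TF=$(mktemp) && wget "https://hndl.urbackup.org/Client/2.5.25/UrBackup%20Client%20Linux%202.5.25.sh" -O $TF && sudo sh $TF; rm -f $TF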
Note: Unless you are sure you want to use snapshots, choose not to use them here, even if the installer says dattobd and LVM can be used: dattobd requires manual installation, and LVM snapshots require free space equal to the size of the data we are backing up.
3 Configure the UrBackup Server (and Proxmox host)
3.1 Before we start to configure the UrBackup server, we need to make sure the Proxmox VE host's firewall is disabled, or we will have to open the ports used by UrBackup, which are listed in the following tables
The Server binds to the following default ports:
55413: FastCGI for the web interface
55414: HTTP web interface
35623: UDP broadcasts for discovery
The Client binds to the following default ports (all incoming):
35621: Sending files during file backups (file server)
35622: UDP broadcasts for discovery
35623: Commands and image backups
3.2 Login to UrBackup web interface
3.3 Check whether the Proxmox VE host appears under the "Status" tab; if not, wait 30-60 minutes. Once the Proxmox VE host is displayed under the Status tab, proceed to the next step
3.4 Click on “Settings”
3.5 Click on “Clients”, then click on the name of Proxmox VE host
3.6 Check "Separate settings for this client" and make the necessary changes for "File Backups"; no need to worry about "Image Backups", since it is not supported on Linux. (Do not forget to click on the "Save" button)
Important: Make sure that for File Backups we exclude the ZFS pool if we are backing up the root "/"
e.g. Backing up the root file system “/”
Excluded files (with wildcards): /rpool/*;/mnt/proxmox-hostname/*;/tmp/*
Default directories to backup: /
Note: You probably want to exclude “/mnt/data/*” as well.
If we are not doing any modifications to system files, we can just back up the following folders
# Note: replace <proxmox-hostname> with your node name
# Virtual machine .conf, LXC .conf, SSL certificates, storage related configuration, system settings etc.
# Virtual machine .conf files
# LXC .conf files
# Mounting information
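On a standard Proxmox VE installation, the locations those comments refer to are typically the following (a hedged sketch; replace <proxmox-hostname> with your node name and adjust to your own setup):

/etc/pve                                        # SSL certificates, storage.cfg, system settings etc.
/etc/pve/nodes/<proxmox-hostname>/qemu-server   # virtual machine .conf files
/etc/pve/nodes/<proxmox-hostname>/lxc           # LXC .conf files
/etc/fstab                                      # mounting information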
3.7 Click on "Client" and configure a value for "Soft client quota" if you have very limited storage; if storage is very limited, you should probably modify the "File Backups" frequency and other numbers too.
3.8 If everything is set, we can now start to backup.
(Note: We can use the "urbackupclientctl status" command from the Proxmox VE host to check the UrBackup status)
To restore the Proxmox VE host, we simply perform the following steps
4.1 Install the same Proxmox VE version on the new hard drive
4.2 Install the UrBackup client again on the Proxmox VE host (make sure the hostname is the same), and confirm that the client connects to the server
Note: If we are restoring to different hardware, we need to make sure the hostname is the same and we need the tokens in "/usr/local/var/urbackup/tokens", or we can use the "Download client for Linux" button from the UrBackup web gui, in which case the tokens will be included
4.3 There are two ways to restore files and folders: one is through the server web gui, the other is via the client command "urbackupclientctl …."
If we are using the client command on the client, use "urbackupclientctl --help" to check the available options, "urbackupclientctl browse" to browse backups, and "urbackupclientctl restore-start" to restore
Guest scsi disks with the discard option enabled 
Note: While changing provisioning types and switching to the VirtIO SCSI driver are not easy with existing virtual machines, changing a VM SCSI disk's discard option is simple. That means if we have an existing VM that is using thin-provisioned backing storage and VirtIO SCSI but the "Discard" option is not enabled/checked, we can simply find that VM and check that option (or use the CLI sketch below), then we are good to follow the rest of this guide.
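As a hedged alternative to the web gui, the same option can be toggled from the Proxmox host shell with qm set; the VM ID 100, the disk slot scsi0 and the volume name below are placeholders, so check "qm config <vmid>" for the real values first:

# Re-specify the disk with discard=on (run "qm config 100" first and keep any other disk options you already use)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1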
When we are using qcow2 sparse virtual disks, we can reclaim free disk space that is not used by the virtual machine. How do we trigger the VM/guest operating system to reclaim it for us, though?
1 Login to Proxmox VE web gui
2 Find the VM we want to reclaim the unused disk space for and click on it
3 Click on Hardware
4 Double click on the virtual hard drive we want to reclaim unused space for
5 Make sure the “Discard” is checked
6 Start the VM
Once the VM is fully booted
6a For Linux/Debian/Ubuntu/Kali Linux/CentOS/RHEL/Fedora etc.
6a.1 We use the following command to reclaim the unused disk space from a terminal
sudo fstrim -av
Once it's done, we should be able to see the reclaimed disk space from the Proxmox VE host (only if there was unused space; if there is none, we will not see any change in the Proxmox VE host's disk usage)
6a.2 We can also enable automatic fstrim inside the VM, so we do not need to do it manually every time. Use the following command to enable this feature
sudo systemctl enable fstrim.timer
6b For Windows
Usually TRIM is enabled by default on Windows (Windows 7/2008 R2 and up), so we should not need to modify anything.
We can check whether TRIM is enabled by using the following command
fsutil behavior query DisableDeleteNotify
The output should be 0, otherwise, we can set it manually
fsutil behavior set DisableDeleteNotify 0
We can also trigger it manually, here is how.
First, we need to shutdown the Windows VM.
Then, from the Proxmox VE web gui, find the Windows VM, navigate to "Hardware", double click on the virtual hard drive that we want to reclaim unused space from, and make sure "Discard" and "SSD emulation" are both checked. Now start the Windows VM
Once Windows has booted, type "defrag" in the start menu to search for the "Defragment and Optimize Drives" program.
Click on it to launch it, then select the drive we want to reclaim unused space from and click on the "Optimize" button.
We have now manually reclaimed the unused space from the Windows VM
2 Find the storage with “ISO image” listed in “Content”
Note: If "ISO image" is not listed, that means we cannot upload ISOs to that storage location (usually we can create a folder on that storage, make sure "ISO image" is selected during directory creation, and then we can upload ISO files to that storage location)
3 Click on “Content”
4 Click on “Upload” button at the top
5 Make sure “ISO image” is selected for “Content”
6 Now we can upload ISO files to Proxmox VE
7 The rest is easy: click on the "Select File…" button, select the ISO file we want to upload, then click on the "Upload" button to begin uploading. Do not close the page until it's finished
8 Now we can start to use it to install an operating system.
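Alternatively, ISO files can simply be copied onto the host over SSH; a minimal sketch assuming the default "local" storage, whose ISO directory is /var/lib/vz/template/iso (the hostname and file name are placeholders):

# Copy an ISO from our workstation to the Proxmox host's default ISO directory
scp debian-12.iso root@proxmox-host:/var/lib/vz/template/iso/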
If we just create a ZFS pool from the Proxmox gui and then start to use it (especially with HDDs), e.g. writing large datasets continuously, then sooner or later (depending on the ZFS pool usage) we will find that write speeds drop to around 1-10 MB/s, which is extremely slow.
(Note: If we test the newly created pool with and without a ZIL/SLOG device, there probably won't be much difference, or it may even be slower with the ZIL/SLOG device attached; after we have data filled in the ZFS pool, the pool with a dedicated ZIL/SLOG device will perform better than the pool without one)
This can happen because the ZFS Intent Log (ZIL)/Separate ZFS Intent Log (SLOG) is written to the same ZFS pool where all our data is stored, which eventually causes a "double write" issue.
Fixing this is easy. The best way to fix it properly is to grab an SSD; in the worst case, if we do not have one at the moment, we can even grab a 5400RPM or 7200RPM HDD and use it via HBA/SATA/SCSI or even USB 3.0 (USB is not suitable for this purpose long term, but can work fine as a temporary solution/fix).
Once the disk is attached to the Proxmox host, note down the device name, e.g. /dev/sde, /dev/sdf etc.
Login to terminal from Proxmox host or via SSH or via Shell from web gui.
Use the following commands to add a dedicated HDD/SSD as the ZIL/SLOG device
# For single HDD/SSD
zpool add -f [pool name] log [device name]
zpool add -f rpool log /dev/sdd
# For mirrored ZIL/SLOG
zpool add -f [pool name] log mirror [device 1 name] [device 2 name]
zpool add -f rpool log mirror /dev/sdd /dev/sde
Now if we have a look at the write performance, it will have increased
Note: The best ZIL/SLOG device is a datacenter-grade SSD attached via an HBA, so that we get the best performance
To check the pool status and see whether the ZIL/SLOG device has been added, use the following commands
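# Show the pool layout; the ZIL/SLOG device appears under a separate "logs" section
zpool status rpool
# Show per-device I/O statistics, including the log device
zpool iostat -v rpool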
Read/Write buffer/cache for ZFS pool
With a ZFS pool:
SLOG is used to cache synchronous ZIL data (writes) before flushing to disk (write-performance related)
Adaptive Replacement Cache (ARC) and second-level Adaptive Replacement Cache (L2ARC) are used to cache reads (read-performance related)
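For completeness, a read cache (L2ARC) device can be added to a pool in much the same way as the log device; a minimal sketch, assuming /dev/sde is a spare SSD we can dedicate to it:

# Add an L2ARC (read cache) device to the pool
zpool add -f rpool cache /dev/sde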
How big should the ZIL/SLOG device/HDD/SSD be?
Usually the ZIL is flushed every 5 seconds by default, or when it reaches capacity, which means a SLOG that can hold 5-10 seconds' worth of the pool's maximum throughput will be fine, unless we are doing something extreme/special.
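As a rough, hedged sizing example (assuming a 10 Gbit/s network is the fastest path data can reach the pool):

# 10 Gbit/s ≈ 1.25 GB/s maximum ingest
# 1.25 GB/s x 10 seconds = 12.5 GB
# so a 16 GB SLOG device/partition would already be plenty in this scenario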
How to Remove/Delete/Replace ZIL/SLOG device/HDD/SSD?
# To remove ZIL/SLOG device
zpool remove [pool name] [device name]
zpool remove rpool /dev/sdd
zpool remove rpool sdd
# By using real device name
zpool remove rpool ata-xxxx_xxxx_Xxxx_x.....
To replace the ZIL/SLOG device, simply remove the current HDD/SSD, then add the new disk again, as sketched below.
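A minimal sketch of that sequence, assuming the old log device is /dev/sdd and the new one is /dev/sde:

# Remove the old log device, then attach the new one
zpool remove rpool /dev/sdd
zpool add -f rpool log /dev/sde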