How to: Create an internal-only/isolated network for guest OS/Virtual Machines (VM) on Proxmox VE (PVE), similar to (but not the same as) the Host-Only network in VMware Workstation

The Issue

In VMware Workstation, by default, we have a host-only network which connects the VMs and the host. We can also create an internal/VM-only/guest-OS-only network, in which VMs cannot reach the internet or our real LAN, but VMs on the same virtual network can still talk to each other. This is ideal for special use cases such as testing environments.

In Proxmox VE (PVE) we only have a bridged network by default, which means virtual machines sit in the same LAN as the host.

We can easily create a VM-only internal network manually; here is how.

The Fix

1 Login to PVE web gui

2 Navigate to Datacenter -> node name/cluster name -> System -> Network

3 Click on the “Create” button, then click on “Linux Bridge”

Proxmox VE – Create Linux Bridge

4 We only need to enter the desired IPv4/CIDR and Comment information.

IPv4/CIDR: Desired IP range for VMs to use for internal only network
Comment: For ourselves to distinguish among different interfaces/networks
Gateway: Make sure to leave this field empty!

Proxmox VE – Create VM Internal Only network

5 Click on the “Create” button to create the VM-only internal network

Note: If we see the line “Pending changes (Either reboot or use ‘Apply Configuration’ (needs ifupdown2) to activate)”, try clicking the “Apply Configuration” button first. If that does not work, log on to the Proxmox terminal directly, via the “Shell” button in the PVE web gui, or via SSH, install ifupdown2 with “apt install ifupdown2”, then click “Apply Configuration” again.
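Equivalently, the isolated bridge can be defined by hand in /etc/network/interfaces; a minimal sketch, assuming a hypothetical bridge name vmbr1 and internal range 10.10.10.0/24:

```shell
# /etc/network/interfaces fragment - bridge name and subnet are examples
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none    # no physical NIC attached, so the network is isolated
        bridge-stp off
        bridge-fd 0
# Note: no "gateway" line - that is what keeps this network internal only
```

After saving, click “Apply Configuration” (or run “ifreload -a” with ifupdown2) to bring the bridge up.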


  • Make sure we assign the correct network interface (the new bridge) to each VM that should use this internal-only network
  • Make sure we assign an IP address manually on each VM within the internal network. Since there is no DHCP server, the VMs will not be able to talk to each other by magic; we need to give them static IP addresses, each within the “IPv4/CIDR” range which we configured on the “Linux Bridge” in step 4
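For example, on a Debian/Ubuntu guest the static address could look like this (the interface name ens19 and the addresses are assumptions; adjust them to your own VM and to the range from step 4):

```shell
# Guest-side /etc/network/interfaces fragment - names/addresses are examples
auto ens19
iface ens19 inet static
        address 10.10.10.11/24
# No gateway on purpose - this network has no route out by design
```

Give each VM a different address in the same range (10.10.10.12, 10.10.10.13, …) and verify connectivity with ping.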

How to: Shrink/Reclaim free virtual disk space from Virtual Machines on Proxmox VE (PVE) (Windows/Linux/Debian/Ubuntu/Kali Linux/RHEL/CentOS/Fedora etc.)


The following method only works for virtual machines (VM) that satisfy these prerequisites:

  • Thin-provisioned backing storage (qcow2 disk, thin-lvm, zfs, …)
  • Virtio-SCSI controller configured on guest.
  • Guest scsi disks with the discard option enabled [1]

Note: While changing the provisioning type or the Virtio-SCSI driver is not easy on an existing virtual machine, changing a VM scsi disk’s discard option is simple. That means, if we have an existing VM that already uses thin-provisioned backing storage and Virtio-SCSI but the “discard” option is not enabled/checked, we can simply find that VM and check that option, then we are good to follow the rest of this guide.

The Issue

When we use qcow2 sparse virtual disks, we can reclaim disk space that is no longer used by the virtual machine. But how do we trigger the VM/guest operating system to reclaim it for us?

The Fix

1 Login to Proxmox VE web gui

2 Find the VM we want to reclaim the unused disk space for and click on it

3 Click on Hardware

4 Double click on the virtual hard drive we want to reclaim unused space for

5 Make sure the “Discard” is checked

Proxmox VE – Discard option
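The same option can be set from the PVE host shell instead of the web gui; a sketch, assuming VM ID 100 and a disk scsi0 on storage local-lvm (check the real values with qm config first):

```shell
# Show the current definition of the disk (VM ID 100 is an example)
qm config 100 | grep scsi0
# Re-set the same disk definition with discard=on appended
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
```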

6 Start the VM

Once the VM is fully booted

6a For Linux/Debian/Ubuntu/Kali Linux/CentOS/RHEL/Fedora etc.

6a.1 Use the following command in a terminal to reclaim the unused disk space

sudo fstrim -av

Once it’s done, we should see the reclaimed disk space on the Proxmox VE host (only if there was unused space; if there was none, we will not see any change in the Proxmox VE host’s disk usage)

6a.2 We can also enable the automatic fstrim timer in the VM, so we do not need to do this manually every time. Use the following command to enable it

sudo systemctl enable fstrim.timer
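To make sure the timer is really scheduled, a quick check on a systemd-based guest:

```shell
# Enable and start the timer in one go, then show its next scheduled run
sudo systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer
```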

6b For Windows

Usually TRIM is enabled by default on Windows (Windows 7/Server 2008 R2 and up), so we should not need to change anything.

We can check whether TRIM is enabled by using the following command

fsutil behavior query DisableDeleteNotify

The output should be 0; otherwise, we can set it manually

fsutil behavior set DisableDeleteNotify 0

We can also trigger it manually; here is how.

First, we need to shutdown the Windows VM.

Then, from the Proxmox VE web gui, find the Windows VM, navigate to “Hardware”, double click on the virtual hard drive that we want to reclaim unused space from, and make sure “Discard” and “SSD emulation” are both checked. Now start the Windows VM.

Proxmox VE – Discard and SSD emulation checked

Once Windows has booted, type “defrag” in the start menu to search for the “Defragment and Optimize Drives” program.

Windows 10 – Defragment and Optimize Drives

Click on it to launch it, select the drive we want to reclaim unused space from, then click on the “Optimize” button.

We have now manually reclaimed unused space from the Windows VM.


[1] “Shrink Qcow2 Disk Files – Proxmox VE”, 2019. [Online]. Available:

How to: Add/Remove/Delete independent/dedicate Intent Log Device (ZIL)/Separate Intent Log (SLOG) drive for ZFS (Proxmox (PVE))

1 Login to the Proxmox terminal via direct login, the web gui “Shell” button, or SSH

Adding drive as ZIL/SLOG device

zpool add -f poolName log deviceName
# e.g.
zpool add -f rpool log /dev/sdb

(To add mirrored ZIL/SLOG drives)

zpool add -f rpool log mirror /dev/sdb /dev/sdc

-f: Without the -f flag, we will get a warning about a missing EFI label

Use following command to check ZIL/SLOG status

zpool status

Note: If the ZIL/SLOG device fails, we may lose a few seconds’ worth of writes, but the file system will continue to function without data corruption.

Remove/Delete ZIL/SLOG drive

zpool remove poolName deviceName
# e.g.
zpool remove rpool /dev/sdb
# By using the drive's real name rather than /dev/sdb
zpool remove rpool ata-HGST_HTSxxxxxxx_RCFxxxxxxxxx-part9

For finding real drive names, see: How to: Find drive name (real name) for /dev/sdb /dev/sdc from Proxmox (PVE)

How to: Delete/Remove local-lvm from Proxmox VE (PVE) (and Some LVM basics, commands)

1 Delete/Remove local-lvm

Before we start, log in to the PVE web gui and remove local-lvm from Datacenter -> Storage: select local-lvm, then click on the “Remove” button

1.1 Login to pve via SSH

1.2 Unmount and delete the lvm-thin volume

umount /dev/pve/data
lvremove /dev/pve/data

Confirm the deletion when prompted

1.3 Check free space

vgdisplay pve | grep Free
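If we want to grab the free-extent number programmatically instead of reading it off the screen, a small sketch (the sample line below is illustrative, not from a real system; on a real host, pipe the output of vgdisplay pve instead):

```shell
# Parse the number of free Physical Extents out of a vgdisplay "Free PE / Size" line.
sample='  Free  PE / Size       92482 / 361.26 GiB'
free_pe=$(printf '%s\n' "$sample" | awk '/Free/ {print $5}')
echo "$free_pe"
```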

1.4 Create new lvm

Note: Replace 92482 with the number you got from step 1.3, which is the available free space (in extents)

lvcreate -l 92482 -n data pve

1.5 Format and mount

mkfs.ext4 /dev/pve/data
mkdir /mnt/data
mount /dev/pve/data /mnt/data

Note: Use “mkfs.ext4” for ext4 format, “mkfs.xfs -f” to use xfs format

1.6 Modify fstab so that it is mounted on boot: add the following line to the end of the file

nano /etc/fstab
/dev/pve/data /mnt/data ext4 defaults 0 0

Note: If you have used xfs, replace ext4 with xfs
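Before rebooting, it is worth confirming that the fstab entry mounts cleanly:

```shell
# Mount everything in fstab; errors here mean a typo in the new line
mount -a
# Confirm the new file system shows up
df -h /mnt/data
```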

2 Use it in Proxmox

If you want to use it from PVE with ease, here is how

2.1 Login to Proxmox web gui

2.2 Navigate to Datacenter -> Storage, click on “Add” button

2.3 Click on “Directory”

2.4 Give it a name (ID field), set Directory to “/mnt/data”, and select the Content you want to put in the directory; you can just select all of them.

2.5 Click on “Add” button

2.6 Now you can use the folder on that LVM volume easily within Proxmox

3 To revert to lvm-thin

3.1 Unmount

umount /mnt/data

3.2 Remove partition

lvremove /dev/pve/data

3.3 Create a data partition of one extent in size, to prevent an error in the next step

lvcreate -l 1 -n data pve

3.4 Convert to thin-pool

lvconvert --type thin-pool pve/data

3.5 Merge the rest of the free space into the partition

lvextend -l +99%FREE pve/data

3.6 Remove mount line from /etc/fstab file

nano /etc/fstab
# Remove following line, if there is one
/dev/pve/data /mnt/data ext4 defaults 0 0
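To verify the revert worked (commands assume the default pve volume group):

```shell
# The data LV should show thin-pool attributes again (attrs starting with "t")
lvs pve
# Proxmox should list its storages, including local-lvm, as active
pvesm status
```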


LVM: Logical Volume Manager

LVM Usage: Create logical partitions, so that they can be resized easily

PV: Physical Volume, similar to partition

PE: Physical Extent, physical blocks, similar to a block (multiple contiguous blocks)

LV: Logical Volume, what file system sees as “partition”

LE: Logical Extent, what file systems sees as “block”

VG: Volume Group, like a storage pool

1 Physical Volume can have/contain multiple Physical Extents

1 Logical Volume can have/contain multiple Logical Extents

1 Volume Group is made up of multiple Physical Volumes

A Logical Volume is created from a Volume Group; the LV is what the system sees as a device, similar to /dev/sda etc.
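The concepts above can be exercised safely on file-backed loop devices before touching real disks; a disposable sketch (requires root; all names are examples):

```shell
# Create two fake 100MB "disks" and expose them as block devices
truncate -s 100M disk0.img disk1.img
pv0=$(losetup --show -f disk0.img)
pv1=$(losetup --show -f disk1.img)
pvcreate "$pv0" "$pv1"            # two Physical Volumes
vgcreate vgdemo "$pv0" "$pv1"     # one Volume Group spanning both PVs
lvcreate -L 50M -n lvdemo vgdemo  # carve a Logical Volume from the VG
lvs vgdemo                        # inspect the result
# Tear everything down again
lvremove -y vgdemo/lvdemo && vgremove vgdemo && pvremove "$pv0" "$pv1"
losetup -d "$pv0" "$pv1" && rm disk0.img disk1.img
```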

View Physical Volumes: pvdisplay

View Logical Volumes: lvdisplay

View Volume Groups: vgdisplay

Create Physical Volume

Use the entire disk

pvcreate /dev/sdb

Use only one partition: use fdisk to change the partition type to 0x8E (Linux LVM)

Create Volume Group

Create one Volume Group (vg1) from two disks (/dev/sda and /dev/sdb)

vgcreate vg1 /dev/sda /dev/sdb

Add Physical Volume to Volume Group

vgextend vg1 /dev/sdc

Create Logical Volume

Check size of Physical Extent

vgdisplay vg1

Create a 10GB Logical Volume

lvcreate -L 10G -n lv1 vg1

Create ext4

mkfs.ext4 /dev/vg1/lv1
mkfs -t ext4 /dev/vg1/lv1

Extend file system size

(Extend data partition with another 10GB)

Extend Logical Volume size

lvextend -L +10G /dev/vg1/lv1

Extend file system size

resize2fs /dev/vg1/lv1

Shrink file system size

Shrink to 10GB

umount /data
# Check the file system first (required before shrinking)
e2fsck -f /dev/vg1/lv1
# Shrink the file system to 10GB
resize2fs /dev/vg1/lv1 10G
# Then shrink the Logical Volume to the same size
lvreduce -L 10G /dev/vg1/lv1
mount -t ext4 /dev/vg1/lv1 /mnt/lv1

Extend Logical Volume and file system

Extend the Logical Volume so that it uses all free space from the Volume Group, then extend the file system

lvextend -l +100%FREE /dev/vg1/lv1
resize2fs /dev/vg1/lv1

Use all of the free space on Volume Group to create a new Logical Volume

lvcreate -l 100%FREE -n lv2 vg1

Proxmox (PVE) Unlock locked virtual machine

qm unlock 100

Proxmox (PVE) Stop virtual machine

qm stop 100

Note: Replace 100 with your virtual machine ID
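Before unlocking, it can help to see why the VM is locked in the first place (often a leftover backup or snapshot lock); assuming VM ID 100:

```shell
# Print the lock reason, if any (e.g. "lock: backup")
qm config 100 | grep -i '^lock'
```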

How to fix Microsoft Hyper-V Error: The application encountered an error while attempting to change the state of ‘New Virtual Machine’.

The Error:

The application encountered an error while attempting to change the state of ‘New Virtual Machine’.

‘New Virtual Machine’ failed to start.

Synthetic SCSI Controller (Instance ID xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx): Failed to Power on with Error ‘General access denied error’.

Hyper-V Virtual Machine Management service Account does not have permission to open attachment ‘D:\vm.vhdx’. Error ‘General access denied error’.

Hyper-V Error: Virtual Machine Connection – ‘New Virtual Machine’ failed to start

Note: Other similar errors related to virtual hard drive (vhd, vhdx) permission issues when starting a Hyper-V virtual machine might also be fixed by using the following method.

The Fix:

Method 1:

Remove the virtual hard drive from the virtual machine via Hyper-V Manager, then reattach it

Method 2:

1 We need the SID of the virtual machine; usually it is displayed in the error dialogue. If you have the SID, continue with step 2. If not, follow the steps below:

1.1 Open the Run window by using Win + R key combination.

Microsoft Windows – Run window

1.2 Type virtmgmt.msc and hit the “OK” button

1.3 Write down the name of the virtual machine which is having the permission issue.

1.4 Use the key combination Win + X, click on Windows PowerShell (Admin) to open a PowerShell window. Type the following command and hit the Enter key (replace Name of Virtual Machine with your virtual machine’s name)

Get-VM 'Name of Virtual Machine' | Select-Object VMID
Windows PowerShell – Get-VM ‘Name of Virtual Machine’ | Select-Object VMID

2 Enter the following command in PowerShell to grant this virtual machine permission to attach the virtual hard drive.

icacls "<Path of .vhd or .avhd file>" /grant "NT VIRTUAL MACHINE\<Virtual Machine ID from step 1>":F

Alternatively, if that form is rejected, grant to the ID without the account prefix:

icacls "<Path of .vhd or .avhd file>" /grant "<Virtual Machine ID from step 1>":F

Tip: By default, Hyper-V stores virtual machine configuration files in “C:\ProgramData\Microsoft\Windows\Hyper-V” and hard drives in “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks”