bulk vm creation script on proxmox
I am trying out various Linux distributions to run on my homelab using Proxmox VE.
I have a small script to bulk spin up VMs for my homelab from a list of servers and their specifications in a CSV file.
Preparation
Here is the list of things to prepare:
- a templated cloud-init image for each distribution (this gives you the VMID of the base image)
- a username to set on the cloud-init image
- an SSH public key for that user
- a nameserver list to set on the cloud-init image
- a search domain to set on the cloud-init image
For reference, here is how to check the Proxmox VE version.
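# check the Proxmox VE version
pveversion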
CPUTYPE to choose on Proxmox
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_qemu_cpu_types
https://www.yinfor.com/2023/06/how-i-choose-vm-cpu-type-in-proxmox-ve.html
I came across the two URLs above while figuring out which CPU type to set for my VMs on Proxmox.
In my case it's "x86-64-v3", and that is used throughout this post.
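As a quick check on the host itself, the glibc dynamic loader can report which x86-64 microarchitecture levels the CPU supports. This relies on glibc 2.33 or newer; Proxmox VE 8 is based on Debian 12, which qualifies.

# list the x86-64 microarchitecture levels this host CPU supports
# (look for lines like "x86-64-v3 (supported, searched)")
/lib64/ld-linux-x86-64.so.2 --help | grep x86-64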
Templated Cloud-Init Image
Basically, you search for a cloud-init image for your Linux distribution of choice, download it onto the Proxmox host, and follow the instructions in the document below to create a templated cloud-init image you can create VMs from.
https://pve.proxmox.com/wiki/Cloud-Init_Support
Now, let me take Debian 12 Bookworm as an example to go through the actual steps.
https://cdimage.debian.org/images/cloud/
There are a variety of cloud images available. Let me go with the genericcloud type this time, described on the download page as "Plain VM (amd64), suitable for use with QEMU":

genericcloud: Similar to generic. Should run in any virtualised environment. Is smaller than generic by excluding drivers for physical hardware.
# directory to store downloaded cloud-init images
PVE_IMAGE_DIR=/opt/pve/dnld
mkdir -p $PVE_IMAGE_DIR
cd $PVE_IMAGE_DIR
# download qcow2 image and checksum file
wget https://cdimage.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2
wget https://cdimage.debian.org/images/cloud/bookworm/latest/SHA512SUMS
# verify (--ignore-missing skips the many other images listed in SHA512SUMS)
sha512sum --ignore-missing -c SHA512SUMS
# view the image details
qemu-img info debian-12-genericcloud-amd64.qcow2
# set the variable for the image
CIIMAGE=$PVE_IMAGE_DIR/debian-12-genericcloud-amd64.qcow2
# ID of the cloud-init template image
TEMPLATE_ID=9006
# create a VM template using the downloaded cloud-init image
# CPUTYPE is x86-64-v3 in my case
# the disk I use for my VMs is "local-zfs"
PVE_CPUTYPE=x86-64-v3
PVE_DISK=local-zfs
qm create $TEMPLATE_ID --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single --cpu cputype=$PVE_CPUTYPE --ostype l26
qm set $TEMPLATE_ID --scsi0 $PVE_DISK:0,import-from=$CIIMAGE
qm set $TEMPLATE_ID --ide2 $PVE_DISK:cloudinit
qm set $TEMPLATE_ID --serial0 socket --vga serial0
qm set $TEMPLATE_ID --boot order=scsi0
qm template $TEMPLATE_ID
# check the resulting image
qm config $TEMPLATE_ID
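Before moving on, you can sanity-check the template with a throwaway clone. The VMID 999 and the VM name here are arbitrary examples, not part of the workflow:

# optional: verify the template boots by making a throwaway clone
qm clone $TEMPLATE_ID 999 --name template-test
qm start 999
# watch it boot on the serial console (qm terminal 999), then clean up
qm stop 999
qm destroy 999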
Preparing the List of Servers to Create
Let me prepare the build directory with the CSV file containing the list of servers to build.
My server list looks like this.
By the way, note that I'm using 192.0.2.0/24 and example.net just for this write-up; in reality I use my own domain name and IP addresses from my subnet.
# hostname, vmid, os, cpu, memory(mb), disk(gb), ipaddr/subnetmask, gw
# vmhost1, 8999, debian, 4, 8192, 128, 192.0.2.30/24, 192.0.2.1
etcd2,1303,debian,2,2048,32,192.0.2.11/24,192.0.2.62
etcd3,1304,debian,2,2048,32,192.0.2.12/24,192.0.2.62
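A malformed row would produce a half-configured VM, so a quick sanity check on the field count doesn't hurt. This one-liner is just an illustration, assuming the 8-column layout above:

# flag any data row that does not have exactly 8 comma-separated fields
awk -F',' '!/^#/ && NF > 0 && NF != 8 { printf "line %d has %d fields: %s\n", NR, NF, $0 }' vm_list.csv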
Bulk VM Build Script
Let me prepare an .env file to set the username and the VMIDs of the available cloud-init image templates.
Here I set the username, the SSH public key path, the nameservers, and the search domain to configure via cloud-init. Place the SSH public key for the user at the path set here. Once the VM is built, you can log on as this username with the matching private key.
The list of templated cloud-init image IDs is also set here. As you prepare more distros, you can modify this file to add them as image options.
# .env file
# cloud-init
CI_USERNAME=ansible-hlv3
CI_USER_SSH_PUB=/opt/pve/ssh_pub/id_ed25519_ansible.pub
CI_NAMESERVERS=192.0.2.16 192.0.2.17
CI_SEARCHDOMAIN=example.net
# VM TEMPLATE ID
DEBIAN_TEMPLATE_ID=9006
RHEL_TEMPLATE_ID=9003
UBUNTU_TEMPLATE_ID=9004
ROCKY_TEMPLATE_ID=9000
ORACLE_TEMPLATE_ID=9001
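Before the first run, it may be worth confirming that every template ID listed in .env actually exists on the node. A minimal sketch, with the IDs hard-coded from the file above:

# report any template ID from .env that does not exist on this node
for id in 9000 9001 9003 9004 9006; do
    qm config "$id" >/dev/null 2>&1 || echo "template $id not found"
done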
Now, this is the bulk build script. Some variables are taken from the .env file just created.
Matching an OS name such as "debian" to its template ID is done in get_template_id(). Update both .env and this function when you update or add distros.
Briefly, here is what the script does:
- create the new VM by cloning the existing cloud-init template image
- set cloud-init configurations: username, SSH public key, nameservers, and search suffix
- set the CPU count and memory size
- resize the disk
- set onboot=1 to automatically start the VM whenever the Proxmox host reboots
- create a snapshot named "init" with the timestamp added in the description
#!/bin/bash
# files to use
ENV_FILENAME=.env
VM_LIST_CSV=vm_list.csv
# function to load variables from .env file
load_dotenv() {
    while IFS= read -r line; do
        # skip blank lines and comments
        if [[ -z "$line" || "$line" =~ ^# ]]; then
            continue
        fi
        # export the KEY=VALUE pair into the environment
        export "$line"
    done <"$ENV_FILENAME"
}
# function to get template VM ID per OS
function get_template_id() {
    case $OS in
        debian)
            TEMPLATE_ID=$DEBIAN_TEMPLATE_ID
            ;;
        rhel)
            TEMPLATE_ID=$RHEL_TEMPLATE_ID
            ;;
        ubuntu)
            TEMPLATE_ID=$UBUNTU_TEMPLATE_ID
            ;;
        rocky)
            TEMPLATE_ID=$ROCKY_TEMPLATE_ID
            ;;
        oracle)
            TEMPLATE_ID=$ORACLE_TEMPLATE_ID
            ;;
        *)
            echo "Invalid OS: $OS"
            exit 1
            ;;
    esac
}
# function for VM bulk build
function bulk_build_vm() {
    local IFS=','
    while read -r line; do
        # skip blank lines and comments
        if [[ -z "$line" || "$line" =~ ^# ]]; then
            continue
        fi
        # split the CSV row into positional parameters (IFS=',')
        set -- $line
        HOSTNAME=$1
        VMID=$2
        OS=$3
        CPU=$4
        MEMORY=$5
        DISK="${6}G"
        NIC=$7
        GW=$8
        get_template_id
        # clone from the template, then apply cloud-init and hardware settings
        qm clone $TEMPLATE_ID $VMID --name $HOSTNAME
        qm set $VMID --sshkeys $CI_USER_SSH_PUB
        qm set $VMID --ciuser $CI_USERNAME
        qm set $VMID --ipconfig0 ip=$NIC,gw=$GW
        qm set $VMID --nameserver "${CI_NAMESERVERS}"
        qm set $VMID --searchdomain $CI_SEARCHDOMAIN
        qm set $VMID --cores $CPU --memory $MEMORY
        qm set $VMID --onboot 1
        qm disk resize $VMID scsi0 $DISK
        # snapshot the fresh VM so it is easy to roll back to a clean state
        qm snapshot $VMID init --description "Created at $(date --utc --iso-8601=seconds)"
    done <"$VM_LIST_CSV"
}
# load variables from .env file
load_dotenv
# bulk build VMs
bulk_build_vm
Example execution log.
# ./bulk_build.sh
create full clone of drive ide2 (local-zfs:vm-9006-cloudinit)
create linked clone of drive scsi0 (local-zfs:base-9006-disk-0)
update VM 1303: -sshkeys xxx
update VM 1303: -ciuser ansible-hlv3
update VM 1303: -ipconfig0 ip=192.0.2.11/24,gw=192.0.2.62
update VM 1303: -nameserver 192.0.2.16 192.0.2.17
update VM 1303: -searchdomain example.net
update VM 1303: -cores 2 -memory 2048
update VM 1303: -onboot 1
snapshotting 'drive-scsi0' (local-zfs:base-9006-disk-0/vm-1303-disk-0)
create full clone of drive ide2 (local-zfs:vm-9006-cloudinit)
create linked clone of drive scsi0 (local-zfs:base-9006-disk-0)
update VM 1304: -sshkeys xxx
update VM 1304: -ciuser ansible-hlv3
update VM 1304: -ipconfig0 ip=192.0.2.12/24,gw=192.0.2.62
update VM 1304: -nameserver 192.0.2.16 192.0.2.17
update VM 1304: -searchdomain example.net
update VM 1304: -cores 2 -memory 2048
update VM 1304: -onboot 1
snapshotting 'drive-scsi0' (local-zfs:base-9006-disk-0/vm-1304-disk-0)
# qm config 1303
boot: order=scsi0
ciuser: ansible-hlv3
cores: 2
cpu: cputype=x86-64-v3
ide2: local-zfs:vm-1303-cloudinit,media=cdrom,size=4M
ipconfig0: ip=192.0.2.11/24,gw=192.0.2.62
memory: 2048
meta: creation-qemu=9.0.2,ctime=1741480019
name: etcd2
nameserver: 192.0.2.16 192.0.2.17
net0: virtio=BC:24:11:16:50:A2,bridge=vmbr0
onboot: 1
ostype: l26
parent: init
scsi0: local-zfs:base-9006-disk-0/vm-1303-disk-0,size=32G
scsihw: virtio-scsi-single
searchdomain: example.net
serial0: socket
smbios1: uuid=73b42a6c-b9c8-4c83-9b26-47d86821df39
sshkeys: ssh-ed25519%20AAAAC3NzaC1lZDI1NTE5AAAAICL2ZwI2LR%2BVb%2BqFL6wgEwlhLRVD1CIO71bEmrlAGVj1%20ansible%0A
vga: serial0
vmgenid: f26df15a-f2d8-48e3-9382-255ccd998c50
# qm listsnapshot 1303
`-> init 2025-03-09 00:57:10 Created at 2025-03-09T00:57:10+00:00
`-> current You are here!
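That "init" snapshot is what makes experiments cheap: if a VM gets messed up later, it can be rolled back to its freshly built state.

# roll VM 1303 back to the state captured right after the build
qm rollback 1303 init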
Now, run qm start 1303 and qm start 1304 to start those VMs.
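Or, to start every VM in the list at once, something like this one-liner works (just an illustration, assuming the CSV layout above, where field 2 is the VMID):

# start every VM listed in the CSV
awk -F',' '!/^#/ && NF { print $2 }' vm_list.csv | xargs -n1 qm start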
Log on and see
Log on as the username set in the .env file, using the SSH private key that matches the public key also specified in the .env file.
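For example, from my workstation (the private key path here is an assumption; use wherever you keep the key pair):

# log on to etcd2 (192.0.2.11) as the cloud-init user
ssh -i ~/.ssh/id_ed25519_ansible ansible-hlv3@192.0.2.11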
You can confirm that this VM is running the latest Debian 12 Bookworm (12.9) and has 2 CPUs, 2G of memory, and a 32G disk.
ansible-hlv3@etcd2:~$ ls -la .ssh
total 12
drwx------ 2 ansible-hlv3 ansible-hlv3 4096 Mar 9 01:01 .
drwxr-xr-x 3 ansible-hlv3 ansible-hlv3 4096 Mar 9 01:01 ..
-rw------- 1 ansible-hlv3 ansible-hlv3 89 Mar 9 01:01 authorized_keys
ansible-hlv3@etcd2:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
ansible-hlv3@etcd2:~$ cat /etc/debian_version
12.9
ansible-hlv3@etcd2:~$ sudo echo hi
hi
ansible-hlv3@etcd2:~$ grep -c processor /proc/cpuinfo
2
ansible-hlv3@etcd2:~$ grep MemTotal /proc/meminfo
MemTotal: 2027032 kB
ansible-hlv3@etcd2:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 979M 0 979M 0% /dev
tmpfs 198M 480K 198M 1% /run
/dev/sda1 32G 914M 30G 3% /
tmpfs 990M 0 990M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda15 124M 12M 113M 10% /boot/efi
tmpfs 198M 0 198M 0% /run/user/1000
ansible-hlv3@etcd2:~$ sudo fdisk -l
Disk /dev/sda: 32 GiB, 34359738368 bytes, 67108864 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E87670DB-F40E-7D44-B763-861A19E45656
Device Start End Sectors Size Type
/dev/sda1 262144 67108830 66846687 31.9G Linux root (x86-64)
/dev/sda14 2048 8191 6144 3M BIOS boot
/dev/sda15 8192 262143 253952 124M EFI System
Partition table entries are not in disk order.