Proxmox Install and Configure

This content pack manages deployment of Proxmox hypervisor nodes and configuration of the installed Proxmox hypervisor.

This content pack utilizes the Debian bootenv as a baseline for the Proxmox installation. The Proxmox-released "appliance ISO" is not network installable by default, and requires a fair amount of work to rip apart and rebuild to make it network installable.

The version of Proxmox that is installed is dependent on the base Debian version you install. Examples of the Proxmox versions you will get based on the Debian base version:

  • Debian 10 = Proxmox 6.x
  • Debian 11 = Proxmox 7.x

Upgrading Proxmox Content Pack

To upgrade from the Proxmox content pack v4.9.0 or older, perform the following preparatory tasks:

  • Convert any existing profiles in configuration data content packs that use Params with buster in the name; replace buster with debian
  • Remove any Workflows or Stages on any Machines that contain the buster named Stages
  • Upgrade the content pack as per normal operations (e.g. drpcli catalog item install proxmox --version=v4.11.0, or via the Catalog menu item in the Web UX Portal)

The Params and Stages that contained "buster" in their names have been renamed to a more generic "debian" form to support installing the content on newer versions of Debian distros.
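As a concrete sketch of the rename, the fragment below rewrites buster Param names in an exported profile JSON. The profile name and Param name shown are illustrative examples, not names from a real deployment; on a live endpoint the rewritten JSON would then be re-imported (e.g. with drpcli profiles update).

```shell
# Sketch: rewrite old "buster" Param names in an exported profile.
# The profile name and Param below are illustrative, not real names.
cat > /tmp/old-profile.json <<'EOF'
{
  "Name": "my-proxmox-config",
  "Params": {
    "proxmox/flexiflow-buster-install": [ "network-add-nat-bridge" ]
  }
}
EOF

# Replace "buster" with "debian" in the Param names.
sed 's/buster/debian/g' /tmp/old-profile.json > /tmp/new-profile.json

# Confirm the rename took effect.
grep 'proxmox/flexiflow-debian-install' /tmp/new-profile.json
```

The same substitution applies to any Stage names referenced in your Workflows or Profiles.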

Installation Preparation Information

Proxmox installation has been moved to the Universal Pipelines and workflow chaining process. This significantly simplifies the installation path, while providing all of the enhanced customizability of the Universal workflows.

The basic customization and configuration methods for Proxmox have not changed from the previous implementation.


RackN does NOT suggest using the standalone (non-universal) Proxmox workflows. They are maintained in this content pack for backwards compatibility, and will be DEPRECATED and removed from future versions.

The Proxmox content bundle relies on drp-community-content to install the base Debian version on the target systems.

Debian package prerequisites require human input to the packaging system. To automate this, we utilize the debconf-set-selections mechanism to preseed the answers for those packages (e.g. Samba and Postfix). The preseed debconf selections template is defined by the proxmox/debconf-selections-template Param, which by default points to the proxmox-debconf-selections-template.tmpl template. To customize the selections, set the Param to point to your own custom template with the appropriate selections.

By default, the debconf selections provide answer values for Postfix and Samba. If the Proxmox host needs these services configured to interact with external infrastructure (e.g. to send outbound mail), you must adjust the selections appropriately.
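To illustrate the preseed format, the sketch below writes a small selections file for Postfix and Samba. These values are illustrative defaults, NOT the contents of proxmox-debconf-selections-template.tmpl; the hostname and answer values are assumptions you would tailor to your environment.

```shell
# Illustrative debconf preseed answers for Postfix and Samba -- NOT the
# actual contents of proxmox-debconf-selections-template.tmpl.
# Each line is: <package> <question> <type> <value>
cat > /tmp/custom-selections <<'EOF'
postfix postfix/main_mailer_type select Local only
postfix postfix/mailname string pve01.example.com
samba-common samba-common/dhcp boolean false
EOF

# On the target host these answers would be loaded before package install:
#   debconf-set-selections < /tmp/custom-selections
# Sanity-check that every line carries the four required fields:
awk 'NF < 4 { bad=1 } END { print (bad ? "format BAD" : "format OK") }' /tmp/custom-selections
```

To send outbound mail, the main_mailer_type answer would change to "Internet Site" (or similar) with a real mailname for your domain.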

Debian OS Network Configuration

Currently, the RackN netwrangler Network Configuration tooling does not support network configuration for Debian / Proxmox systems. As a consequence, Tasks and tooling have been written to build up the base OS network configuration to meet the topology requirements of given virtualized use cases.

The primary concern is to integrate the hypervisor's network configuration and IP addressing needs with the virtualized network topology and IP addressing requirements of the virtualized infrastructure built on top of Proxmox.

To address that, the Proxmox workflows support custom network topology configuration of the base hypervisor OS (Debian) with the use of the Flexiflow system to inject tasks to handle network reconfiguration. This is handled by setting an array of tasks in the proxmox/flexiflow-debian-install Param, which drives the flexiflow-debian-install Stage to inject tasks.

Several prebuilt network topology Tasks exist that may meet the operator's needs. These are as follows:

  • network-add-nat-bridge
  • network-convert-interface-to-bridge
  • network-simple-bridge-with-addressing
  • network-simple-single-bridge-with-nat

More than one network reconfiguration task can be specified in the array to combine multiple tasks to generate the final desired configuration. Please review each Task to understand what is being changed to support network topology use cases.
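For example, a Profile combining two of the prebuilt tasks in the proxmox/flexiflow-debian-install Param might look like the sketch below. The Profile name is an illustrative example, and the drpcli invocation shown in the comment is an assumed loading method.

```shell
# Sketch of a Profile that injects two of the prebuilt network tasks via
# the proxmox/flexiflow-debian-install Param. Tasks run in array order.
# The Profile name "my-proxmox-network" is illustrative.
cat > /tmp/proxmox-network-profile.json <<'EOF'
{
  "Name": "my-proxmox-network",
  "Params": {
    "proxmox/flexiflow-debian-install": [
      "network-convert-interface-to-bridge",
      "network-add-nat-bridge"
    ]
  }
}
EOF

# On a live endpoint this could be loaded with (assumed invocation):
#   drpcli profiles create - < /tmp/proxmox-network-profile.json
grep -c 'network-' /tmp/proxmox-network-profile.json
```

Attach the resulting Profile to the target Machine before starting the install, so the flexiflow-debian-install Stage can inject the listed tasks.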

Due to how network reconfigurations occur in Debian, some tasks may require forcing a reboot, or a detailed understanding of how to tear down existing configurations to reach a desired topology. For example, if the base OS uses Bonded interfaces that need to be broken and reconfigured, correctly tearing down the bonds and building things back up again requires a lot of work; in some cases it is safer/cleaner to reboot the system.

Custom templates can be written to effect specific network topology changes as required, if the provided tasks/templates are not sufficient for your use case. Please review the Tasks named above and their associated Templates to understand how to customize via the Flexiflow task injection.

Installation via Universal Pipelines

The following information outlines installation of Proxmox 7 via the Debian base repositories. Future versions may need to adjust this example (e.g. "Debian 12" deploys "Proxmox 8").

The process should be fairly straightforward. YOU MUST build the appropriate network topology and configuration necessary for your virtualized environment use case. This should be provided in a Profile attached to your target machine PRIOR TO INSTALLING Proxmox. See the above documentation for details.


Debian 11 can ONLY be installed with Internet access. Go thank the Debian maintainers for that. Consequently, though, you will not need to install the Debian 11 bootenv ISO.

  1. Install the BootEnv ISO for the Base OS, if required (see above note)
  2. Add your Proxmox configuration Profile to the target bare metal Machine
  3. Set the Pipeline on the Machine to proxmox-7
  4. Set the Workflow on the Machine to universal-discover
  5. Hit the Start Workflow button
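The numbered steps above can be sketched as drpcli commands. The machine UUID and the Profile names are placeholders/assumptions (in particular, the pipeline-profile name follows typical Universal Pipeline naming conventions but is not verified); the Web UX performs the same operations via its menus.

```shell
# Sketch only -- the UUID, profile names, and pipeline-profile naming
# below are placeholders / assumptions, not verified names.
cat > /tmp/proxmox-install-plan.sh <<'EOF'
#!/bin/sh
MACHINE_UUID="$1"
# 1. Install the base OS BootEnv ISO if required (not needed for Debian 11):
#drpcli bootenvs uploadiso debian-11-install
# 2. Attach the Proxmox configuration Profile (created beforehand):
drpcli machines addprofile "$MACHINE_UUID" my-proxmox-config
# 3. Set the Pipeline to proxmox-7 (profile name here is an assumption):
drpcli machines addprofile "$MACHINE_UUID" universal-application-proxmox-7
# 4+5. Set the Workflow to universal-discover and start it:
drpcli machines workflow "$MACHINE_UUID" universal-discover
EOF
chmod +x /tmp/proxmox-install-plan.sh
grep -c 'drpcli machines' /tmp/proxmox-install-plan.sh
```

Run the script against a real endpoint only after substituting your actual machine UUID and Profile names.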

Virtual Machine IP Address Assignments

Ultimately, how the Virtual Machines on the Proxmox system obtain their IP addressing determines which of the preceding OS Network Configuration steps are required. Some examples are:

  • Completely private IP addressing, isolated from external networks - requires hypervisor/OS SSH tunneling, VPNs, or custom NAT translations to make the VMs reachable from the outside
  • Virtual Machine Layer 3 IP networks, routed by external network devices, to the external IP of the Hypervisor
  • DHCP or Static IP assignments from the Hypervisor's Layer 2 / Layer 3 network, bridging the hypervisor interface directly to the VMs' networks

In some cases, the NAT translation network configuration tasks in these workflows can help with the first case (private networks).
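For reference, the fragment below sketches the generic Debian ifupdown pattern that a NAT bridge for private VM addressing follows. It is NOT the literal output of the network-*-nat tasks; the bridge name (vmbr1), address range (10.10.10.0/24), and uplink interface (eno1) are assumptions.

```shell
# Generic Debian /etc/network/interfaces sketch of a NAT bridge for
# private VM addressing. Illustrates the pattern the NAT tasks set up;
# NOT their literal output. vmbr1, 10.10.10.0/24, and eno1 are assumed.
cat > /tmp/interfaces.nat-example <<'EOF'
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # Masquerade VM traffic out the hypervisor uplink:
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE
EOF

grep -c 'post-up' /tmp/interfaces.nat-example
```

VMs attached to vmbr1 would use 10.10.10.1 as their gateway; inbound reachability would still require port forwarding, tunneling, or a VPN as described above.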

If the VMs are addressable on the external network of the Hypervisor, then an external DRP Endpoint can provision the addressable VMs via standard PXE/TFTP boot and installation workflows.

If the VMs are not addressable directly on the external networks, a DRP endpoint may be installed on the Hypervisor OS alongside the Proxmox components to provision the VMs.

For more complete details of these differences, see the following DRP Deployment section.

DRP Deployment of Virtual Machines

Digital Rebar Platform (DRP) v4.11 and newer now supports virtual machine infrastructure management via the cloud-wrappers content on top of Proxmox hypervisors.

The process essentially entails creating a Resource Broker which provides the connection and API calls to a given individual Hypervisor or cluster of Proxmox hypervisors.

A Cluster object is then used to manage the request to add or remove Virtual Machines, via a specific Resource Broker.

Please review the Cloud Wrappers documentation for more details on how to achieve this.

Workflow Definitions

The primary installation workflow is provided in the universal catalog content bundle. It is called universal-proxmox, but should not be used directly. See the Installation via Universal Pipelines instructions above for usage.

Here is a brief summary of the now DEPRECATED workflows provided in the Proxmox content pack.


DEPRECATED - please do not use these workflows. See the Installation via Universal Pipelines section above for correct installation method.

  • proxmox-debian-install: Install Debian 10 (Buster) linux, install Proxmox, setup admin account, create storage configs, enable KVM nested mode, configure base hypervisor network
  • proxmox-only-install: no Debian install, only Proxmox install and configure

These workflows have been removed from the Proxmox content as of v4.11.

  • proxmox-install-and-lab-setup
  • proxmox-lab-centos-image
  • proxmox-lab-create
  • proxmox-lab-destroy
  • proxmox-lab-drp-setup

Profiles with Example Usage

Review the Profile provided in the Proxmox content pack for example usage scenarios. Example profile(s) will start with the prefix EXAMPLE-.

Future To Do Items

This is a list of future enhancements that are planned for the Proxmox content.

Current ToDo list:

  • [completed in v4.11] separate Proxmox and RackN Lab components into separate content packs
  • [deprecated in v4.11] restructure the workflows as individual non-overlapping workflows as follows:

  • [completed] base OS install and customization (or move debconf selection handling to community content)

  • [completed] base OS network topology reconfiguration (preferably netwrangler should support this instead)
  • [completed] proxmox package installation
  • [completed] proxmox configuration and setup
  • [completed] generic VM create capability (this may move to new WorkOrders system)
  • [completed] generic VM destroy capability (this may move to new WorkOrders system)
  • [partially completed in v4.11] RackN usage scenarios

    • lab create
    • lab destroy
  • [completed in v4.11] move the newly restructured workflows to Universal wrapped workflows

  • [completed in v4.11] possibly integrate cloud-wrappers to drive VM management on top of Proxmox hosts or clusters
  • implement Proxmox Cluster management between multiple hypervisors
  • enable more Storage configuration capabilities (e.g. shared Ceph storage, zfs, nfs)
  • move to Netwrangler network topology management for Hypervisor network config (requires netwrangler supporting Debian base network configuration methodology)