22.38. proxmox - Proxmox Install and Configure

The following documentation is for Proxmox Install and Configure (proxmox) content package at version v4.12.0-alpha00.78+gc037aaa40eb3ad853690ce178f9ab8a5bae4c436.

This content pack manages deployment of Proxmox hypervisor nodes and configuration of the installed Proxmox hypervisor.

This content pack utilizes the Debian bootenv as a baseline for the Proxmox installation. The Proxmox released “appliance ISO” is not network installable by default, and requires a fair amount of work to rip apart the ISO and rebuild it to make it network installable.

The version of Proxmox that is installed is dependent on the base Debian version you install. Examples of the Proxmox versions you will get based on the Debian base version:

  • Debian 10 = Proxmox 6.x

  • Debian 11 = Proxmox 7.x

## Upgrading Proxmox Content Pack

To upgrade from proxmox v4.9.0 and older versions, please perform the following preparatory tasks:

  • Convert any existing profiles in configuration data content packs that use Params with the name buster in them; replace buster with debian

  • Remove any Workflows or Stages on any Machines that contain the buster named Stages

  • Upgrade the content pack as per normal operations (e.g. drpcli catalog item install proxmox --version=v4.11.0, or via the Catalog menu item in the Web UX Portal)

The Params and Stages that contained “buster” in their names have been moved to a more generic “debian” naming construct to support installing the content on newer versions of Debian distros.
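For illustration, a converted profile entry would change as follows (the buster-era Param name shown here is hypothetical; check your own profiles for the exact names in use):

```yaml
# BEFORE (v4.9.0 and older; hypothetical buster-era name):
# proxmox/flexiflow-buster-install:
#   - network-convert-interface-to-bridge

# AFTER (current releases; generic debian name):
proxmox/flexiflow-debian-install:
  - network-convert-interface-to-bridge
```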

## Installation Preparation Information

Proxmox installation has been moved to use the Universal Pipelines and workflow chaining process. This significantly simplifies the installation path, along with providing all of the enhanced customizability of the Universal workflows.

The basic customization and configuration methods for Proxmox have not changed from the previous implementation.

!!! note

RackN does NOT suggest using the standalone (non-universal) Proxmox workflows. They are maintained in this content pack for backwards compatibility, and will be DEPRECATED and removed from future versions.

The Proxmox content bundle relies on drp-community-content to install the base Debian version on the target systems.

Debian package prerequisites require human input to the packaging system. To automate this, we utilize the debconf-set-selections mechanism to preseed the answers for those packages (e.g. Samba and Postfix). The preseed debconf selections template is defined by the proxmox/debconf-selections-template Param. By default it defines the template as proxmox-debconf-selections-template.tmpl. To specify customizations for the selections, set the Param to point to your own custom template with the appropriate selections.

The debconf selections by default provide answer values for Postfix and Samba. If the Proxmox host needs these set to interact with external infrastructure (e.g. to send outbound mail), you must adjust them appropriately.
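As a sketch of the customization path (the profile and template names below are placeholders): create a Template whose body holds valid debconf-set-selections lines, such as postfix postfix/main_mailer_type select Internet Site, then point the Param at it from a Profile:

```yaml
# Hypothetical Profile overriding the default debconf selections template.
Name: my-proxmox-debconf
Params:
  # "my-debconf-selections.tmpl" must exist as a Template on the DRP endpoint.
  proxmox/debconf-selections-template: my-debconf-selections.tmpl
```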

## Debian OS Network Configuration

Currently, the RackN netwrangler Network Configuration tooling does not support network configuration for Debian / Proxmox systems. As a consequence, Tasks and tooling have been written to build up the Base OS network configuration to support the topology requirements of given virtualized use cases.

The primary concern is to integrate the hypervisor's network configuration and IP addressing needs with the virtualized network topology and IP addressing requirements of the virtualized infrastructure built on top of Proxmox.

To address that, the Proxmox workflows support custom network topology configuration of the base hypervisor OS (Debian) with the use of the Flexiflow system to inject tasks to handle network reconfiguration. This is handled by setting an array of tasks in the proxmox/flexiflow-debian-install Param, which drives the flexiflow-debian-install Stage to inject tasks.

Several prebuilt network topology Tasks exist that may meet the operator's needs. These are as follows:

  • network-add-nat-bridge

  • network-convert-interface-to-bridge

  • network-simple-bridge-with-addressing

  • network-simple-single-bridge-with-nat

More than one network reconfiguration task can be specified in the array to combine multiple tasks to generate the final desired configuration. Please review each Task to understand what is being changed to support network topology use cases.
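For example, a Param value combining the interface-to-bridge conversion with an additional NAT bridge might look like:

```yaml
proxmox/flexiflow-debian-install:
  - network-convert-interface-to-bridge
  - network-add-nat-bridge
```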

Due to how network reconfigurations occur in Debian, it is possible that some tasks may require forcing a reboot, or a detailed understanding of how to tear down existing configurations to reach a desired topology. For example, if the base OS is using Bonded interfaces that need to be broken and reconfigured, correctly tearing down the bonds and building things back up again requires a lot of work; in some cases it is safer/cleaner to reboot the system.

Custom templates can be written to effect specific network topology changes as required, if the provided tasks/templates are not sufficient for your use case. Please review the Tasks and the Templates named above to understand how to customize via the Flexiflow task injection.

## Installation via Universal Pipelines

The following information outlines installation of Proxmox 7 via the Debian base repositories. Future versions may need to adjust this example (eg “Debian 12” deploys “Proxmox 8”).

The process should be fairly straightforward. YOU MUST build the appropriate network topology and configuration necessary for your virtualized environment use case. This should be provided in a Profile attached to your target machine PRIOR TO INSTALLING Proxmox. See the above documentation for details.

!!! note

Debian 11 can ONLY be installed with Internet access. Go thank the Debian maintainers for that. Subsequently though, you will not need to install the Debian 11 bootenv ISO.

1. Install the BootEnv ISO for the Base OS, if required (see the above note)

2. Add your Proxmox configuration Profile to the target bare metal Machine

3. Set the Pipeline on the Machine to proxmox-7

4. Set the Workflow on the Machine to universal-discover

5. Hit the Start Workflow button

## Virtual Machine IP Address Assignments

Ultimately, how the Virtual Machines on the Proxmox system obtain their IP addressing determines the required previous OS Network Configuration steps. Some examples are:

  • Completely private IP addressing, isolated from external networks - requires hypervisor/OS SSH tunneling, VPNs, or custom NAT translations to make the VMs reachable from the outside

  • Virtual Machine Layer 3 IP networks, routed by external network devices, to the external IP of the Hypervisor

  • DHCP or Static IP assignments from the Hypervisor's Layer 2 / Layer 3 network, bridging the hypervisor interface directly to the VMs' networks

In some cases, the NAT translation network configuration tasks in these workflows can help with the first case (private networks).

If the VMs are addressable on the external network of the Hypervisor, then an external DRP Endpoint can provision the addressable VMs via standard PXE/TFTP boot and installation workflows.

If the VMs are not addressable directly on the external networks, a DRP endpoint may be installed on the Hypervisor OS alongside the Proxmox components to provision the VMs.

For more complete details of these differences, see the following DRP Deployment section.

## DRP Deployment of Virtual Machines

Digital Rebar Platform (DRP) v4.11 and newer now supports virtual machine infrastructure management via the cloud-wrappers content on top of Proxmox hypervisors.

The process essentially entails creating a Resource Broker, which provides the connection and API calls to a given individual Hypervisor or cluster of Proxmox hypervisors.

A Cluster object is then used to manage the request to add or remove Virtual Machines, via a specific Resource Broker.

Please review the [Cloud Wrappers documentation](../cloud-wrappers) for more details on how to achieve this.

## Workflow Definitions

The primary installation workflow is provided in the universal catalog content bundle. It is called universal-proxmox, but should not be directly used. See the [Installation via Universal Pipelines](#installation-via-universal-pipelines) instructions above for usage.

Here is a brief summary of the now DEPRECATED workflows provided in the Proxmox content pack.

!!! note

DEPRECATED - please do not use these workflows. See the [Installation via Universal Pipelines](#installation-via-universal-pipelines) section above for correct installation method.

  • proxmox-debian-install: Install Debian 10 (Buster) linux, install Proxmox, setup admin account, create storage configs, enable KVM nested mode, configure base hypervisor network

  • proxmox-only-install: no Debian install, only Proxmox install and configure

These workflows have been removed from the Proxmox content as of v4.11.

  • proxmox-install-and-lab-setup

  • proxmox-lab-centos-image

  • proxmox-lab-create

  • proxmox-lab-destroy

  • proxmox-lab-drp-setup

## Profiles with Example Usage

Review the Profile provided in the Proxmox content pack for example usage scenarios. Example profile(s) will start with the prefix EXAMPLE-.

## Future To Do Items

This is a list of future enhancements that are planned for the Proxmox content.

Current ToDo list:

  • [completed in v4.11] separate Proxmox and RackN Lab components into separate content packs

  • [deprecated in v4.11] restructure the workflows as individual non-overlapping workflows as follows:

    • [completed] base OS install and customization (or move debconf selection handling to community content)

    • [completed] base OS network topology reconfiguration (preferably netwrangler should support this instead)

    • [completed] proxmox package installation

    • [completed] proxmox configuration and setup

    • [completed] generic VM create capability (this may move to new WorkOrders system)

    • [completed] generic VM destroy capability (this may move to new WorkOrders system)

    • [partially completed in v4.11] RackN usage scenarios

      • lab create

      • lab destroy

  • [completed in v4.11] move the newly restructured workflows to Universal wrapped workflows

  • [completed in v4.11] possibly integrate cloud-wrappers to drive VM management on top of Proxmox hosts or clusters

  • implement Proxmox Cluster management between multiple hypervisors

  • enable more Storage configuration capabilities (e.g. shared Ceph storage, zfs, nfs)

  • move to Netwrangler network topology management for Hypervisor network config (requires netwrangler supporting Debian base network configuration methodology)

22.38.1. Object Specific Documentation

22.38.1.1. bootenvs

The content package provides the following bootenvs.

22.38.1.1.1. proxmox-6-install

This BootEnv installs the Proxmox 6 system.

22.38.1.1.2. proxmox-6-rackn-install

This BootEnv installs the Proxmox system. This is a rebuilt image from the stock ISO to support PXE installation process, as the community released ISO does not support PXE by default.

22.38.1.2. params

The content package provides the following params.

22.38.1.2.1. network-add-nat-bridge-template

The name of the template to utilize to configure the Add NAT Bridge (network-add-nat-bridge) network configuration.

The default is network-add-nat-bridge.cfg.tmpl

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/vm-nat-bridge.

22.38.1.2.2. network-convert-interface-to-bridge-template

The name of the template to utilize to configure the Convert Interface to Bridge (network-convert-interface-to-bridge) network configuration.

The default is network-convert-interface-to-bridge.cfg.tmpl

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/vm-external-bridge.

22.38.1.2.3. network-simple-bridge-with-addressing-template

The name of the template to utilize to configure the Network Simple Bridge with Addressing (network-simple-bridge-with-addressing) network configuration.

The default is network-simple-bridge-with-addressing.cfg.tmpl

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/vm-external-bridge.

22.38.1.2.4. network-simple-single-bridge-with-nat

The name of the template to utilize to configure the single flat bridge setup, with outbound NAT translation for the VMs.

The default is network-simple-single-bridge-with-nat

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/vm-external-bridge.

22.38.1.2.5. proxmox/data-profile

This parameter defines the Profile name for the profile that will carry dynamic data generated through the install process. For example, the generated SSH key halves will be saved to this profile.

!!! warning

It is critical that this is set to a unique value if you are maintaining multiple separate Proxmox deployments.
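A minimal sketch with an illustrative value, giving each deployment its own data profile:

```yaml
# Each separate Proxmox deployment gets a uniquely named data profile.
proxmox/data-profile: proxmox-site1-data
```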

22.38.1.2.6. proxmox/debconf-selections-template

Defines the template to use during installation for the debconf-set-selections process. To customize, create a new template with the correctly formatted debconf-set-selections values, and set this Param to the name of your custom template.

By default the template named proxmox-debconf-set-selections.tmpl will be used.

22.38.1.2.7. proxmox/drp-timeout-kill-switch

This is an emergency control outlet. If this parameter is set to true, and the machine is in the proxmox-drp-provision-drp task's timeout wait loop in the Shell execution, the task will evaluate this Param and exit from the loop with an error code.

The task will attempt to remove this Param from the machine prior to exiting with an error message.

22.38.1.2.8. proxmox/drp-wait-timeout

Changes the timeout wait for all DRP VMs to be created. Some particularly slow hardware may make this process longer than expected. The default value is 600 seconds.
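A sketch of both timeout controls together (values illustrative):

```yaml
# Give slow hardware 20 minutes instead of the 600 second default.
proxmox/drp-wait-timeout: 1200

# Emergency only: set to true at runtime to break out of a stuck
# provision wait loop; the task removes the Param before exiting.
proxmox/drp-timeout-kill-switch: true
```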

22.38.1.2.9. proxmox/flexiflow-create-storage

This Param controls setting up the Storage on a Proxmox host. It should be a list of Tasks responsible for setting up the specific supported types of storage on the Proxmox node. The Flexiflow system will inject each task into the running workflow to implement the changes.

The Storage types will be implemented in order as specified in the list.

Documentation on the Proxmox storage types can be found at https://pve.proxmox.com/wiki/Storage.

Each supported storage type is defined as a Task, which implements that Storage system configuration for Proxmox. See the above page for the supported types.

Storage types implementation status:
  • lvmthin: supported (the default method)

  • dir: supported

  • lvm: not implemented

  • zfspool: not implemented

  • btrfs: not implemented

  • nfs: not implemented

  • cifs: not implemented

  • pbs: not implemented

  • glusterfs: not implemented

  • cephfs: not implemented

  • iscsi: not implemented

  • iscsidirect: not implemented

  • rbd: not implemented

  • zfs: not implemented

See each specific task for configuration values/settings to configure each type.
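For example, to set up the default LVM Thin pool followed by a dir type store (tasks are injected in list order):

```yaml
proxmox/flexiflow-create-storage:
  - proxmox-storage-setup-lvmthin
  - proxmox-storage-setup-dir
```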

22.38.1.2.10. proxmox/flexiflow-debian-install

This Param contains an Array of Strings that will define which task or tasks to dynamically add to the flexiflow-debian-install workflow on first boot. One or more tasks may be specified; and each task defined by this Param will be executed in the order found in the list.

This is generally used to specify the network configuration in the base Hypervisor, before creating any target DRP or Machine VMs.

For example, the following tasks set network configuration up:

  • network-simple-bridge-with-addressing

To create a simple bridge, with an assigned IP address block to allocate to the “external” interfaces of the DRP Endpoint Virtual Machines. IP addressing for the DRP Endpoints must be provided by the external network (external to the Hypervisor), either via DHCP or static assignment. The DRP endpoints are essentially bridged to the Hypervisor's physical external network.

Another example:

  • network-convert-interface-to-bridge

The above migrates the IP Address on the base interface of the Proxmox Hypervisor to a bridge (identified by the Param proxmox/lab-network-external-interface); the DRP Endpoint VMs' external interfaces are then attached to this bridge.

Another example:

  • network-simple-single-bridge-with-nat

The above assumes that (typically) vmbr0 carries the hypervisor's primary IP address, and that Machines will be directly attached to this bridge. The machines will use a secondary network space (defined by proxmox/network-external-subnet), but will be set up to NAT to the Bridge's IP address for outbound internet connectivity.

No inbound NAT mappings are set up in this mode. If inbound IP connectivity to the VMs is required, then external routers need to route the proxmox/network-external-subnet to the Hypervisor's IP, or additional inbound NAT mappings need to be arranged.

  • network-add-nat-bridge

The above creates an additional bridge to abstract the connection from the Hypervisor's main NIC and Bridge, connecting the DRP Endpoints to this bridge. NAT Masquerading or similar constructs must be used to provide outbound network connectivity to the DRP Endpoints.

!!! warning

The network-add-nat-bridge NAT Masquerading mechanisms do not currently appear to work reliably. This method requires additional testing and development.

22.38.1.2.11. proxmox/install-drp-on-hypervisor

Depending on the network configuration used on the Hypervisors, the DRP Endpoint VMs may or may not need to be provisioned from the Hypervisor.

In the event that the DRP Virtual Machines do not obtain DHCP and PXE from outside of the Hypervisor, the operator will have to arrange to install an OS on the DRP VMs. The main workflows include a DRP Install on the Hypervisor task.

If this Param is set to true (NOT the default), then DRP will be installed in a very opinionated configuration.

22.38.1.2.12. proxmox/iso

The URL where the Proxmox install ISO can be found. The ISO will be modified to stage the original ISO as /proxmox.iso to enable network install of Proxmox. By default the ISO is not capable of installing via an HTTP network path.

22.38.1.2.13. proxmox/package-selections

This parameter defines the Package selection list to install initially. This list should contain at least proxmox-ve and any necessary supporting packages.

If the operator overrides the Default values specified in this Param, all packages must be specified in the updated Param values.

The list is a space separated string that must contain valid Debian package names. These packages must be available in the default repos unless additional apt repos have been setup and initialized prior to this task run.

!!! note

The default workflows assume the postfix and samba packages are installed (as specified by Proxmox requirements). There are special tasks for staging debconf-set-selections answers to automate installation of these packages successfully. If additional packages requiring input are added, the operator must implement a set of debconf-set-selections values appropriate to those packages.

This Param defaults to:

  • proxmox-ve postfix openvswitch-switch open-iscsi vim wget curl jq ifupdown2 lldpd

If the operator sets any values on this Param, the above packages MUST ALSO BE INCLUDED, AS THEY ARE REQUIRED.

This should likely be adjusted in the future to not allow these to be overridden.
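For example, an override that keeps all of the required default packages and appends one extra (zfsutils-linux here is purely illustrative) might look like:

```yaml
# Space separated string; the required defaults MUST remain in the list.
proxmox/package-selections: "proxmox-ve postfix openvswitch-switch open-iscsi vim wget curl jq ifupdown2 lldpd zfsutils-linux"
```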

22.38.1.2.14. proxmox/storage-config

This param is used to define the configuration for the various backend storage on a Proxmox host.

The Param is an object, with a Key for the type of storage (based on the supported setup configurations in Tasks), followed by another key that is specific to the given instance configuration. This allows for support of multiple objects of the same type.

```yaml
lvmthin:
  local-lvm:
    device: /dev/sdb
    vgname: pve
    thinpool: data
    content: rootdir,images
    size: 95%FREE
    maxfiles: "7"

dir:
  local-images:
    path: /var/lib/images
    content: images,iso

  local:
    path: /var/lib/vz
    content: iso,vztmpl,backup
    format: qcow2,vmdk

  backup:
    path: /mnt/backup
    content: backup
    prune-backups: keep-all=0,keep-daily=7,keep-hourly=24
```

!!! warning

You may not specify a dir type with the name local; this is a reserved Dir type of storage. Doing so will cause a fatal error in the Workflow and stop workflow processing.

The following types have not been implemented yet:

  • lvm, zfspool, btrfs, nfs, cifs, pbs, glusterfs, cephfs, iscsi, iscsidirect, rbd, zfs

For Storage configuration details, see the Proxmox documentation at https://pve.proxmox.com/wiki/Storage.

!!! note

This object type does not yet contain a Schema for validation of the configuration. Field values in each segment directly map to Proxmox Storage configuration directives; use the documentation for guidelines, and follow the example defined above. However, some values may be helpers specific to a task (e.g. device: /dev/sdb directs the task to create the LVM Thinpool using the specified backing Device).

A future implementation example for ZFS Pool configuration might look like:

```yaml
zfspool:
  local-zfs:
    pool: rpool/data
    sparse: true
    content: images,rootdir
```

If no values are specified (the default), then the product default of LVM-Thin type storage will be set up based on the proxmox/storage-device and proxmox/storage-name Param settings.

22.38.1.2.15. proxmox/storage-device

!!! warning

DEPRECATED: Use proxmox/storage-config instead.

This param is used to define the disk that the base storage volume will be created on. It defaults to /dev/sdb if not otherwise defined.

22.38.1.2.16. proxmox/storage-name

!!! warning

DEPRECATED: Use proxmox/storage-config instead.

This param is used to define the Thin Pool and LVM Logical Volume name that will be created on the PVE node.

It defaults to local-lvm, which is used when creating VMs. Ensure these values match.

22.38.1.2.17. proxmox/storage-skip-if-exists

Setting this Param value to true will cause the Storage tasks to exit with an error condition, stopping the workflow, if certain conditions arise.

For example, if creating a dir type storage, and that storage already exists, then this Param set to true will cause the Workflow to stop with a fatal error. If, however, the operator desires to continue on and skip trying to create the storage type, then set the value to false.

Note that the assumption in the above example is that the storage provider is already fully and correctly configured. No attempt will be made to apply additional configuration settings.

The default value is true (do exit Workflow on errors).
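A minimal sketch, following the semantics described above, that allows the Workflow to continue past already-existing storage:

```yaml
# Default is true (stop the Workflow with a fatal error on conflicts).
proxmox/storage-skip-if-exists: false
```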

22.38.1.2.18. proxmox/strip-kernel

Setting this Param value to true will cause the installer to remove the packages specified by the proxmox/strip-kernel-packages param. This is an optional step and not required for Proxmox installation.

The default value is false (do NOT strip the kernel packages off of the system).

22.38.1.2.19. proxmox/strip-kernel-packages

The default package list to remove from the final installed system. The Proxmox install guide optionally suggests removing the stock kernel packages. By default, this installer workflow does NOT strip these packages. To strip them, set proxmox/strip-kernel to true, and ensure this Param has the correct set of values for your installation.

The default value is linux-image-amd64 linux-image-4.19*.

!!! note

If a regex is used, you must single quote protect the regex from the shell interpreting it as a wildcard. See the default value setting for this param as a valid example.
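A sketch enabling the strip with the default package list (note the single quotes protecting the glob from shell expansion):

```yaml
proxmox/strip-kernel: true
proxmox/strip-kernel-packages: "linux-image-amd64 'linux-image-4.19*'"
```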

22.38.1.2.20. proxmox/vm-drp-nic

Must select one of the Proxmox supported NIC models from the list. The default is e1000. If you are running ESXi on top of Proxmox, you may need to change this (eg to vmxnet3 - especially for ESXi 7.x).

Additional documentation and details can be found on the Proxmox Wiki at https://pve.proxmox.com/wiki/Manual:_qm.conf.

22.38.1.2.21. proxmox/vm-drp-os-type

Must select one of the Proxmox supported OS Type models from the list. The default is l26 (Linux 2.6 or newer kernel).

Additional documentation and details can be found on the Proxmox Wiki at https://pve.proxmox.com/wiki/Manual:_qm.conf (search for ‘ostype: <l24’ to find them).

The list of supported OS Types is as follows:

  • other = unspecified OS

  • wxp = Microsoft Windows XP

  • w2k = Microsoft Windows 2000

  • w2k3 = Microsoft Windows 2003

  • w2k8 = Microsoft Windows 2008

  • wvista = Microsoft Windows Vista

  • win7 = Microsoft Windows 7

  • win8 = Microsoft Windows 8/2012/2012r2

  • win10 = Microsoft Windows 10/2016

  • l24 = Linux 2.4 Kernel

  • l26 = Linux 2.6 - 5.X Kernel

  • solaris = Solaris/OpenSolaris/OpenIndiana kernel

22.38.1.2.22. proxmox/vm-drp-storage

Must select one of the Proxmox supported Storage models from the list. The default is SCSI megasas.

Additional documentation and details can be found on the Proxmox Wiki at https://pve.proxmox.com/wiki/Manual:_qm.conf.

There are 3 types of controllers - ide, sata, and scsi. IDE and SATA do not have any additional configuration options. Anything else listed is a SCSI controller.
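For example, a Profile tuned for running ESXi guests on the DRP VMs might set the hardware models as follows (a sketch; only the NIC value differs from the documented defaults):

```yaml
proxmox/vm-drp-nic: vmxnet3      # default is e1000; vmxnet3 suits ESXi 7.x
proxmox/vm-drp-os-type: l26      # Linux 2.6+ kernel (the default)
proxmox/vm-drp-storage: megasas  # SCSI megasas controller (the default)
```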

22.38.1.2.23. proxmox/vm-drp-storage-name

This param is used to define the Storage name that will be used to back the Lab DRP Virtual machines.

It defaults to local, which is automatically created as the default Storage location on a Proxmox system. This backs the Virtual Machines volumes in the filesystem of the local Proxmox node.

This Param uses the .ParamExpand method, which means that the operator can specify Golang templating constructs, which will be rendered uniquely based on the Machine context the task is running in. This allows for Storage types that are uniquely defined based on the Machine information (eg name, etc).
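Because this Param is run through .ParamExpand, Golang templating constructs are rendered per machine; a hypothetical sketch:

```yaml
# Renders uniquely per machine, e.g. "drp-storage-mach-01".
proxmox/vm-drp-storage-name: "drp-storage-{{ .Machine.Name }}"
```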

22.38.1.2.24. proxmox/vm-external-bridge

This param is used to define the external bridge used for Virtual Machines in various network config scripts for building up the Proxmox network topology.

22.38.1.2.25. proxmox/vm-external-subnet

This is an IP subnet that MUST BE routable inside your organization, to reach the Virtual Machines allocated on the Hypervisor. The Subnet will be added on the Hypervisor and each Virtual Machine will be provisioned with an IP address from this network.

IF this method is used, you generally will have to either SSH forward to the Proxmox Hypervisor, install a VPN service of some sort on the Hypervisor, or arrange for your external Networking devices (routers/switches) to route this IP block to the addressable interface of the Proxmox Hypervisor.

The default is 192.168.1.0/24.

If you wish to assign IP addresses to your VMs via a bridged interface on the Proxmox Hypervisor, DO NOT use this method, instead, use the Network configuration task named network-simple-bridge-with-addressing.

The subnet must be in CIDR Notation (e.g. 1.2.3.0/24), with the Network address set in the CIDR (e.g. the “.0” part). The Hypervisor will be assigned the first IP address in the network, which is used as the Default Route for the DRP Endpoint Virtual Machines.
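A sketch under the rules above (addresses illustrative): with the subnet below, the Hypervisor takes the first host address (10.10.10.1) and serves as the default route for the VMs:

```yaml
# The bridge the VM subnet is allocated on.
proxmox/vm-external-bridge: vmbr0
# Must be routable in your organization to reach the VMs.
proxmox/vm-external-subnet: "10.10.10.0/24"
```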

22.38.1.2.26. proxmox/vm-machine-nic

Must select one of the Proxmox supported NIC models from the list. The default is e1000. If you are running ESXi on top of Proxmox, you may need to change this (eg to vmxnet3 - especially for ESXi 7.x).

Additional documentation and details can be found on the Proxmox Wiki at https://pve.proxmox.com/wiki/Manual:_qm.conf.

22.38.1.2.27. proxmox/vm-machine-os-type

Must select one of the Proxmox supported OS Type models from the list. The default is l26 (Linux 2.6 or newer kernel).

Additional documentation and details can be found on the Proxmox Wiki at https://pve.proxmox.com/wiki/Manual:_qm.conf (search for ‘ostype: <l24’ to find them).

The list of supported OS Types is as follows:

  • other = unspecified OS

  • wxp = Microsoft Windows XP

  • w2k = Microsoft Windows 2000

  • w2k3 = Microsoft Windows 2003

  • w2k8 = Microsoft Windows 2008

  • wvista = Microsoft Windows Vista

  • win7 = Microsoft Windows 7

  • win8 = Microsoft Windows 8/2012/2012r2

  • win10 = Microsoft Windows 10/2016

  • l24 = Linux 2.4 Kernel

  • l26 = Linux 2.6 - 5.X Kernel

  • solaris = Solaris/OpenSolaris/OpenIndiana kernel

22.38.1.2.28. proxmox/vm-machine-storage

Must select one of the Proxmox supported Storage models from the list. The default is megasas.

Additional documentation and details can be found on the Proxmox Wiki at https://pve.proxmox.com/wiki/Manual:_qm.conf.

There are 3 types of controllers - ide, sata, and scsi. IDE and SATA do not have any additional configuration options. Anything else listed is a SCSI controller.

22.38.1.2.29. proxmox/vm-machine-storage-name

This param is used to define the Storage name that will be used to back the Lab target Virtual machines.

It defaults to local, which is automatically created as the default Storage location on a Proxmox system. This backs the Virtual Machines volumes in the filesystem of the local Proxmox node.

This Param uses the .ParamExpand method, which means that the operator can specify Golang templating constructs, which will be rendered uniquely based on the Machine context the task is running in. This allows for Storage types that are uniquely defined based on the Machine information (eg name, etc).

22.38.1.2.30. proxmox/vm-nat-bridge

This param is used to define the name of the Bridge that will be created for attaching Virtual Machines which should be NATed (Masqueraded). It will be attached to the primary bridge defined by the proxmox/vm-external-bridge Param.

NAT Masquerading will be set up for proxmox/vm-nat-subnet. There are no DHCP services set up automatically. Either statically assign IP addresses from that range, or enable a DRP Subnet for that range on the proxmox/vm-nat-bridge interface.

The default is vmnat0.

22.38.1.2.31. proxmox/vm-nat-subnet

The IP Subnet to NAT Masquerade for proxmox/vm-nat-bridge (defaults to vmnat0). There are no DHCP services set up automatically. Either statically assign IP addresses from that range, or enable a DRP Subnet for that range on the proxmox/vm-nat-bridge interface.

The default is 192.168.1.0/24.
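A sketch using a non-default subnet (values illustrative):

```yaml
proxmox/vm-nat-bridge: vmnat0
# No DHCP is configured for this range automatically; assign statically
# or enable a DRP Subnet on the NAT bridge interface.
proxmox/vm-nat-subnet: "172.16.100.0/24"
```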

22.38.1.3. profiles

The content package provides the following profiles.

22.38.1.3.1. EXAMPLE-proxmox-gamble

This is an EXAMPLE PROFILE for configuration of a Proxmox host. It defines a “flat topology” for the Virtual Machine network.

To use this as a starting point, clone the Profile and adjust the Param values to suit your needs. It is important you understand each of the Param configuration values defined in this profile.

Attach the new Profile to the target Proxmox Hypervisor node, then deploy it based on the instructions in the [Proxmox](../../../../developers/contents/proxmox) documentation.

22.38.1.4. stages

The content package provides the following stages.

22.38.1.4.1. flexiflow-debian-install

Allows for injecting custom tasks into the proxmox-debian-install workflow before finishing the install.

Set the Param proxmox/flexiflow-debian-install on the machine to a String array list of Tasks to execute. This gets set on the target Proxmox hypervisor(s) you are building.

22.38.1.4.2. proxmox-admin-account

Sets up the admin account in the PVE Realm with Administrator ACLs.

22.38.1.4.3. proxmox-create-storage

Allows for injecting custom storage creation tasks into the running workflow for setting up the Storage subsystems within a Proxmox node.

Set the Param proxmox/flexiflow-create-storage on the machine to a String array list of Tasks to execute. This gets set on the target Proxmox hypervisor(s) you are building.

Example:

```yaml
proxmox/flexiflow-create-storage:
  - proxmox-storage-setup-dir
  - proxmox-storage-setup-lvmthin
```

This would specify setting up a Directory type storage provider, followed by setting up an LVM Thin pool for storage.

Note that each injected Task will have its own requirements on Param settings to control the configuration of that task.

Storage configurations are usually managed by use of the proxmox/storage-config Param.

22.38.1.4.4. proxmox-debian-installer

This Stage does basic setup of the Proxmox VE repositories, sets some debconf selections for the Samba and Postfix packages, and then performs the install of the latest stable Proxmox VE version.

22.38.1.4.5. proxmox-drp-destroy-drp

Destroys the DRP service installed on the Hypervisor.

22.38.1.4.6. proxmox-drp-install

Installs DRP with an opinionated configuration on a DRP Endpoint.

22.38.1.4.7. proxmox-drp-provision-drp

Provisions the OS on the DRP VMs, from the installed DRP on the Hypervisor.

22.38.1.4.8. proxmox-generate-ssh-key

Creates SSH keys and stores them in the proxmox/data-profile named profile.

22.38.1.5. tasks

The content package provides the following tasks.

22.38.1.5.1. kvm-enable-nested

Determines if the machine is running Intel or AMD processors and sets up the nested virtualization capability for hypervisors to work inside virtual machines.

22.38.1.5.2. network-add-nat-bridge

This task creates a NAT bridge that will be attached to the proxmox/lab-drp-external-bridge defined bridge.

The NAT bridge will Masquerade for proxmox/lab-nat-subnet.

The template defined in the Param network-add-nat-bridge-template will be expanded in place in this script, then rendered to the Hypervisor. This allows for in-the-field custom configurations that may not have been encompassed in the default Template configuration of this content pack.

!!! warning

This method does not appear to NAT Masquerade traffic correctly. Verify the DRP Endpoints have external network connectivity with this method before relying on it. The post-up/down settings may need to be adjusted.

22.38.1.5.3. network-convert-interface-to-bridge

This task converts the system's Boot Interface to a bridge-enslaved connection. The bridge name must be defined by the Param proxmox/vm-external-bridge.

The VMs attach to the Bridge, and they will obtain an IP address either from DHCP or Static IP assignment from the same Layer 3 network that the Hypervisor utilizes.

The template defined in the Param network-convert-interface-to-bridge-template will be expanded in place in this script, then rendered to the Hypervisor. This allows for in-the-field custom configurations that may not have been encompassed in the default Template configuration of this content pack.

22.38.1.5.4. network-simple-bridge-with-addressing

This network configuration creates a bridge device on the hypervisor (typically vmbr0), which the DRP Endpoint Virtual Machines will be attached to.

An IP Subnet must be defined via the proxmox/vm-external-subnet Param, and it will be allocated on the interface defined in the Param proxmox/vm-external-bridge.

IF this method is used, you generally will have to either SSH forward to the Proxmox Hypervisor, install a VPN service of some sort on the Hypervisor, or arrange for your external Networking devices (routers/switches) to route this IP block to the addressable interface of the Proxmox Hypervisor.

The template defined in the Param network-simple-bridge-with-addressing-template will be expanded in place in this script, then rendered to the Hypervisor. This allows for in-the-field custom configurations that may not have been encompassed in the default Template configuration of this content pack.

22.38.1.5.5. network-simple-single-bridge-with-nat

This network configuration uses the proxmox/vm-external-bridge to directly attach the Virtual Machines to, and assumes that IP addressing for the VMs will be provided by proxmox/vm-external-subnet.

This creates a secondary IP interface on the bridge device. In addition, post-up rules will be added to NAT translate outbound traffic for the VMs.

To connect to the VMs, you generally will have to either SSH forward to the Proxmox Hypervisor, install a VPN service of some sort on the Hypervisor, or arrange for your external Networking devices (routers/switches) to route this IP block to the addressable interface of the Proxmox Hypervisor.

22.38.1.5.6. proxmox-admin-account

Adds the admin account with Administrator role and rights to the / resources.

22.38.1.5.7. proxmox-create-storage

Creates the local-lvm storage on an existing Proxmox VE server if it doesn't yet exist.

22.38.1.5.8. proxmox-debconf-set-selections

This task provides the Debian Package preset configuration input values needed to ensure automated installation of the samba and postfix packages. It also allows the operator to preseed package configurations for any other package being installed.

Set the proxmox/debconf-selections-template Param to the name of your custom template, which must conform to the debconf-set-selections structure.

The template will be saved on the Machine under /root/proxmox-debconf-set-selections and read in prior to package installation.

22.38.1.5.9. proxmox-debian-installer

This task sets up and installs the latest stable Proxmox VE on top of an already installed Debian 10 (Buster) system. This can be run between the finish-install and complete stages of the RackN provided debian-base workflow.

This is also used in the proxmox-debian-installer Workflow which installs Debian 10 (Buster) first.

22.38.1.5.10. proxmox-drp-destroy-drp

Destroys the DRP service installed on the Hypervisor.

22.38.1.5.11. proxmox-drp-install

This is a very opinionated and quick DRP install on the Proxmox Hypervisor. Future iterations should utilize the Multi Site Manager to control the DRP endpoint.

22.38.1.5.12. proxmox-drp-provision-drp

Provisions the OS for the DRP VMs on the Proxmox host, via the DRP installed on the hypervisor.

22.38.1.5.13. proxmox-generate-ssh-key

This is a very opinionated and quick SSH Key generation task. It will build ed25519 elliptic curve Public and Private key halves.

The keys will be stored in the profile specified by the Param proxmox/data-profile.

Once the lab is built, the operator can retrieve the Private Key half and use that in their ssh-agent, or as an ssh -i keyfile … command line argument.

!!! warning

This task will overwrite existing Param values, possibly losing previously generated keys.

22.38.1.5.14. proxmox-iso-modify

The Proxmox ISO is not installable via PXE by default. However, with a relatively simple modification, it can be PXE deployed. This task rebuilds the ISO as a Tar GZ (.tgz) which stages the unmodified ISO image as /proxmox.iso in the boot/ directory, along with the Kernel and InitRD pieces for PXE bootstrap.

22.38.1.5.15. proxmox-storage-setup-dir

This task is injected into a running workflow via the Stage proxmox-create-storage. You must set the Param proxmox/flexiflow-create-storage to include this task name for it to be added to the system.

This Task creates the dir storage on an existing Proxmox VE server if it doesn't yet exist. Note that the default Storage type is lvmthin, which uses a full block device; the dir type allows use of a directory to back the VM and Container images.

The proxmox/storage-config Param defines the configuration to use for all storage types, including dir type.

An example configuration for this task:

```yaml
proxmox/storage-config:
  dir:
    local-images:
      path: /var/lib/images
      content: images,iso

    local:
      path: /var/lib/vz
      content: iso,vztmpl,backup

    backup:
      path: /mnt/backup
      content: backup
      maxfiles: 7
```

This creates 3 directory structures for storing different content types in the existing filesystem. For documentation on configuration values, please see https://pve.proxmox.com/wiki/Storage.

Config values in YAML/JSON stanzas match the Proxmox configuration values.

22.38.1.5.16. proxmox-storage-setup-lvmthin

This task is injected into a running workflow via the Stage proxmox-create-storage. You must set the Param proxmox/flexiflow-create-storage to include this task name for it to be added to the system. Note that this task is defined as the default storage type to set up on the Proxmox node.

This Task creates the local-lvm storage on an existing Proxmox VE server if it doesn't yet exist.

The proxmox/storage-config Param defines the configuration to use for all storage types. However, two (DEPRECATED) legacy params can be used for backwards-compatible configuration of this storage type. They are described below.

The proxmox/storage-device Param controls the Block device (disk) that will be used to back the Virtual Machines. It will take complete control of the block device. By default it will use /dev/sdb if not otherwise specified.

The proxmox/storage-name Param defines the LVM Volume name that will be set. If the LVM Volume name exists already, then the task will exit, assuming that the Volume has already been set up for use previously.

An example proxmox/storage-config configuration for this task:

```yaml
proxmox/storage-config:
  lvmthin:
    local-lvm:
      name: local-lvm
      device: /dev/sdb
      vgname: pve
      thinpool: data
      content: rootdir,images
      size: 95%FREE
```

This creates 1 LVM Thinpool for storing different content types using the specified device and names. For documentation on configuration values, please see https://pve.proxmox.com/wiki/Storage.

Config values in YAML/JSON stanzas match the Proxmox configuration values.

22.38.1.6. workflows

The content package provides the following workflows.

22.38.1.6.1. proxmox-debian-install

Installs Debian 10 (Buster) via standard RackN BootEnv install, using preseed/package based (Debian Installer, d-i) method.

Once install completes, while still inside Debian Installer, update the system, add the Proxmox repositories, provide a minimal preseed set of answers (for Samba and Postfix packages), and then do a Proxmox install of the latest stable version.

The special stage flexiflow-debian-install is added to this workflow. By setting the Param proxmox/flexiflow-debian-install on your target machine, the individually listed Tasks will be injected into the Workflow dynamically.

This is used to flexibly inject network config/reconfig Tasks to allow for dynamic use of the workflow. For example, setting the Param proxmox/flexiflow-debian-install as follows (JSON example):

```json
["network-convert-interface-to-bridge"]
```

This will inject the named task to modify the network by converting the Boot interface to be enslaved by the Bridge for Virtual Machines.

Another example (again, in JSON format):

```json
["network-convert-interface-to-bridge","network-add-nat-bridge"]
```

This will perform the primary boot interface conversion to be enslaved by the bridge, but also bring up a NAT Masquerade bridge to attach machines to.

22.38.1.6.2. proxmox-only-install

Starts the Proxmox install. It assumes the install is on an existing/already built Debian 10 (Buster) system: update the system, add the Proxmox repositories, provide a minimal preseed set of answers (for the Samba and Postfix packages), and then do a Proxmox install of the latest stable version.

The special stage flexiflow-debian-install is added to this workflow. By setting the Param proxmox/flexiflow-debian-install on your target machine, the individually listed Tasks will be injected into the Workflow dynamically.

This is used to flexibly inject network config/reconfig Tasks to allow for dynamic use of the workflow. For example, setting the Param proxmox/flexiflow-debian-install as follows (JSON example):

```json
["network-convert-interface-to-bridge"]
```

This will inject the named task to modify the network by converting the Boot interface to be enslaved by the Bridge for Virtual Machines.

Another example (again, in JSON format):

```json
["network-convert-interface-to-bridge","network-add-nat-bridge"]
```

This will perform the primary boot interface conversion to be enslaved by the bridge, but also bring up a NAT Masquerade bridge to attach machines to.