22.10. cloud-wrappers - Cloud Wrappers

The following documentation is for Cloud Wrappers (cloud-wrappers) content package at version v4.12.0-alpha00.78+gc037aaa40eb3ad853690ce178f9ab8a5bae4c436.

This library contains items that help Digital Rebar manage machines on public clouds. It uses Terraform tasks to create/delete machines and Ansible tasks to join the machines and install the Digital Rebar runner. Once the runner starts, it will collect cloud-specific data if a Metadata API is available.

TL;DR: cloud-provision uses the v4.8 Resource Brokers to create machines on Terraform-accessible platforms and attach them to Digital Rebar.

## Requirements

### Inbound Access

The Digital Rebar Server must be at a location that is accessible to the machines being provisioned. This is required because the machines must be able to download the join-up script from the server using port 8090.
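A quick reachability check (a sketch; drp.example.com is a placeholder for your Digital Rebar server) can be run from the network the machines will use:

```sh
# confirm the join-up port answers from the provisioning network
nc -zv drp.example.com 8090
```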

### Outbound Access

Outbound access is NOT required unless you are using a cloud provider that requires SSH into the newly created machines.

As of v4.8, none of the major cloud providers (AWS, Azure, Google, Linode, Digital Ocean) require SSH to join-up.

### Catalog Items

The Cloud Wrapper requires Contexts because it uses the Runner and Terraform Contexts. If SSH is required, then the Ansible Context is also used.

## Setting Up Cloud Brokers

When you create a Cloud Broker, you must set Security credentials for each cloud.

The [cloud-profiles script](https://gitlab.com/rackn/provision-content/-/blob/v4/tools/cloud-profiles.sh) in the RackN provision-content repo can be used to create the required credential profiles from local configuration information.

### AWS

  • aws/access-secret

  • aws/access-key-id

Additional values, e.g. region, image and instance type, have safe defaults but should be reviewed.
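As a minimal sketch, these credentials can be stored in a profile with drpcli (the profile name and values here are placeholders):

```sh
# create a profile carrying the AWS credential Params; attach it to the
# broker afterwards (placeholder values shown)
drpcli profiles create '{
  "Name": "aws-credentials",
  "Params": {
    "aws/access-key-id": "AKIA...",
    "aws/access-secret": "REPLACE_WITH_SECRET"
  }
}'
```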

### Google

  • google/credential - a copy of the contents of the JSON file Google provides

Additional values, e.g. region, image and instance type, have safe defaults but should be reviewed.
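A minimal sketch for embedding the JSON file contents as the param value, assuming jq 1.6+ and a hypothetical profile name:

```sh
# --rawfile reads service-account.json as a single string, so the file
# contents end up as an escaped string value for google/credential
jq -n --rawfile cred service-account.json \
  '{Name: "google-credentials", Params: {"google/credential": $cred}}' |
  drpcli profiles create -
```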

### Libvirt

  • libvirt/uri

You can additionally provide libvirt/ssh-key if your libvirt instance is not local.

### Linode

  • linode/token

Additional values, e.g. region, image and instance type, have safe defaults but should be reviewed.

### Proxmox

See the Profile documentation for resource-proxmox-cloud for more detailed use of the Proxmox Resource Broker. Specifically, new clusters WILL fail with the default configuration, and the operator MUST set alternative values for broker/set-pipeline and broker/set-workflow on the Cluster.

The following Resource Broker Params are required for Proxmox use (a creation sketch follows the list):

  • proxmox/node

  • proxmox/user

  • proxmox/password
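A minimal creation sketch (placeholder node, user, and password; the profile name is hypothetical):

```sh
drpcli profiles create '{
  "Name": "proxmox-credentials",
  "Params": {
    "proxmox/node": "pve01",
    "proxmox/user": "root@pam",
    "proxmox/password": "REPLACE_WITH_PASSWORD"
  }
}'
```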

### Optional Values

When possible, the machine on the cloud provider is given the name of the machine in Digital Rebar.

The reference terraform plan will create tags on the cloud provider based on the assigned profiles. It also creates one called “digitalrebar.” This can be handy to find or manage the machines on the cloud provider.
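For example, a sketch using the AWS CLI (region and credential flags omitted) to list instances carrying that tag:

```sh
# list instance IDs tagged with digitalrebar
aws ec2 describe-instances \
  --filters "Name=tag-key,Values=digitalrebar" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text
```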

22.10.1. Object Specific Documentation

22.10.1.1. blueprints

The content package provides the following blueprints.

22.10.1.1.1. broker-start-agents-via-ansible-joinup

Will perform the task that is used by non-cloud-init brokers.

Requires running from a Broker!

22.10.1.1.2. cloud-awscli-reconcile-instances

Designed to be used with one or more AWS CLI brokers, this blueprint invokes the AWS CLI describe-instances call in the aws-scan-instances task to collect information.

If unknown machines are found, they are added to Digital Rebar.

Operational Note: the range of AWS infrastructure that can be discovered is limited by the API key and region used when this is called. This will likely require multiple AWSCLI brokers to be created for full coverage.

Unlike Terraform drift detection, this process looks outside of known state for resources.
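For reference, the underlying call looks roughly like this sketch (us-west-2 is only an example region):

```sh
# visibility is bounded by the credentials and region used here
aws ec2 describe-instances --region us-west-2 \
  --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress,State.Name]' \
  --output table
```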

22.10.1.1.3. cloud-cluster-drift-detection

Designed to be used on a cron trigger, this blueprint passes “Plan” into the normal terraform-apply task via the Cluster. When running terraform plan, the task will error if the known state does not match the discovered state.

This allows operators to create a regular scan for clusters to ensure that they have not been changed outside of Digital Rebar Terraform management.

This is limited to the resources that were created by Terraform. To find instances that exist OUTSIDE of Terraform, use a cloud CLI task such as aws-scan-instances.
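To illustrate the mechanism behind the plan-based drift check, terraform itself can signal changes through an exit code (a sketch, not the exact task invocation):

```sh
# -detailed-exitcode: 0 = no drift, 1 = error, 2 = changes detected
terraform plan -detailed-exitcode
echo "exit code: $?"
```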

22.10.1.2. params

The content package provides the following params.

22.10.1.2.1. aws/access-key-id

The ID needed to use the AWS secret

If you have the aws cli installed, you can retrieve this key by reading ~/.aws/credentials and using the aws_access_key_id value.
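For instance, assuming a configured AWS CLI:

```sh
# read the configured key id directly
aws configure get aws_access_key_id
# or inspect the credentials file itself
grep aws_access_key_id ~/.aws/credentials
```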

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.2. aws/ami-id

Provision AWS O/S Image

Default is the Amazon Linux image (as of 11/11/21) for us-west-2

22.10.1.2.3. aws/inspect

Collects the JSON output from AWS describe-instances call.

This is effectively an AWS API version of gohai output.

22.10.1.2.4. aws/instance-type

The type of resource assigned by the cloud provider

22.10.1.2.5. aws/region

Provisioning to Region for AWS

22.10.1.2.6. aws/secret-key

The token required by the cloud provider to act against the API

If you have the aws cli installed, you can retrieve this key by reading ~/.aws/credentials and using the aws_secret_access_key value.

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.7. aws/security-groups

Comma-separated list of security groups to be applied during Terraform plan construction

Only the list values are used; the enclosing [] are added by the cloud-provision-aws-instance.tf.tmpl template.

The default of aws_security_group.digitalrebar_basic.name is created by the default AWS template cloud-provision-aws-security-group.tf.tmpl

22.10.1.2.8. aws/vpc

UUID of the target VPC from AWS (e.g.: vpc-01ab234cde5d67890)

If not defined, uses the account’s AWS default VPC

22.10.1.2.9. azure/app_id

App ID from:

```sh
azure_subscription_id=$(az account list | jq -r '.[0].id')
az account set --subscription="$azure_subscription_id"
azure_resource=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/$azure_subscription_id")
```

22.10.1.2.10. azure/image

Image Information for Azure, including:

  • publisher

  • offer

  • sku

  • version

To find images using the Azure CLI: az vm image list -f Ubuntu --all

22.10.1.2.11. azure/password

API Password

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.12. azure/region

Region

22.10.1.2.13. azure/security-group

Name of the security group id to be applied during Terraform plan construction

The value is used exactly as provided in the azurerm_network_interface_security_group_association resource block, so it should include the .id or other key information.

The default of azurerm_network_security_group.security_group.id is created by the default Azure template cloud-provision-azure-app.tf.tmpl

22.10.1.2.14. azure/size

Size of Azure instance

To determine available sizes, try az vm list-sizes --location westus | jq .[].name

22.10.1.2.15. azure/subscription_id

Subscription ID via az account list

22.10.1.2.16. azure/tenant

API Tenant from:

```sh
azure_subscription_id=$(az account list | jq -r '.[0].id')
az account set --subscription="$azure_subscription_id"
azure_resource=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/$azure_subscription_id")
```

22.10.1.2.17. cloud/ansible-joinup

Indicates that the cloud-provision process should inject ansible-join-up into the created task lists.

This is used when the Terraform Provider cannot use a cloud-init type join.

See the resource-google-cloud for an example.

22.10.1.2.18. cloud/dr-install

Internal operations flag used to identify whether cloud provision is used. This is set on the user’s behalf in the cloud-site-* stages.

22.10.1.2.19. cloud/instance-id

The ID reference from cloud provider

22.10.1.2.20. cloud/instance-type

The type of resource assigned by the cloud provider

22.10.1.2.21. cloud/join-method

This Param defines how a newly created Cloud Instance should attempt to join to a DRP Endpoint for subsequent management.

The defined method will alter the Terraform plan file to use different stanzas for the appropriately selected join method. These are often done as a provisioner “local-exec” section.

Note that join-methods should generally have a method for removing the Machine object from the DRP Endpoint as a cleanup step. Typically this is done within the provisioner “local-exec” with a conditional clause like:

  • when = destroy

The following methods are supported:

  • synchronize: Utilizes the syncronize.sh script which uses parsed state information from the Instance creation to pre-create a DRP Machine object.

  • discovery: Uses the default path of allowing a Machine to boot in to the defined unknownBootenv and subsequent defaultWorkflow, which then should create the Machine object if it doesn’t already exist (based on the Fingerprint algorithms).

22.10.1.2.22. cloud/placement/availability-zone

The location of resource assigned by the cloud provider

22.10.1.2.23. cloud/private-ipv4

Private Address assigned by the Cloud Provider

Uses terraform/map-private-ip-address to determine the value. May also be set from cloud-inventory.

22.10.1.2.24. cloud/provider

The cloud provider detected by join-up script

Custom types are supported by adding Terraform plan template ‘cloud-provision-[provider].tf.tmpl’

Implemented types:

  • aws (Amazon Web Services)

  • google (Google Compute Engine)

  • linode

  • azure (Microsoft Cloud)

  • digitalocean

  • pnap (Phoenix NAP)

  • oracle

Expand this list as new types are added!

22.10.1.2.25. cloud/public-hostname

Hostname assigned by the Cloud Provider

22.10.1.2.26. cloud/public-ipv4

Address assigned by the Cloud Provider

Determined by cloud-inventory

22.10.1.2.27. digitalocean/image

Provision Digital Ocean O/S Image

Retrieve the list of images:

```sh
curl -X GET -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  "https://api.digitalocean.com/v2/images" | jq '.images[].slug'
```

22.10.1.2.28. digitalocean/key-fingerprints

The fingerprint(s) of the SSH key(s) registered with Digital Ocean that should be installed in the Droplet

WARNING: these are NOT the SSH keys created by the cluster automation. They must be uploaded into Digital Ocean and will be installed based on the stored fingerprints.

This is an array so multiple fingerprints can be added.

22.10.1.2.29. digitalocean/region

Provisioning to Region for Digital Ocean

List of regions:

```sh
curl -X GET -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DO_KEY" \
  "https://api.digitalocean.com/v2/regions" | jq '.regions[].slug'
```

22.10.1.2.30. digitalocean/size

Provision Digital Ocean Droplet Size

Retrieve the list of sizes:

```sh
curl -X GET -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  "https://api.digitalocean.com/v2/sizes" | jq '.sizes[].slug'
```

22.10.1.2.31. digitalocean/token

The token required by the cloud provider to act against the API

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.32. google/boot-disk-image

Provision Google O/S Image

22.10.1.2.33. google/credential

The token required by the cloud provider to act against the API

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.34. google/instance-type

The type of resource assigned by the cloud provider

22.10.1.2.35. google/project-id

NO DEFAULT! You must supply a project name.

Provisioning to Project for Google Cloud

22.10.1.2.36. google/region

Provisioning to Region for Google Cloud

22.10.1.2.37. google/zone

Provisioning to Zone for Google Cloud

22.10.1.2.38. libvirt/memory

Memory to assign to an instance.

22.10.1.2.39. libvirt/network

Libvirt network to assign to an instance.

22.10.1.2.40. libvirt/pool

Libvirt pool to use for volume storage.

22.10.1.2.41. libvirt/ssh-key

SSH Private Key to use with a libvirt connection.

22.10.1.2.42. libvirt/uri

The libvirt URI must be a valid URI:
  • qemu:///system

  • qemu+ssh://user@host:22/system

Do not add any parameters. If an ssh-key is specified, the parameters required to use it will be supplied automatically.

See https://libvirt.org/uri.html for usage examples.
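A quick way to validate a URI before handing it to the broker (a sketch; user@host is a placeholder):

```sh
# lists all domains if the URI (and any SSH access) is valid
virsh -c qemu+ssh://user@host:22/system list --all
```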

22.10.1.2.43. libvirt/vcpu

Number of vCPUs to assign to an instance.

22.10.1.2.44. libvirt/volume-name

Optional: use a base image for the Libvirt instance

22.10.1.2.45. libvirt/volume-size

Set a custom volume size for a Libvirt instance in GB. If volume-name or volume-url is specified, this is optional and the default volume size will be that of the source image.

22.10.1.2.46. libvirt/volume-url

Use an image from a URL for the Libvirt instance

22.10.1.2.47. linode/instance-image

Provision Linode O/S Image

To generate, use curl https://api.linode.com/v4/images | jq '.data[].id'

22.10.1.2.48. linode/instance-type

Provision Linode allocation size

Retrieve with curl https://api.linode.com/v4/linode/types | jq '.data[].id'

22.10.1.2.49. linode/region

Provisioning to Region for Linode

22.10.1.2.50. linode/root-password

Password for Linodes. If not set, it should not be added to the TF plan.

22.10.1.2.51. linode/token

The token required by the cloud provider to act against the API

Make sure the Token has the following authority:
  1. create Linodes

  2. create Stackscripts

  3. create Domains

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.52. mist/api-token

The ID needed to use the Mist.io API

22.10.1.2.53. oracle/availability-domain

Data Center Location

Must be in the oracle/region

22.10.1.2.54. oracle/compartment-id

Compartment for the systems being provisioned

If missing, use the oracle/tenancy

22.10.1.2.55. oracle/fingerprint

The fingerprint required by the cloud provider to act against the API

Consult ~/.oci/config file for this value

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.56. oracle/private-key

The private key required by the cloud provider to act against the API.

The private key is required for Terraform to correctly validate operations from the Digital Rebar server. The private key is registered with the Oracle cloud.

Consult ~/.oci/config file for location of the PEM file

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.57. oracle/region

Provisioning to Region for Oracle

Consult the ~/.oci/config file for this value

22.10.1.2.58. oracle/shape

Sizing information for Oracle VMs

22.10.1.2.59. oracle/source-id

Machine Image information

Specific to a region

22.10.1.2.60. oracle/subnet-id

Subnet to use for Oracle network access

Must be supplied by the operator. In the future, this could be created by Terraform.

22.10.1.2.61. oracle/tenancy-ocid

The tenancy ocid required by the cloud provider to act against the API

Consult the ~/.oci/config file for this value

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.62. oracle/user-ocid

The user ocid required by the cloud provider to act against the API

Consult the ~/.oci/config file for this value

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.63. pnap/client-id

The ID required by the cloud provider to act against the API

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.64. pnap/client-secret

The token required by the cloud provider to act against the API

You can also use the [cloud broker install script](https://gitlab.com/rackn/provision-content/-/raw/v4/tools/cloud-profiles.sh) to create resource brokers from local configuration information.

22.10.1.2.65. pnap/location

Provision PNAP location

22.10.1.2.66. pnap/os

Provision Phoenix NAP O/S Image from available list

22.10.1.2.67. pnap/type

Provision PNAP allocation size

22.10.1.2.68. proxmox/api-port

The Proxmox API Port of the node or cluster that the VMs or Containers will be launched on.

Defaults to port 8006. Should be expressed as a Number, and not a string (do NOT put single/double quotes around the number).

22.10.1.2.69. proxmox/api-url

The URL endpoint of the Proxmox API, for either a single node or cluster.

example:

  • https://proxmox01.example.com:8006/api2/json

Alternatively, set the proxmox/node and/or proxmox/api-port Params to dynamically expand this URL reference.

The default value is set to:

  • 'https://{{ .Param "proxmox/node" }}:{{ .Param "proxmox/api-port" }}/api2/json'

This will expand to the following if proxmox/node and/or proxmox/api-port are not also set:

  • https://127.0.0.1:8006/api2/json

22.10.1.2.70. proxmox/node

The Proxmox node or cluster that the VMs or Containers will be launched on.

This MUST be the internal Proxmox name of the standalone Proxmox node or the cluster VIP on which to place the VM/Container. By default, API port 8006 will be used unless proxmox/api-port is also set to a value. This name must also be resolvable to the IP address of the Proxmox node or cluster VIP.

22.10.1.2.71. proxmox/otp

The two-factor authentication code, if in use. This is the OTP OAUTH token.

22.10.1.2.72. proxmox/parallel_create_limit

The maximum number of parallel create processes allowed. The Proxmox system defined default is 4.

This changes how many VMs or Containers can be created in parallel in a single API call.

22.10.1.2.73. proxmox/password

Proxmox node or cluster authorization password. This defaults to the RackN default root user password for most operating systems, which is RocketSkates.

22.10.1.2.74. proxmox/provider-source

The Terraform Provider source location for the provider plugin.

The default value is set to:

  • telmate/proxmox

!!! note

This must be a provider that is compatible with the Telmate/proxmox implementation.

22.10.1.2.75. proxmox/provider-version

The Terraform Provider Proxmox plugin version to use. This should be in the form of a valid Provider Version string. See the Terraform Provider documentation for more information.

The default value is set to:

  • >= 2.9.11

!!! note

This must be a provider that is compatible with the Telmate/proxmox implementation, found at <https://registry.terraform.io/providers/Telmate/proxmox/latest>.

The Provider must also be version v2.9.11 or greater to support the VM PXE boot option.

22.10.1.2.76. proxmox/proxy-server

The URL of a Proxy server to send all API calls through. This is generally used for debugging with something like mitmproxy.

example:

  • http://localhost:8080

There is no default value.

22.10.1.2.77. proxmox/tls-insecure

By default most CLI or API tools will not accept the self-signed TLS certificate. Setting this to true will enable CLI/API use of the Proxmox host with a self-signed certificate.

The default value is to *not* trust the self-signed certificate.

22.10.1.2.78. proxmox/user

The username to use when authenticating to the Proxmox node or cluster API endpoint. This defaults to the root user using the PAM Realm module (root@pam).

Set this value appropriately if a different authentication Realm (eg Proxmox VE Auth, or pve) is used.

Example:

  • admin@pve

22.10.1.2.79. vsphere/allow-unverified-ssl

By default most CLI or API tools will not accept the self-signed TLS certificate. Setting this to true will enable CLI/API use of the vSphere host with a self-signed certificate.

The default value is to *not* trust the self-signed certificate (eg false).

22.10.1.2.80. vsphere/cluster

The VMware vSphere cluster to operate on.

Defaults to cluster-01.

22.10.1.2.81. vsphere/datacenter

The VMware vSphere datacenter to operate on.

Defaults to dc-01.

22.10.1.2.82. vsphere/datastore

The VMware vSphere datastore to operate on.

Defaults to datastore1 (the default created by DRP ESXi OS provisioning on underlying ESXi nodes).

22.10.1.2.83. vsphere/network

The VMware vSphere network to operate on.

Defaults to VM Network.

22.10.1.2.84. vsphere/password

VMware vSphere server authorization password to use. This defaults to RocketSkates123^, which is based on the RackN default, but contains the necessary complexity requirements for vSphere vCenter (VCSA) systems.

22.10.1.2.85. vsphere/provider-source

The Terraform Provider source location for the provider plugin.

The default value is set to:

  • hashicorp/vsphere

!!! note

This must be a provider that is compatible with the hashicorp/vsphere implementation.

22.10.1.2.86. vsphere/provider-version

The Terraform Provider vSphere plugin version to use. This should be in the form of a valid Provider Version string. See the Terraform Provider documentation for more information.

The default value is set to:

  • >= 2.2.0

!!! note

This must be a provider that is compatible with the hashicorp/vsphere implementation, found at <https://registry.terraform.io/providers/hashicorp/vsphere>.

22.10.1.2.87. vsphere/server

The VMware vSphere node to connect to.

Either a vCenter/VCSA or ESXi host. There is no default value provided and this value must be set to a valid Hostname or IP Address of a vSphere resource to connect to.

22.10.1.2.88. vsphere/user

The username to use when authenticating to the vSphere server API endpoint. This defaults to administrator@vsphere.local.

Example:

  • administrator@domain.example.com

22.10.1.2.89. vsphere/vm-disk-label

The VMware vSphere Virtual Machine disk label to set.

Defaults to disk01.

22.10.1.2.90. vsphere/vm-disk-size

The VMware vSphere Virtual Machine disk size to set for the disk with the label specified in vsphere/vm-disk-label. Values are in GigaByte size (eg 20 is 20 GigaByte)

Defaults to 20 (20 GB); must be an Integer value greater than 0 (zero).

22.10.1.2.91. vsphere/vm-memory

The VMware vSphere Virtual Machine memory size, value is in Megabytes (eg 2048 for 2 GB memory).

Defaults to 2048 (2 GB); must be an Integer value greater than 0 (zero).

22.10.1.2.92. vsphere/vm-num-cpus

The VMware vSphere Virtual Machine CPU count.

Defaults to 2; must be an Integer number greater than 0 (zero).

22.10.1.3. profiles

The content package provides the following profiles.

22.10.1.3.1. bootstrap-cloud-wrappers

Bootstrap Digital Rebar server for advanced operation

  • Cloud Wrappers

  • Downloads context containers

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

22.10.1.3.2. proxmox-EXAMPLE-gamble

Manages Proxmox Instances via Resource Brokers - currently only supports Proxmox VMs via PXE boot, and not cloud image deploy.

22.10.1.3.3. resource-aws-cli

Manages AWS CLI via Resource Brokers

Used to create a Resource Broker to run the AWS CLI.

22.10.1.3.4. resource-aws-cloud

Manages AWS Instances via Resource Brokers

Sets the rsa/key-user to the AWS default of ec2-user

22.10.1.3.5. resource-azure-cloud

Manages Azure Instances via Resource Brokers

22.10.1.3.6. resource-digitalocean-cloud

Manages Digital Ocean Instances via Resource Brokers

!!! note

terraform-apply retry is a workaround for unreliability in the Digital Ocean provider. Generally, the DO provider creates resources well but may have trouble updating existing resources (as of Dec 2021), failing with the error: 401 … Unable to authenticate you. There are several defensive changes to the plans to work around these issues.

22.10.1.3.7. resource-google-cloud

Manages Google (GCE) Instances via Cloud Resource Broker

22.10.1.3.8. resource-libvirt

Manages Libvirt Domains via Resource Broker

22.10.1.3.9. resource-linode-cloud

Manages Linode Instances via Resource Brokers

!!! note

WORKAROUND: task-retry value included to address Terraform Provider issue related to deleting all resources

22.10.1.3.10. resource-oracle-cloud

Manages Oracle Instances via Resource Brokers

This broker relies on the private/public key pair for authorization in the Oracle cloud. You will need to create a key pair and upload the private key under oracle/private-key.

!!! note

SSH user is opc!

22.10.1.3.11. resource-pnap-cloud

Manages Phoenix NAP Instances via Resource Brokers

22.10.1.3.12. resource-proxmox-cloud

Manages Proxmox Instances via Resource Brokers - currently only supports Proxmox VMs via PXE boot, and not Cloud Image deploy (eg deployed from “templates”).

There are multiple scenarios that the Cluster create process needs to support with regard to Virtual Machines created on a Proxmox cluster. Depending on the desired behavior for newly created Cluster resources (eg machines), you will need to specify the cluster behaviors appropriately.

Potentially desired behaviors:

  • Machines PXE boot in to Discovery (sledgehammer), and wait for interactive input

  • Machines PXE boot and are automatically assigned a Pipeline for installation (either by setting the Pipeline in the Cluster configuration, or via a Classifier that performs the same action)

  • Machines are created from a “Cloud Image” similar to a Public Cloud behavior

It is important to note that the cluster/wait-filter is used to handle the status management of the synchronization between the DRP based Machine object, and the backing resource that is created.

The PXE boot behaviors are supported with this Resource Broker, but each behavior requires a different Filter and/or Workflow setting to correctly instantiate resources to a successful state. This must be done on a cluster-by-cluster basis and you cannot mix and match the behaviors on the same cluster.

The “Cloud Image” behavior is not supported in the DRP Proxmox Cluster capability yet, and as such has no defined Filter specifications. Presumably the behavior will be supported by the default Filter specification, but it has not yet been tested.

Machines that PXE boot and are assigned an OS install Pipeline will very often end up with a Runnable: false condition. The default defined Filter will cause a Cluster provision failure in this case.

The cluster/wait-filter is required for the synchronization between the DRP Machine object, and the backing resource that is created by the Resource Broker.

PXE boot with Install Pipeline

Machines that are discovered via the standard Sledgehammer discovery process, and then transitioned manually by an operator to an OS Install, require a different Filter. This Resource Broker sets the Param cluster/wait-filter to the following:

  • And(Workflow=Eq(universal-runbook),WorkflowComplete=Eq(true),BootEnv=Eq(local))

!!! note

This assumes that the Pipeline ends with the final chained workflow being set to universal-runbook. If the Pipeline uses a different Workflow as the final/finishing workflow, this Filter needs to be adjusted to reference the last Workflow in the chain.

You MUST also set the Pipeline and Workflow Params on the Cluster; an example which defines a Photon Linux 4 OS install would look like:

  • EXAMPLE

  • broker/set-pipeline: universal-application-photon4

  • broker/set-workflow: universal-discover

The universal-discover workflow is usually the correct starting point for the OS install pipelines, as defined by the universal/workflow-chain-map (or via some other override Param value being set).

!!! warning

If the cluster/wait-filter is NOT set as defined above, the cluster-provision task WILL fail when the machine chains in to the OS installer workflow.

PXE boot and Discovery Only

If the intent is to have Machines created that stop and wait in Sledgehammer, without a zero-touch transition to an OS install Pipeline, you MUST leave the Filter for the Cluster at the default value, which is the following:

  • Or(Runnable=Eq(false),WorkflowComplete=Eq(true))

AND you must set broker/set-pipeline and broker/set-workflow Params to:

  • broker/set-pipeline: universal-application-discover

  • broker/set-workflow: universal-discover

!!! note

Normally, cluster configuration values should be changed on the dynamically created Profile with the same name as the cluster.
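For example, a sketch of setting these Params on the cluster's Profile (my-cluster is a placeholder; this assumes drpcli's param set syntax applies to profiles):

```sh
# bare values are passed through as strings by drpcli's "to" argument
drpcli profiles set my-cluster param broker/set-pipeline to universal-application-discover
drpcli profiles set my-cluster param broker/set-workflow to universal-discover
```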

!!! warning

If the backing Machine object encounters any Workflow failures, the cluster provision operation will fail. In many cases, if the cluster-provision Task has not yet failed and the backing Machine can be corrected (eg after getting stuck in centos-only-repos), the cluster-provision may still succeed.

22.10.1.3.13. resource-vsphere-cloud

Manages VMware vSphere Instances via DRP Resource Brokers.

Initial release of the vSphere Resource Broker only provides support for empty VM creation and subsequent installation via standard PXE network scripted installs.

Future versions of the Resource Broker will support OVA machine creation and Cloning of existing Template Virtual Machines.

For the PXE path to work, the following Params MUST be set on the Cluster at creation time.

  • broker/set-pipeline: universal-application-discover

  • broker/set-workflow: universal-discover

Otherwise, the Virtual Machine will never be served the Sledgehammer BootEnv when it boots.

These Params are set in this Profile; however they end up being overridden by the Params defined in the universal-application-broker-base definition.

22.10.1.4. tasks

The content package provides the following tasks.

22.10.1.4.1. aws-scan-instances

Uses AWS CLI content to scan AWS inventory and find instances that do not exist in Digital Rebar, then creates machines with information for the unregistered instances.

Operational Note: the range of AWS infrastructure that can be discovered is limited by the API key and region used when this is called.

When detecting drift (machines in AWS that are not known to Digital Rebar), scan will raise an aws.drift.[cluster name] event with the created machine ID and AWS instance ID.

Designed to work with the awscli-runner context

22.10.1.4.2. cloud-validate

Validates that the right Params have been set for cloud scenarios and provides operator-friendly feedback in the Machine.Description.

Maintainer Notes: Remember to synchronize with the cloud/provider enum!

22.10.1.4.3. inject-ansible-joinup

For cloud providers that do not support injecting a start-up script, add the Ansible Join Up and start the Ansible context.

Will skip if cloud/ansible-joinup is false. Requires that the rsa-public-key is installed for SSH on the provisioned machine.

22.10.1.4.4. mist-io-sync

Make sure instance is registered with Mist.io

22.10.1.5. triggers

The content package provides the following triggers.

22.10.1.5.1. cloud-drift-alert

Catches *.drift.[cluster name] events emitted from terraform-apply and aws-scan-instances. Then uses that information to raise an Alert.

This is designed to be a generic drift event catch.