22.52. vmware-lib - VMware Library¶
The following documentation is for the VMware Library (vmware-lib) content package at version v4.10.0-alpha00.46+g1471cf9bbea9d50f3360a078e87e85f65fff4c7c.
The VMware Library provides additional content that enables capabilities for interacting with VMware ESXi, vSphere, and VCF (VMware Cloud Foundation) environments.
This content provides Software Defined Data Center (SDDC) build and operational management workflows.
Some examples of operations that can be achieved with these workflows:
deploy vCenter on one or more ESXi nodes
create a Datacenter construct in vCenter
create a Cluster construct in vCenter
enroll specified ESXi machines into the Cluster
configure VSAN datastore capabilities
claim disks into the VSAN datastore
deploy arbitrary OVA appliance devices
deploy NSX-T
This content pack utilizes several Digital Rebar Provision (DRP) Context containers to perform most of the heavy lifting work. The context container is generally a separate "fake" machine which drives API operations against either ESXi or vCenter API endpoints. Several tools are available and used, based on each tool's capabilities and the job at hand:
govc - a Golang compiled binary which implements the GoVMOMI library
VMware's Python SDK
VMware's Ansible Galaxy modules, which utilize PyVMOMI and the Python SDK libraries
OVFTool
All of the content provided in vmware-lib builds upon, and requires, the vmware plugin to be installed, and generally relies on ESXi nodes having been built by a workflow like the esxi-install workflow. It is feasible that setting the appropriate Param values will allow this content to work on non-DRP-built ESXi nodes; however, this is neither tested nor advised.
22.52.1. GoVC General Information¶
GoVC is a Golang binary that implements the VMOMI library of capabilities. The primary benefit is that it is a single, statically compiled (standalone) binary with no external dependencies. It implements API interaction with vSphere and its services (e.g. ESXi and vCenter).
The GoVC binary (govc) is compiled from the GoVMOMI project, which can be found at: https://github.com/vmware/govmomi
The GoVC tool is capable of an extremely broad and complete set of control plane interactions with vSphere (ESXi and vCenter) services. Please review the examples directory in the above referenced repo for more details.
For usage examples of the govc binary in use inside the Digital Rebar _govc_ context container, please see the Example GoVC Usage section below.
22.52.1.1. GoVC Context¶
The GoVC context implements a RackN Context Container with the Agent (runner, the drpcli binary) and the govc compiled binary inside of it. By setting Param values, govc commands can be executed against vSphere resources.
22.52.1.2. GoVC and VCSA Deployment¶
VCSA (vCenter Server Appliance) can be deployed via the GoVC tool. The operator must perform the following preparatory tasks to enable the Context environment to operate the govc binary in the RackN Context Container. This setup must be performed on the DRP Endpoint. In the future, the _bootstrap_ workflows will be available to help set up these environments.
First, either clone the git repo, or save the scripts in the tools directory to your DRP Endpoint, and run them as specified below.
To check out the repo with just the vmware-lib content pack and tooling in it, do:
# checkout the vmware-lib content pack and associated tools
git init
git remote add origin https://gitlab.com/rackn/provision-content.git
git fetch origin
git checkout origin/v4 -- vmware-lib
Setup Instructions
There is a "helper" script that attempts to do all of the below steps, called tools/do-all.sh. Individual steps below:
Install the docker-context, vmware, vmware-lib, and task-library catalog items
drpcli catalog item install vmware
drpcli catalog item install vmware-lib
drpcli catalog item install task-library
drpcli catalog item install docker-context
Create the docker container for the Runner and GoVC tools
  run the tools/build-docker.sh script to build the containers
  OR install from pre-built Docker Hub images by running the tools/dockerhub-containers.sh script
Upload the containers and enable them in drpcli
  See the tools/drpcli-commands.sh script to do this
Create the Context Runner machines to start the Workflow from
  See the tools/drpcli-create-machines.sh script to do this
The VCSA OVA must be staged on an HTTP server for the tooling to download
  Obtain the VMware provided VCSA ISO image and extract the OVA from the ISO
  example download location - https://my.vmware.com/web/vmware/details?productId=742&rPId=39682&downloadGroup=VC67U3B
  it can be extracted with bsdtar like: bsdtar -xvf VMware-VCSA-all-6.7.0-15132721.iso vcsa/*.ova
  upload with drpcli like: export N=$(ls -1 vcsa/*.ova); drpcli files upload $N as images/vcsa/$(basename $N)
  reference this location on the DRP endpoint as: {{.ProvisionerURL}}/files/images/vcsa/{...name...}
Prepare the Template JSON file that GoVC will use to deploy the OVA (see below)
Set the Param values on your Runner fake machine (either directly, or as a Profile)
Run the Workflow govc-vcenter-create (see the sketch after this list)
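A minimal sketch of that last step, assuming the Context Runner machine was named govc-runner-01 (a hypothetical name - substitute the machine created by tools/drpcli-create-machines.sh):

# start the deployment Workflow on the (hypothetical) Context Runner machine
drpcli machines workflow Name:govc-runner-01 govc-vcenter-create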
Scripts referenced in this document are available in the tools directory of the repo referenced above.
22.52.1.3. vCenter Complete Note¶
If installing vCenter 7.x, note that the govc connect URL method seems to have changed. As of 2020/07/01, the Stage govc-wait-for-vcenter will not complete successfully. You will have to monitor the VAMI web interface (on port 5480 by default) to determine when it has successfully finished.
The workflow will error out after 60 minutes in this case. Either force remove the Workflow from the Context Machine, or ignore the status stage error.
22.52.1.4. Prepare the VCSA JSON Deployment Template¶
The Param ova/template-json defines the name of a Template that you must provide with the configuration details for the deployed VCSA instance. This template can be a standard Digital Rebar template, provided via another Content Pack, or you can upload a one-off template for the job. See the templates/govc-*EXAMPLE*.json examples in the vmware-lib content pack for an example template.
Once you have prepared the Template JSON file and uploaded it, you must set the Param to point to it. This param will be set on the fake Runner Machine that the Workflow is run on.
In addition to the Template JSON Param, you must provide a vSphere resource (e.g. ESXi) node to execute the deployment on. Set these Params as defined in the below section.
22.52.1.5. Define the Deployment Target¶
You must define the vSphere deployment target (e.g. an ESXi node) to deploy the VCSA OVA to. This is done by specifying the URL directly as a single Param, or by setting the individual Param values for the Username, Password, Node, and optionally Port. See the Param documentation for these values.
These values can all be combined into a single Profile, along with the Template JSON Param defined above, for easier add/remove on the Machine object.
Example Profile for vCenter deployment:
---
Name: "vcsa-govc-esxi-ewr1"
Description: "EXAMPLE PROFILE - CHANGE VALUES !!!!"
Documentation: |
  Change these values to match the JSON template details, the uploaded OVA,
  and related network information for your vCenter deployment.
  govc/* params are for the target Node (vSphere ESXi) to deploy the vCenter
  VCSA OVA on. The JSON Template defines the vCenter installation details.
Meta:
  color: "blue"
  icon: "hdd"
  title: "Digital Rebar"
Params:
  govc/datastore: "datastore1"
  govc/datastore-skip-create: false
  govc/insecure: true
  govc/node: "10.75.75.250"
  govc/ova-location: "{{.ProvisionerURL}}/files/images/vcsa/VMware-vCenter-Server-Appliance-7.0.0.10300-16189094_OVF10.ova"
  ova/template-json: "esxi-ewr1-vc01.json.tmpl"
  govc/username: "root"
  govc/password: "VMware123"
Profiles: []
Save the above to a file, and use drpcli to add it to your Endpoint (e.g. drpcli profiles create vcenter.yaml), then add the Profile to the Context Machine that will deploy the vCenter VCSA OVA.
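A minimal sketch of those two steps, assuming the Profile was saved as vcenter.yaml and the Context Machine is named govc-runner-01 (hypothetical names):

# create the Profile from the saved YAML file
drpcli profiles create vcenter.yaml
# attach the Profile to the (hypothetical) Context Machine
drpcli machines addprofile Name:govc-runner-01 vcsa-govc-esxi-ewr1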
22.52.2. Example GOVC Usage¶
The beginning of a collection of useful resources for understanding how to use govc to manage vSphere resources.
22.52.3. Object Specific Documentation¶
22.52.3.1. params¶
The content package provides the following params.
22.52.3.1.1. ansible/additional-options¶
Allows injecting additional Ansible command line flags to supported executions of ansible or ansible-playbook. By default the following options are passed:
--connection=local -e 'ansible_python_interpreter=/usr/bin/python3'
If the operator changes this Param's value, the above options must in most cases still be specified, in addition to any other desired changes.
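For example, a sketch of overriding this Param while retaining the defaults and adding extra Ansible verbosity (the -vvv flag is illustrative only):

ansible/additional-options: "--connection=local -e 'ansible_python_interpreter=/usr/bin/python3' -vvv"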
22.52.3.1.2. esxi/cluster-folder¶
A string of the folder name that the cluster should be added to for inventory management.
For example, if the esxi/cluster-name is set to honey-nut, and the esxi/cluster-folder is set to cereal, then the Cluster will be added to the cereal folder when it is created.
Multi-level paths are allowed (eg /cereal/honey-nut/box). Parent directories that do not exist will be created in order appropriately.
The string will be interpreted and expanded by the Golang Templating engine at time of use, so an example construct as follows is allowed:
/{{ .Param "esxi/datacenter-name" }}/{{ .Param "esxi/cluster-name" }}
If the values of the above params have been set to dc01 and cluster01 (respectively), then the rendered result will be:
/dc01/cluster01
22.52.3.1.3. esxi/cluster-name¶
A simple string name of the cluster that the Machine belongs to.
For example, if the esxi/cluster-name is set to honey-nut, then all Machines with this param set to the same value will be acted on by subsequent Stages that reference the esxi/cluster-name.
22.52.3.1.4. esxi/cluster-options¶
A simple string of space separated arguments of cluster configuration.
Supported values are defined by the GoVC cluster.change directive; see the govc cluster.change usage documentation for details.
Example setting of this Param:
-drs-enabled -vsan-enabled -vsan-autoclaim
Note
Use of this param requires that esxi/cluster-name is set to define which cluster to operate on.
22.52.3.1.5. esxi/cluster-profile¶
Set this to the name of a Profile, which will be used to store cluster-wide state information for cluster build/operational management.
For example, set this to:
my-esxi-cluster-info
A profile of this name will be used to read/write data to in relation to cluster operations and coordination.
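A sketch of how this might be combined with the cluster name on each ESXi Machine (the cluster and profile names here are hypothetical):

esxi/cluster-name: "cluster01"
esxi/cluster-profile: "my-esxi-cluster-info"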
22.52.3.1.6. esxi/datacenter-name¶
A simple string name of the Datacenter that should be either created, manipulated, or used in subsequent govc command calls.
22.52.3.1.7. esxi/datastore-command¶
This param defines what command to run for the govc-datastore-create Task. By default, an add operation will be performed, attempting to create the specified Datastores listed in the esxi/datastore-mappings Param.
Valid command options are:
add = Add datastores
remove = Remove all datastores
list = List connected datastores
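For example, a sketch of setting this Param to remove previously created datastores (assuming remove is the desired operation):

esxi/datastore-command: "remove"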
22.52.3.1.8. esxi/datastore-mappings¶
This data structure defines a set of Datastores to create in a single set of mappings. Multiple datastore types can be created in one task run.
This Param data structure defines the configurations of each of the described datastores to be created. In addition, you must add the Param esxi/datastore-memberships to each vSphere ESXi node with the names of the datastores to add to that node.
The datastore mappings are performed via the Context system, which means the actions are implemented as an API call to the ESXi host directly or via the vCenter instance that manages the node.
Required datastore configuration values are as follows:
name = the name of the datastore to create
type = either get_first_available, nfs, nfs41, cifs, vmfs, or local
Note
The get_first_available type simulates the original datastore create pattern in the mappings data structure. Provided only for legacy use.
The following additional values can be specified, and are type dependent (either nfs/nfs41, vmfs, or local type):
disk = Canonical name of disk (VMFS only)
force = Ignore DuplicateName error if datastore is already mounted on a host (false by default)
host = Host system
mode = Access mode for the mount point (readOnly|readWrite)
path = Local directory path for the datastore (local only)
remote-host = Remote hostname of the NAS datastore (NFS/CIFS)
remote-path = Remote path of the NFS mount point (NFS/CIFS)
username = Username to use when connecting (CIFS only)
password = Password to use when connecting (CIFS only)
version = VMFS major version
Digital Rebar Golang templating constructs can be used in the data structures and will be filled in when the template is evaluated. This is useful for Datastores that are created bound to specific Node information; for example, creating a local datastore with the ESXi node name in the name construct. When these are rolled up under vCenter management, it allows datastore uniqueness.
The general data structure follows the below pattern:
datastore-reference:
  name: name-of-datastore
  type: supported-datastore-type
  other-options: ...
The Datastore Reference is different from the actual datastore name to create, as in many cases the Datastore Name may be based on the specific ESXi node name or other unique reference for that node. This allows Golang templating structures to dynamically reference the Datastore name value, while using a generic Reference for the configuration structure.
Example in YAML:
esxi/datastore-mappings:
  nfs1-datastore:
    name: "nfs-1-{{.Machine.Name}}"
    type: "nfs"
    mode: "readWrite"
    remote-host: "nfs1.example.com"
    remote-path: "/hosts/{{ .Machine.Name }}"
  vmfs-datastore:
    name: "vmfsDatastore"
    type: "vmfs"
    disk: "mpx.vmhba0:C0:T0:L0"
  local1-datastore:
    name: "localDatastore"
    type: "local"
    path: "/var/datastore"
Example in JSON:
{ "nfs1-datastore": { "name": "nfs-1-{{.Machine.Name}}", "type": "nfs", "mode": "readWrite", "remote-host": "nfs1.example.com", "remote-path": "/hosts/{{ .Machine.Name }}" }, "vmfs-datastore": { "name": "vmfsDatastore" "type": "vmfs" "disk": "mpx.vmhba0:C0:T0:L0" }, "local1-datastore": { "name": "localDatastore" "type": "local" "path": "/var/datastore" } }
In the above examples, three separate Datastores will be created on the systems based on their membership mappings. The membership mappings are managed by adding the Param esxi/datastore-memberships with a list of the datastore references to be added to that machine.
For example, on the machine named esxi-01, we add the param with the following values:
esxi/datastore-memberships:
  - nfs1-datastore
  - local1-datastore
In this scenario, the machine will be added with the NFS datastore named nfs-1-esxi-01, which maps to the share nfs1.example.com:/hosts/esxi-01. This happens because the Params are expanded with the Machine name, mapping a custom NFS share under the common reference name nfs1-datastore.
The second datastore (local1-datastore) is mapped on the machine as well. However, the VMFS datastore is not mapped on this machine.
22.52.3.1.9. esxi/datastore-memberships¶
An array of strings listing datastore references configured in the Param esxi/datastore-mappings. The combination of the mapping, which describes the datastore configuration, and the Machine object being mapped with a list of Datastores found in that mapping, creates the bindings for the tooling to create/add those datastores to specific machines.
For more complete details, see the esxi/datastore-mappings Param documentation.
22.52.3.1.10. esxi/datastore-skip-manage¶
Boolean true/false value - determines if the datastore manage Stage in a workflow should skip creating a datastore.
Defaults to false - running the manage actions for datastore(s). The default manage action is to add datastores.
22.52.3.1.11. esxi/dvs-mappings¶
This data structure defines how the DVS switches should be created in a VMware ESXi cluster.
This Param data structure defines the configurations of the Distributed Virtual Switches and Portgroups. In addition, you must add the Param esxi/dvs-memberships to each vSphere ESXi node with the DVS switches to add to that node.
Supported DVS configuration values are as follows:
mtu = From 1000 to 9999
version = one of 6.5.0, 6.6.0, 7.0.0
discovery = one of cdp or lldp
For Portgroups on a DVS, the following values are supported:
type = one of ephemeral, earlyBinding, or lateBinding
ports = 0 for elastic, or from 1 to 60,000
vlans = empty or 0 for none, or VLAN tag ID from 1 to 4096
migrate.portgroup = standard switch portgroup name to migrate during DVS creation (eg Management Network)
migrate.vswitch = standard vswitch name to migrate during DVS creation (eg vSwitch0)
migrate.vmk = VMK interface to migrate (eg vmk0)
migrate.vms_to_migrate = An array of VM names to migrate from the Standard to Distributed vSwitch
Portgroup values type, ports, and vlans can optionally be left empty. If they are, they will default to the Portgroup version based default values.
The migrate options will allow migrating the specified Standard Virtual Switch portgroup to the DVS Portgroup. If they are not specified, no portgroup migrations will be made.
Example in YAML:
esxi/dvs-mappings:
  dvs01:
    mtu: 9000
    version: 7.0.0
    discovery: lldp
    vmnic: vmnic1
    portgroups:
      pg_internal:
        migrate:
          portgroup: "Management Network"
          vswitch: "vSwitch0"
          vmk: "vmk0"
          vms_to_migrate:
            - "vm01"
            - "vm02"
        type: ephemeral
        ports: 16
        vlan: 10
      pg_external:
        type: ephemeral
        ports: 8
        vlan: 0
  dvs02:
    mtu: 1534
    version: 7.0.0
    discovery: cdp
    vmnic: vmnic2
Example in JSON:
"dvs01": { "mtu": 9000, "version": "7.0.0", "discovery": "lldp", "vmnic": "vmnic1", "portgroups": { "pg_external": { "type": "ephemeral", "vlan": 10 "ports": 8 }, "pg_internal": { "migrate": { "portgroup": "Management Network", "vmk": "vmk0", "vswitch": "vSwitch0", "vms_to_migrate": [ "vm01", "vm02" ] } "type": "ephemeral", "vlan": 0, "ports": 16 } } }, "dvs02": { "mtu": 1534, "version": "7.0.0", "discovery": "cdp", "vmnic": "vmnic2" }
In the above example, the dvs01 DVS will have jumbo frames, switch version set to 7.0.0, discovery packets will use the LLDP protocol, and it will map to the vmnic1 device. In addition, it will define two Portgroups, named pg_internal and pg_external; both of type ephemeral with different numbers of ports defined.
The dvs02 switch will use standard size packets, the CDP discovery protocol, the vmnic2 device, and will not map any Portgroups.
Note
You must also add the named DVS Switches (eg dvs01 and dvs02 in the above example) to the vSphere ESXi nodes that will use these switches, via the esxi/dvs-memberships Param.
22.52.3.1.12. esxi/dvs-memberships¶
An array of strings listing Distributed Virtual Switch names, for an ESXi node to become a member of.
Each DVS must have a matching configuration in the esxi/dvs-mappings Param, which defines how the DVS and (optionally) any subsequent PortGroups are created on the DVS.
The vSphere ESXi node must be in the same Datacenter and Cluster as the creation of the DVS and PortGroups, otherwise, no memberships in the DVS will be created.
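Continuing the earlier example, a sketch of adding an ESXi node to both example switches (the DVS names match the esxi/dvs-mappings example above; adjust for your environment):

esxi/dvs-memberships:
  - "dvs01"
  - "dvs02"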
22.52.3.1.13. esxi/member-reference¶
This param defines the method that ESXi nodes in a cluster or datacenter should be referenced by. This is used when adding members to clusters, when distributed virtual switches are added to host members, etc.
The default method is address. The following methods are available:
address = The DRP Machine registered Address value
name = The DRP Machine registered Name value
fqdn-dhcp = The DRP Machine registered Name plus the DRP managed DHCP Lease Option 15 (domain name) value
fqdn-dns-domain = The DRP Machine registered Name plus the value of the Param dns-domain
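For example, a sketch of switching the reference method to Machine names (a single string value):

esxi/member-reference: "name"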
22.52.3.1.14. esxi/object-rename¶
This data structure defines a vSphere object that should be renamed.
An example task that uses it is the esxi-object-rename task, which utilizes the Ansible Galaxy VMware modules and the playbook named vmware_rename_object.
The top-level object is a reference name to group rename operations together. This allows the operator to specify more than a single rename operation using the single Param data structure.
Requires that the vCenter or ESXi authentication information is passed in via the govc/* Param values.
The following values are required:
new_name - the new name of the renamed object
object_type - the type of the object being renamed
In addition, only one of the following may be specified:
object_name - the object name in the inventory (mutually exclusive with object_moid)
object_moid - the managed object identifier (mutually exclusive with object_name)
For the object_type, only the following values are valid:
Cluster
ClusterComputeResource
Datacenter
Datastore
Folder
ResourcePool
VM or VirtualMachine
Example in YAML:
esxi/object-rename:
  vsan-datastore:
    new_name: "vsan-cluster01-datastore"
    object_name: "vsanDataStore"
    object_type: "Datastore"
  vm-generic:
    new_name: "Fedora_31"
    object_name: "Fedora_VM"
    object_type: "VirtualMachine"
Example in JSON:
"vsan-datastore": { "new_name": "vsan-cluster01-datastore", "object_name": "vsanDataStore", "object_type": "Datastore" } "vm-generic": { "new_name": "fedora-vm01", "object_name": "Fedora_VM", "object_type": "VirtualMachine" }
In the above example, the VSAN Datastore named vsanDataStore should be renamed to vsan-cluster01-datastore. Subsequently, the Virtual Machine named Fedora_VM will be renamed to fedora-vm01.
Note
If a Golang Templating construct is desired for the new_name value, set the esxi/object-rename-override Param for the new_name value. In that case, this object's setting will be ignored.
22.52.3.1.15. esxi/object-rename-override¶
This param allows the operator to override the new_name value assigned in the esxi/object-rename Param.
This is ONLY needed if the operator needs to use Golang Templating constructs to inject a new_name during the object rename task. Unfortunately, the .ParamExpand method can't be used on object types.
An example value setting for this Param:
vsan-{{ .Param 'esxi/cluster-name'}}-datastore
Assuming the esxi/cluster-name param is set to cluster01, the override value would be rendered as:
vsan-cluster01-datastore
Note
If you do not need to use Golang Templating constructs in the rename, do not use this Param - simply set the new_name value in the esxi/object-rename data structure.
22.52.3.1.16. esxi/thumbprint-sha1¶
Defines the ESXi SHA-1 thumbprint.
22.52.3.1.17. esxi/vsan-cluster-id¶
Defines the VSAN cluster ID, recorded by the esxi/vsan-leader. Once added, the followers use it to join the cluster.
DO NOT set this value; it is set by the esxi-vsan-build-cluster task.
22.52.3.1.18. esxi/vsan-data/sub-cluster-uuid¶
Recorded value set by tasks, which contains the VSAN SubClusterUUID used for cluster join operations.
This is set by the content; you should not set it unless you are adding VSAN members to an existing VSAN cluster that was NOT created by Digital Rebar.
22.52.3.1.19. esxi/vsan-disk-selection-rule¶
An array of strings, each of which defines a supported disk selection rule used in the govc-vsan-claim-disks Task.
Currently supported rules are:
simple
Future iterations of the task will support injecting additional Templates with custom rules for selection.
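For example, a sketch of explicitly setting the default rule (an array with a single entry):

esxi/vsan-disk-selection-rule:
  - "simple"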
22.52.3.1.20. esxi/vsan-enabled¶
If set to true, then set up and enable VSAN traffic on esxi/vsan-vmk if defined. If not defined, fall back to the VMware Kernel interface defined in esxi/network-firstboot-vmk.
The esxi/vsan-options Param can be set to control disk enrollment (eg set to -vsan-autoclaim).
By default, the ESXi machines are NOT set up to enable VSAN.
22.52.3.1.21. esxi/vsan-host¶
A boolean value that defines if the vSphere ESXi host should have VSAN configuration built on it.
22.52.3.1.22. esxi/vsan-leader¶
The Digital Rebar machine name, chosen from the members defined in the esxi/vsan-members array, that is responsible for the cluster initialization.
This leader will initialize the VSAN cluster and provide the VSAN Cluster UUID for the remaining members to join the cluster.
Note
This pattern should be moved to the cluster mechanisms in the task-library content.
22.52.3.1.23. esxi/vsan-members¶
An array of strings that are the Digital Rebar machine names that should be built in to the VSAN cluster.
YAML Example of defining the machines to build in to the cluster:
esxi/vsan-members:
  - "machine01"
  - "machine02"
  - "machine03"
Note
The defined Machines must have successfully completed the VSAN configuration task (esxi/vsan-configure-host).
22.52.3.1.24. esxi/vsan-nodes-override¶
Normally ESXi clusters are grouped by use of the esxi/cluster-name param. Machines belonging to the same cluster name are operated on as a full set.
However, an operator can override which Nodes to operate on, by setting this param. The value is a space separated list of machine Names.
There is no default value.
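For example, a sketch of limiting operations to three specific machines (hypothetical machine names, space separated in a single string):

esxi/vsan-nodes-override: "esxi-01 esxi-02 esxi-03"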
22.52.3.1.25. esxi/vsan-operations¶
Defines the VSAN cluster operations to perform on the cluster or cluster members defined by esxi/cluster-name. Typically this Param will be given a value(s) on a Stage, which drives a specific set of behaviors.
Some example Param values include:
cluster-build
cluster-destroy
cluster-list
These Param values must be supported operations within the Task(s) that are using them.
22.52.3.1.26. esxi/vsan-portgroup¶
Defines the Portgroup name to use for VSAN configuration on the vSphere ESXi host.
Defaults to VSAN if not otherwise defined.
22.52.3.1.27. esxi/vsan-tag-only¶
By default the VSAN disk claim operations will both tag the disks for VSAN use and claim them in the VSAN datastore. In some use cases (eg preparation for VCF use) the operator may only want the disks tagged for VSAN, but not actually claimed.
In this use case, set this Param to boolean true, and the VSAN Disk Claiming process will not claim the disks; they will only be tagged according to the rules.
22.52.3.1.28. esxi/vsan-vmk¶
Defines the VMK device to use for VSAN configuration on the vSphere ESXi host.
Defaults to vmk0 if not otherwise defined.
22.52.3.1.29. esxi/vsan-vmknic-ip¶
Defines the vmknic IP address for VSAN configuration on the vSphere ESXi host.
If this setting and the esxi/vsan-vmknic-netmask are both empty, then the configuration will assume it has been configured already by prior Stage/Task runs.
22.52.3.1.30. esxi/vsan-vmknic-netmask¶
Defines the vmknic netmask for VSAN configuration on the vSphere ESXi host.
If this setting and the esxi/vsan-vmknic-ip are both empty, then the configuration will assume it has been configured already by prior Stage/Task runs.
22.52.3.1.31. esxi/vsan-vmnic¶
Defines the VMNIC to use for VSAN configuration on the vSphere ESXi host.
Defaults to vmnic1 if not otherwise defined.
22.52.3.1.32. esxi/vsan-vswitch-standard¶
Defines the Standard vSwitch to use for VSAN configuration on the vSphere ESXi host.
Defaults to vSwitch1 if not otherwise defined.
22.52.3.1.33. esxi/vsan-zero-count-fatal¶
If set to true and a node has no available disks to offer (minimum 1 cache and 1 capacity), then exit with a fatal error.
If set to false (the default), then a node having no disks is not considered a fatal condition.
By default, a zero count is not considered fatal.
22.52.3.1.34. esxi/wait-time¶
Defines the number of seconds that the task esxi-wait-time will use to sleep.
The default value is 15.
22.52.3.1.35. govc/commands¶
An array of strings for govc to run. Each array entry will be run in the order defined when the Param is populated.
Only a single import.ova command can be specified in any single set of commands to run.
Defaults to the govc about command.
As an example, you can print the govc environment, which affects the runtime operation of the govc command, using govc env. To use this command, set the Param to the value env.
Note
Do not specify govc itself in the command strings.
YAML Example of setting multiple commands to run in a single Task run:
govc/commands:
  - "about"
  - "env"
  - "datastore.ls"
22.52.3.1.36. govc/datastore¶
Datastore that subsequent govc commands will use, if required.
For example, set this to something like datastore1.
Defaults to an empty (unused) value.
22.52.3.1.37. govc/datastore-create-disk¶
This param sets the disk to create the datastore defined in the param govc/datastore-create-name. The param can be set to a rule that will search for a disk, or directly to a specific disk.
Supported rules and direct disk definition settings:
datastore_mappings
first_available
disk=t10.ATA_____Micron_M500DC_MTFDDAK120MBB_____________________14260DAD9402
Defaults to datastore_mappings.
The first_available rule attempts to filter out used disk devices, then chooses the first of any remaining disks that are unused.
Note
If using the govc command, you can find the disk information with the command govc host.esxcli storage core path list (after setting up the appropriate GOVC_ environment variables, of course).
Warning
Setting this param to first_available, and setting the esxi/datastore-mappings Param type to first_available, is not supported and will result in a failure error.
22.52.3.1.38. govc/datastore-create-name¶
The name of the datastore to be created.
Defaults to datastore1 if not otherwise defined.
22.52.3.1.39. govc/debug¶
If set to true, then additional debug output and behaviors will occur with the govc command usage.
By default this value is set to false.
22.52.3.1.40. govc/insecure¶
If set to true, then accept self signed certificates of the VMware ESXi or vCenter resource.
By default this value is set to true.
22.52.3.1.41. govc/network¶
Network of the ESXi/vCenter instance to use when deploying OVAs via the govc command.
For example, set to something like VM Network.
Defaults to an empty (unused) value.
22.52.3.1.42. govc/ova-location¶
The URL location of the OVA Resource to deploy via the govc/commands.
The OVA will be downloaded inside the Context container and used by the govc command to deploy the resource. This could be a VMware vCenter Server Appliance (VCSA), NSX-T OVA, or any other deployable OVA format appliance device.
This Parameter can utilize Digital Rebar Golang Templating constructs, which will be expanded appropriately when called. For example:
{{ .ProvisionerURL }}/files/images/vcsa/foo.vcsa
Will expand to something that looks like:
http://10.10.10.10:8091/files/images/vcsa/foo.vcsa
22.52.3.1.43. govc/ova-type¶
The type of the OVA Resource that is deployed via the govc/commands.
Various OVA appliances exhibit unique and strange behaviors that need to be accounted for by the Digital Rebar deployment tooling at times.
For example, the VCSA 6.x OVA deployment, when complete, uses the 'root@vsphere.local' account name. This is used by the govc-wait-for-vcenter task to verify the API is fully up and the deployment is complete. However, in VCSA 7.x the username is silently changed to 'administrator@vsphere.local'.
Setting the govc/ova-type to vcsa, and the govc/ova-version to 7, allows the tooling to switch auth accounts accordingly.
In the future, if unique rules in various places are required, then the combination of “ova-type” and “ova-version” can be codified in the tooling to react accordingly.
22.52.3.1.44. govc/ova-version¶
The version of the OVA Resource that is deployed via the govc/commands.
Various OVA appliances exhibit unique and strange behaviors that need to be accounted for by the Digital Rebar deployment tooling at times.
For example, the VCSA 6.x OVA deployment, when complete, uses the 'root@vsphere.local' account name. This is used by the govc-wait-for-vcenter task to verify the API is fully up and the deployment is complete. However, in VCSA 7.x the username is silently changed to 'administrator@vsphere.local'.
Setting the govc/ova-type to vcsa, and this Param to 7, allows the tooling to switch auth accounts.
In the future, if unique rules in various places are required, then the combination of “ova-type” and “ova-version” can be codified in the tooling to react accordingly.
22.52.3.1.45. govc/password¶
Password (secure) of the govc/username to authenticate with against the VMware ESXi or vCenter URL.
Defaults to the RackN default root password for ESXi.
22.52.3.1.46. govc/port¶
Sets the Port number of the VMware ESXi or vCenter resource, if it has been relocated from the default (443).
22.52.3.1.47. govc/resource-pool¶
Resource Pool to use for the govc deployed OVA.
For example, set to something like */Resources.
Defaults to an empty (unused) value.
22.52.3.1.48. govc/skip-ova-stage¶
If set to true, then skip the govc-stage-ova staging process in a workflow. Some tools that utilize an OVA import/load process can do so from a remote HTTP URL path, and do not require the OVA to be local on the filesystem (which govc requires).
By default this value is set to false.
22.52.3.1.49. govc/url¶
The VMware ESXi or vCenter URL resource to connect to, to execute govc/commands against.
Example: 192.168.1.10
Example: vc01.example.com
Example: vc01.example.com:1443
You must also specify the accompanying govc configuration Params to successfully connect and authenticate.
Required:
govc/url: 192.168.124.109
govc/username: root
govc/password: s3cr3t
Optional (defaults to '443'):
govc/port: 1443
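A sketch of these connection Params collected into a single Profile (the Profile name and values are illustrative only; adjust for your target):

---
Name: "govc-target-esxi01"
Description: "EXAMPLE - govc connection details for a vSphere target"
Params:
  govc/url: "192.168.124.109"
  govc/username: "root"
  govc/password: "s3cr3t"
  govc/port: 1443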
Note
Previous versions allowed a single combined Param specification in the govc/url param. As of vmware-lib v4.4.0, this is no longer supported in recent govc versions, so that usage has also been removed from Digital Rebar.
22.52.3.1.50. govc/username¶
Username of the VMware ESXi or vCenter account to authenticate with for the govc/commands.
Defaults to the RackN default ESXi username of root.
22.52.3.1.51. ova/ovftool-deploy-more-mapping-template¶
The ovftool-deploy task remaps JSON generated template values to OVFTool command line arguments. In the event the provided script does not remap required values for your deployment scenario, this Param can point to the Name of a Template.
The template must be a BASH script with Golang Template constructs that remap the JSON attributes to OVFTool values. Please see the ovftool-deploy Task for more details. Relevant command line build up is performed by appending to the shell variable named MORE.
Alternatively, use the Param ova/ovftool-extra-options to append values via a simple string injection. See that Param and Task for more details.
22.52.3.1.52. ova/ovftool-extra-options¶
This Param allows the operator to inject any specific additional options to the ovftool command when doing an OVA deployment.
This Param should:
use properly formatted ovftool options
MUST NOT repeat any existing options built up by the Task
not include ANY newlines in the options
separate options by a single white space
By default, this Param does not specify a default value.
Note
Do not use ova/ovftool-options for additional options; this Param is the correct one to insert additional flags.
As an alternative for command line option parsing, see the Param ova/ovftool-deploy-more-mapping-template - which is a MUCH more advanced use case alternative.
Example: to enhance the OVFTool standard output with more verbose details if there are problems deploying an OVA, set this extra options Param as follows:
--X:logToConsole=True --X:logLevel="verbose"
Rerun the deployment Stage/Task, and now the job-log should have much more verbose logged output for troubleshooting.
22.52.3.1.53. ova/ovftool-options¶
This Param defines a set of standard options to inject in to an OVFTool OVA deployment. This is provided as a Param, in the event that these need to be altered in the field.
Warning
To add ADDITIONAL flags, do not use this Param; use the Param ova/ovftool-extra-options.
This Param should:
use properly formatted ovftool options
MUST NOT repeat any existing options built up by the Task
not include ANY newlines in the options
separate options by a single white space
By default, this Param sets these values:
--acceptAllEulas --allowAllExtraConfig --allowExtraConfig --X:noPrompting --X:connectionReconnectCount=5
Note
If the operator overrides this Param and would like to retain these values, they must be added in addition to any extra options.
Warning
Removing the --acceptAllEulas flag is an EXTREMELY BAD idea. OVA deployment without this option will cause OVFTool to interactively ask Y/N - repeatedly. Real world impact has generated 1.1 BILLION requests, which filled up a 47 GB job-log file and filled the DRP backing filesystem. YOU HAVE BEEN WARNED.
As an alternative for command line option parsing, see the Param ova/ovftool-deploy-more-mapping-template - which is a MUCH more advanced use case alternative.
22.52.3.1.54. ova/param-json¶
This Param holds the configuration JSON information to use for OVA appliance deployment configuration.
22.52.3.1.55. ova/template-json¶
This param is a reference to a Digital Rebar Template to use for OVA appliance deployment configuration.
Set the Param to the name of a defined Template. For example:
# JSON example
{ "ova/template-json": "govc-vcsa-vc01.json.tmpl" }

# YAML example
ova/template-json: govc-vcsa-vc01.json.tmpl
The contents of this external template will be used as the JSON configuration for the OVA deployment. Typically this is used to define the deployed Virtual Machines bootstrap configuration.
22.52.3.2. profiles¶
The content package provides the following profiles.
22.52.3.2.1. EXAMPLE-govc-about-test¶
Warning
THIS IS AN EXAMPLE - you must modify values to fit your local environment appropriately.
Runs the simple govc about command on a test ESXi instance.
Note
Running the govc-command Workflow without a govc/commands Param value setting will also run the govc about test.
22.52.3.2.2. EXAMPLE-govc-cluster-create¶
Warning
THIS IS AN EXAMPLE - you must modify values to fit your local environment appropriately.
Defines govc related Params for creating and manipulating cluster create operations. See the individual Param documentation for the values and usage of the Params.
22.52.3.2.3. EXAMPLE-govc-vcsa-vc01¶
Warning
THIS IS AN EXAMPLE - you must modify values to fit your local environment appropriately.
Runs govc to deploy a test VCSA deployment. Requires that the VCSA configuration JSON file be saved to the context container as /tmp/template.json prior to the command being run. The JSON configuration should be saved to a Template on the system, and then referenced in the ova/template-json Param.
The govc-command task reads the ova/template-json Param if it has been specified, and writes the referenced template to the temporary json location.
The govc/username and govc/password values are required for the vSphere ESXi node that the OVA is being deployed to, for API authentication.
The govc/url should be the IP Address or correctly resolving DNS hostname of the vSphere ESXi node to deploy the OVA to.
22.52.3.2.4. EXAMPLE-vcsa-deploy¶
Warning
THIS IS AN EXAMPLE - you must modify values to fit your local environment appropriately.
Runs vcsa-deploy to deploy a test VCSA deployment. Requires that the VCSA configuration JSON file be saved to the context container as /tmp/template.json prior to the command being run.
The vcsa-deploy-command task reads the vcsa-deploy/template-json Param and writes the referenced template to the temporary json location.
22.52.3.2.5. cluster¶
testing
22.52.3.3. stages¶
The content package provides the following stages.
22.52.3.3.1. ansible-vmware-migrate-vmk¶
This Stage runs an Ansible playbook utilizing PyVMOMI to migrate a standard vSwitch virtual NIC (VMK), to a distributed vSwitch.
22.52.3.3.2. ansible-vmware-object-rename¶
This Stage runs an Ansible playbook utilizing the VMWare Python SDK to rename objects in vSphere.
22.52.3.3.3. esxi-vsan-configure-host¶
This stage configures a vSphere ESXi host for VSAN cluster.
22.52.3.3.4. esxi-vsan-detailed-info¶
This stage shows detailed VSAN host info and debug data information.
22.52.3.3.5. esxi-wait-time¶
Several race conditions in ESXi can be exposed by the DRP Agent requesting task execution faster than ESXi subsystems are ready to service them. This can cause random failures in some stages/tasks.
This Stage allows the operator to inject a wait/sleep timer as a Stage or Task in to appropriate workflow places to help reduce those race conditions.
The default value is 15 seconds, and can be changed by setting the Param esxi/wait-time to the appropriate value at the right place.
22.52.3.3.6. govc-cluster-create¶
This Stage runs _govc_ commands to create clusters in vCenter.
22.52.3.3.7. govc-commands¶
This Stage runs the _govc_ command specified by the govc/commands Parameter in the _govc_ context container. If no command is specified, the default action is to simply run govc about against the remote specified ESXi or vCenter API service.
22.52.3.3.8. govc-complete¶
This Stage just marks the workflow running the govc/commands as complete.
22.52.3.3.9. govc-datastore-manage¶
This Stage runs a _govc_ command to create a datastore on the ESXi instance.
The Param govc/datastore-device will be used as the backing volume for the datastore defined by name in govc/datastore-name.
If not specified, the defaults for device are “first found device”, and the name of the datastore will be set to “datastore1”.
In addition, the more advanced “mapping” Param can be used to define multiple Datastore creations of different types (NFS, CIFS, VMFS, Local, etc.).
22.52.3.3.10. govc-deploy-ova¶
This Stage runs the _govc_ command with the import.ova option, to deploy an OVA appliance device to a vSphere target.
The OVA should be specified by the govc/ova-location Param, and the associated JSON template for the deployment must be specified by the govc/ova-template Param.
22.52.3.3.11. govc-host-add¶
This Stage runs _govc_ commands to add an ESXi host to the Datacenter and/or Cluster. See the Task documentation for more details.
22.52.3.3.12. govc-stage-ova¶
This stage makes the OVA referenced in govc/ova-location available in the container for the _govc_ command's use. It will be saved as /ova/import.ova inside the container, and will be appended to the govc command if the argument _import.ova_ is found in the command line call.
22.52.3.3.13. govc-vsan-claim-disks¶
This Stage claims ALL available disks on a vSphere ESXi node for use by VSAN.
Disk claiming filtering rules can be selected by setting the Param value esxi/vsan-disk-selection-rule to a supported value. The default is to use the simple selection rule. See the Param for further documentation.
22.52.3.3.14. govc-vsan-cluster-build¶
Build VSAN cluster from VSAN configured hosts.
22.52.3.3.15. govc-vsan-cluster-destroy¶
Destroy VSAN on ESXi hosts.
22.52.3.3.16. govc-vsan-cluster-enable¶
This stage enables the VSAN cluster after the Build and disk Claim process.
22.52.3.3.17. govc-vsan-cluster-list¶
Get VSAN cluster info on ESXi hosts.
22.52.3.3.18. govc-wait-for-ova¶
This Stage blocks and waits until the deployed OVA service API is fully available after deployment. The stage uses govc about, which requires API access, to get the about details from the remote API service.
22.52.3.3.19. ovftool-deploy¶
This Stage runs the OVFTool command in a context container to deploy an OVA appliance device to a vSphere target.
The OVA JSON configuration should be specified in the ova/param-json Param.
22.52.3.4. tasks¶
The content package provides the following tasks.
22.52.3.4.1. ansible-vmware-migrate-vmk¶
Migrates an existing virtual NIC (VMK) in ESXi hosts from a Standard vSwitch to a Distributed vSwitch.
Utilizes the Ansible Galaxy collection communities.vmware.
Acts on ESXi nodes identified by the esxi/cluster-name Param set on the machines.
Requires that the esxi/dvs-mappings Param contains a migrate stanza that is properly filled out.
22.52.3.4.2. ansible-vmware-object-rename¶
Renames an object in vSphere using the Ansible Galaxy module. Requires setting the Param data structure to specific values based on the object you are trying to rename. See the esxi/object-rename Param documentation for the required values.
22.52.3.4.3. esxi-cluster-hosts-get¶
This task collects all ESXi members that are in a given Datacenter and Cluster, and records it on the Profile specified in the esxi/cluster-profile Param.
This is used by subsequent tasks that need to render with a pure Golang Template to reference the ESXi members in a cluster; for example, the ansible-vmware-migrate-vmk task.
22.52.3.4.4. esxi-vsan-configure-host¶
Configure VSAN on a vSphere ESXi host. This should be done prior to deployment of vCenter for optimal results.
The esxi/vsan-vmk Param should be used to define the VMware Kernel device to use for VSAN operations. If this param is not set, then the value set in esxi/network-firstboot-vmk will be used. The default value will fall through to vmk0 if neither of these Params has been set by an operator.
This script is intended to run on an ESXi host.
22.52.3.4.5. esxi-vsan-detailed-info¶
Gets detailed info and VSAN debug data from a specific ESXi host.
Note
Future usage of this task will set GOVC_* variables in a bash script, and then include the "esxi-vsan-detailed-info.sh.tmpl" template, which drives the primary debug/info get process.
22.52.3.4.6. esxi-wait-time¶
See the esxi-wait-time Stage for complete usage details.
22.52.3.4.7. govc-cluster-create¶
This task will create a cluster, file it in the esxi/cluster-folder if provided, and enroll all Machines with the same cluster designation defined by the Param esxi/cluster-name.
22.52.3.4.8. govc-commands¶
This task executes a series of govc calls in a container context. The govc commands are defined via the Param govc/commands, which is required for this task. The govc/commands Param is an array of govc commands to execute in the container.
The operator must also specify the remote vSphere ESXi or vCenter resource to connect to, to execute govc commands against. This is accomplished by setting the govc/url Param, and the individual govc/username, govc/password, and optionally govc/port (if using a non-standard Port) Params. See the documentation for each of those Params for more details.
Many of the GoVC commands require a JSON configuration that defines the values for customization. In these cases, use the ova/template-json param to define the Template to render inside of the GoVC container context.
If an OVA file is found inside the container at /ova/import.ova, and if the base argument of _import.ova_ is found in the _govc_ command, then the path and OVA name will be appended to the command sequence.
Documentation and usage examples for govc can be found in the GoVMOMI project repository.
Warning
Only a single OVA deploy action (import.ova) can be specified in the command sequence.
22.52.3.4.9. govc-datastore-manage¶
Add, remove, or list datastore(s) on a remote ESXi instance as specified in the esxi/datastore-mappings Param.
22.52.3.4.10. govc-dvs-create¶
This task will create Distributed Virtual Switches in a vCenter service.
It uses the esxi/dvs-mappings Param to define which DVSs and (optionally) portgroups to create, along with the configuration values for the DVS and Portgroups.
Please review the Param documentation for esxi/dvs-mappings for structure and usage examples.
In addition to the esxi/dvs-mappings configuration values for each of the DVS and Portgroups, you must also add the esxi/dvs-memberships Param, which is an array of strings. Each string should be the name of a Distributed Virtual Switch to create on the vSphere ESXi node.
Typically the esxi/dvs-memberships Param will be added to a machine via classification rules.
22.52.3.4.11. govc-get-thumbprint¶
Gets the SHA-1 thumbprint from an ESXi host via the govc command, and stores it on the Machine object as govc/thumbprint-sha1.
22.52.3.4.12. govc-host-add¶
This task will enroll ESXi nodes in to either a datacenter or cluster. If just esxi/datacenter-name is specified and esxi/cluster-name is not specified, then the host will be added to the Datacenter. If both are specified, then the host will be added to the Cluster that exists in the specified Datacenter.
If esxi/cluster-folder is provided, then the host will also be added to that Folder.
If datacenter and/or cluster does not yet exist on the vCenter, they will be created first.
The Param esxi/cluster-options can be specified to change the cluster configuration options. This is also used to enable VSAN configuration at the cluster level. Cluster options are only configured if the cluster name Param is also specified.
This task is intended to be run from the govc context. It is not run as a standalone workflow, as the older govc-cluster-create pattern used to operate.
22.52.3.4.13. govc-stage-ova¶
This task stages the OVA specified in the govc/ova-location Param inside the container Context. Unfortunately, the _govc_ command does not appear to have support for specifying remote resources as an HTTP/S URL reference.
The OVA specified in the Param will be downloaded inside the container as a file named _import.ova_; if that file exists, the govc-command Task will append it to the end of the executed _govc_ arguments.
22.52.3.4.14. govc-vsan-claim-disks¶
Uses the selection rule listed in esxi/vsan-disk-selection-rule to select disks to use for VSAN claimed use.
22.52.3.4.15. govc-vsan-cluster-enable¶
This task will enable VSAN on the cluster after create and claim operations. The Param esxi/vsan-options must be set with a value to enable VSAN, if desired. For example:
-vsan-enabled=true
If the Param esxi/vsan-enabled is set to true, and there is no value defined for esxi/vsan-options, then esxi/vsan-options will be set to -vsan-enabled=true.
If there is no value specified for esxi/vsan-options AND esxi/vsan-enabled, then this task will be skipped without generating an error.
22.52.3.4.16. govc-vsan-cluster-get¶
Gets the VSAN cluster membership info on a given ESXi host.
22.52.3.4.17. govc-vsan-cluster-operations¶
Build a VSAN cluster. Requires at minimum the following Params to be set:
esxi/vsan-enabled - set to 'true' to enable VSAN cluster building (defaults to 'false')
esxi/cluster-name - on each ESXi host that will become part of the cluster
esxi/cluster-profile - the name of a Profile that will store Cluster state data
esxi/vsan-operations - the Operations to perform on the VSAN cluster
The primary three operations that are run on VSAN clusters are (see the sketch after this list):
cluster-build - create a cluster, assumes a clean unconfigured state
cluster-destroy - wipe a cluster out completely
cluster-list - show the current cluster vSAN network and cluster state
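A sketch of the minimum Params collected into a Profile for a cluster build (the names are illustrative, and the esxi/vsan-operations value is assumed here to take a single operation string - check the Param documentation for the exact format your version expects):

---
Name: "vsan-cluster01-build"
Description: "EXAMPLE - minimum Params for a VSAN cluster build"
Params:
  esxi/vsan-enabled: true
  esxi/cluster-name: "cluster01"
  esxi/cluster-profile: "my-esxi-cluster-info"
  # assumed single-string form for the operation value
  esxi/vsan-operations: "cluster-build"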
22.52.3.4.18. govc-vsan-destroy-cluster¶
DESTROY the VSAN cluster membership on a given ESXi host.
22.52.3.4.19. govc-wait-for-ova¶
Simple task to wait until the remote vSphere deployed resource responds correctly to a govc about request call.
22.52.3.4.20. ovftool-deploy¶
This task deploys an OVA appliance utilizing the ovftool, which is found in the vmware-tools context container.
This requires the JSON configuration to be described in the ova/param-json Param to build up the ovftool command line options. govc import.spec (also in the vmware-tools context container) can be used to generate the JSON template for a given OVA file.
The import.spec output formats the JSON structure in a way that is inconsistent with how OVFTool command line flags operate. As a consequence, mapping of arguments must be performed in a lot of cases. It is impossible to map all argument possibilities without seeing how they are output by OVFTool in conjunction with govc import.spec. Additionally, ovftool can be used to determine what the OVF properties are for customization. Examples:
govc import.spec VMware-Cloud-Builder-4.1.0.0-16961769_OVF10.ova
or
ovftool --schemaValidate VMware-Cloud-Builder-4.1.0.0-16961769_OVF10.ova
The GoVC command produces a JSON data structure which should be valid for use in conjunction with the ova/param-json or ova/template-json Params.
This tool attempts to map as many values as possible. Where the mappings don’t exist for your use case, there are some options available to customize the OVFTool command line flags.
Use the Param ova/ovftool-extra-options to add command line flag overrides - see the Param for more documentation.
For more advanced/flexible use cases in the field, create a Shell script template defined by the Param ova/ovftool-deploy-more-mapping-template.
The template override (option 2) is a much more advanced use case. It utilizes Golang Templating to parse the JSON data structure options, and remap them to OVFTool arguments. Please review the Template script that this Task calls for details.
Warning
If you use the template override option, your template must absolutely conform to the exact white space formatting, golang templating, etc. rules as are used in this template. Review the shell variable build up for MORE.
Use of the command ovftool --help can help map OVFTool arguments to the JSON spec template structure.
Note
If you encounter any new mappings that should be in this Task, please contact RackN (support@rackn.com) and provide that information. We would like to make this as complete as possible in product; so in-the-field template maintenance is not necessary.
22.52.3.5. workflows¶
The content package provides the following workflows.
22.52.3.5.1. esxi-sddc-cluster-configure¶
This workflow runs SDDC (Software Defined Data Center) cluster configuration operations via Context containers. It performs the following functions:
record each ESXi node's thumbprint on the Machine object
initial datacenter create and cluster create
enroll the specified ESXi nodes in the cluster
build VSAN datastore if specified
claim disks according to the (extensible) rule selection
enable VSAN in the cluster
create Distributed Virtual Switch(es)
migrate Standard Virtual Switch to DVS (if desired)
Requires that the ESXi nodes have been completely built, and vCenter instance(s) have been deployed on to ESXI.
22.52.3.5.2. esxi-sddc-ovftool-deploy¶
This workflow utilizes ovftool to deploy an OVA. It requires that the JSON description of the OVA options is generated. This is done using the govc import.spec <OVA_FILE> option. The output is a JSON stanza that should be saved to ova/param-json. The ovftool-deploy task parses the JSON in that Param and generates an OVFTool command line argument list.
See the ovftool-deploy Task for more details on this processing.
Currently, the OVFTool command line options utilize the govc based Params to describe the resource to deploy the OVA to. The Param values that describe where to deploy to can be found below.
Required:
govc/url = URL for the vSphere target to deploy the OVA to (eg ESXi or vCenter)
govc/username = Username on the vSphere target to deploy to
govc/password = Password on the vSphere target to deploy to
govc/ova-location = URL location of the OVA appliance
Optional:
govc/port = optional non-standard port of the govc/url
govc/ova-type = used (along with govc/ova-version) to create custom rules
govc/ova-version = used (along with govc/ova-type) to create custom rules
Unlike govc, the OVFTool is capable of importing an OVA from a remote HTTP resource, so the OVA is not staged inside of the Context Container before deployment - no "stage-ova" task is required.
22.52.3.5.3. esxi-sddc-vcenter-deploy¶
This workflow deploys a vSphere vCenter OVA (VCSA) on to a vSphere ESXi node. The process requires the use of a RackN Context to run this workflow, and a Context container with govc in it. RackN provides a lightweight govc-only context container, or a larger vmware-tools context container that also includes govc.
The operator must specify the following Params for deploying the VCSA via the OVA.
govc/commands - set to import.ova
govc/insecure - either true or false depending on TLS certificates (self signed requires true)
govc/ova-location - the URL to download the VCSA OVA from - example: {{.ProvisionerURL}}/files/images/VMware-vCenter-Server-Appliance-7.0.0.10700-16749653_OVF10.ova
govc/ova-type - must be set to vcsa
govc/ova-version - must be set to 7 for the vSphere 7.x vCenter version
govc/username - must be set to the vSphere ESXi node account to authorize the deploy operation (usually root)
govc/password - the password of the ESXi node for authorizing the deploy operation
govc/url - the IP address or resolvable DNS host/domain name (FQDN) of the vSphere ESXi node to deploy the vCenter OVA to
ova/template-json - the Digital Rebar template that describes the vCenter deployment options - there are several "EXAMPLE" named templates in the vmware-lib content pack
In addition, the operator may specify creation of a local Datastore on the ESXi node to back the vCenter instance. See the govc-datastore-create Stage for more details on customizing this. A sketch Profile combining the Params above is shown below.
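A minimal sketch (the Profile name, OVA filename, template name, credentials, and target address are hypothetical - adjust them for your environment):

---
Name: "vcsa-deploy-example"
Description: "EXAMPLE - Params for the esxi-sddc-vcenter-deploy workflow"
Params:
  govc/commands:
    - "import.ova"
  govc/insecure: true
  govc/ova-location: "{{.ProvisionerURL}}/files/images/vcsa/VMware-vCenter-Server-Appliance-7.0.0.10700-16749653_OVF10.ova"
  govc/ova-type: "vcsa"
  govc/ova-version: "7"
  govc/username: "root"
  govc/password: "ChangeMe123"
  govc/url: "10.75.75.250"
  ova/template-json: "esxi-ewr1-vc01.json.tmpl"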
22.52.3.5.4. govc-cluster-create¶
This workflow builds the vCenter Datacenter and Cluster constructs, and enrolls the vSphere ESXi nodes in to the cluster.
22.52.3.5.5. govc-commands¶
Requires that the operator has created Contexts for runner and govc that can run the DRP Agent and govc. The runner context is used for starting the workflow on a fake machine, and the govc context is responsible for executing the govc commands and tooling.
Leaves the machine in a Runner Context, not on the machine itself.
Note
If the Param govc/ova-location is specified on the machine, the OVA will be downloaded to /ova/import.ova. To skip this behavior, set the Param govc/skip-ova-stage to true.
22.52.3.5.6. govc-datastore-manage¶
This workflow creates a datastore on a remote vSphere API node. The datastore creation is controlled primarily by two Params:
govc/datastore-create-name - sets the name of the DataStore
govc/datastore-create-disk - defines what disk to make the DataStore on
Either a _rule_ or a specific Device can be specified by the govc/datastore-create-disk Param. Supported _rules_ and disk device definition setting examples:
first_available
disk=t10.ATA_____Micron_M500DC_MTFDDAK120MBB_____________________14260DAD9402
The default is to use the _rule_ first_available.
The first_available rule attempts to filter out used disk devices, then chooses the first of any remaining disks that are unused.
Note
If using the govc command, you can find the disk information with the command govc host.esxcli storage core path list (after setting up the appropriate GOVC_ environment variables, of course).
22.52.3.5.7. govc-deploy-ova¶
This workflow first stages the specified OVA inside the container context, creates an appliance from the deployed OVA via the govc command, and then waits for the API services to become available.
This process can take upwards of 60 minutes to complete.
The deployment is controlled by the following Param settings.
OVA location param:
govc/ova-location = URL location of the OVA appliance
vSphere ESXi deployment target, required:
govc/url = URL for the vSphere target to deploy the OVA to (eg ESXi or vCenter)
govc/username = Username on the vSphere target to deploy to
govc/password = Password on the vSphere target to deploy to
govc/ova-location = URL location of the OVA appliance
Optional:
govc/port = optional non-standard port of the govc/url
govc/ova-type = used (along with govc/ova-version) to create custom rules
govc/ova-version = used (along with govc/ova-type) to create custom rules
The VCSA OVA file must be staged and made available, and referenced in the govc/ova-location Param. See the full documentation for additional configuration Params that are available.
Note
The govc-stage-ova Stage downloads and stages an OVA based on the govc/ova-location URL.
22.52.3.5.8. govc-dvs-create¶
This workflow builds the Distributed Virtual Switches on the vSphere ESXi nodes.
22.52.3.5.9. govc-vsan-build-and-claim¶
This workflow builds the VSAN volumes on the target vSphere ESXi nodes and then claims disks.