22.45. task-library - Core Task Library

The following documentation is for Core Task Library (task-library) content package at version v4.12.0-alpha00.78+gc037aaa40eb3ad853690ce178f9ab8a5bae4c436.

This content package is a collection of useful stages and operations for Digital Rebar. It also includes several handy workflows like the CentOS base, Fedora base, and Ubuntu base workflows. You can also find many handy stages in this package, such as the network-lldp stage, which can be added to your discovery workflow to capture additional networking information discovered via LLDP.

## Cluster Stages

These stages implement and augment Clusters, allowing operators to orchestrate machines into sequential or parallel operation.

## Inventory Stage

Convert gohai and other JSON data into a map that can be used for analysis, classification or filtering.

22.45.1. Object Specific Documentation

22.45.1.1. blueprints

The content package provides the following blueprints.

22.45.1.1.1. alerts-on-content-change

This blueprint calls the same-named task to raise Alerts when a Content pack event is raised using the Triggers.

It requires that operators define an Event Trigger that sets MergeData: true because the task relies on the Event metadata being passed as a param called meta.

Recommended Configuration:

```yaml
MergeDataIntoParams: true
Blueprint: alerts-on-content-change
Params.event-trigger/event-match: contents.*.*
Filter: "Params.cluster/tags=In(event-worker-pool) Endpoint="
FilterCount: 1
```

22.45.1.1.2. alerts-raise-from-events

This blueprint allows operators to raise alerts based on system events.

22.45.1.1.3. ansible-apply

Uses ansible/playbook-templates to run playbook templates against a machine or cluster.

If the blueprint is run against a cluster, then all machines in the cluster will run the playbook templates remotely through SSH from a single ansible call. An inventory file will be generated automatically.

If the blueprint is run against a machine or resource broker, the playbook will be run locally without SSH connection. The one exception to this is a machine with a Context. In this case, SSH will be attempted.

The Machine, WorkOrder, or Trigger using this blueprint must specify the playbook templates.

22.45.1.1.4. ansible-run-playbook-local-on-machine

Uses ansible-playbooks-local to run playbook(s) defined in Param ansible/playbooks in “local” mode on the selected machine(s).

Forces the ansible/connection-local to true.

There is a safe default so this can be used without a Param defined for testing.

22.45.1.1.5. cluster-reevaluate

This blueprint will cause the cluster-provision task to rerun on the cluster to pick up changes to the cluster.

22.45.1.1.6. cluster-run-blueprint-on-each-member

This blueprint/trigger can be used to run a blueprint on each member of a cluster.

This is an event trigger that requires the following parameters.

  • cluster/for-all-blueprint - defines the blueprint to run.

  • cluster/for-all-cluster - defines the cluster to pull machines from.

This is an event trigger that can also use the following parameters.

cluster/for-all-profile - defines a profile to add to the WorkOrder for additional parameters.

Firing the trigger through an event requires that the event contain the blueprint and cluster.

Applying the trigger/blueprint to a cluster requires the blueprint, but infers the cluster.
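For example, a trigger firing this blueprint might carry the following Params (a minimal sketch; the cluster, blueprint, and profile names here are hypothetical placeholders):

```yaml
# Illustrative Params for running a blueprint on every cluster member
cluster/for-all-blueprint: cost-calculator     # blueprint to run on each member
cluster/for-all-cluster: my-cluster            # cluster to pull machines from
cluster/for-all-profile: my-extra-params       # optional profile for the WorkOrder
```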

22.45.1.1.7. cost-calculator

Runs inventory-cost-calculator and inventory-cost-summation on the machines in question.

This can be used to regularly update/refresh the inventory/cost value for a machine, cluster or resource broker.

22.45.1.1.8. docker-context-run

This is a long running blueprint to run the docker-context plugin as a service.

To stop this blueprint, mark the machine running it as Not Runnable.

22.45.1.1.9. dr-server-add-content-from-git

This blueprint will check out a git repo and bundle the content. See the help on git-build-content-push for more details.

Based upon parameters, the system will put the content pack into the server or add the content pack to the catalog.

When used with the git-lab-trigger-webhook-push trigger, the repository param will be automatically populated when MergeData is set to true. This allows the webhook data to automatically flow into the blueprint.

Generally, this blueprint runs from the DRP endpoint self-runner and should use Params.cluster/tags=In(event-worker-pool) Endpoint= as a filter.

22.45.1.1.10. guacd-run

This is a long running blueprint to run guacd as a service.

To stop this blueprint, mark the resource_broker running it as Not Runnable.

22.45.1.1.11. manager-update-catalog

Runs the manager bootstrap work at some triggerable event. This is scheduled for nightly updates.

22.45.1.1.12. utility-endpoint-systems-check

This blueprint performs basic diagnostic checks on a Digital Rebar endpoint to ensure healthy operation of the system.

22.45.1.1.13. utility-update-ssh-keys

This blueprint will update the SSH keys set on the target machine using the ssh-access task.

22.45.1.2. params

The content package provides the following params.

22.45.1.2.1. alert/cluster

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the cluster is provided.

Dev Note: this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

22.45.1.2.2. alert/content

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the content is provided.

Dev Note: this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

22.45.1.2.3. alert/job

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the job is provided.

Dev Notes:

  • this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

  • generally, use {{.CurrentJob}} to retrieve this information

22.45.1.2.4. alert/level

This allows operators to override the default level via Param in tasks that raise alerts.

For example, alerts-on-content-change uses the default value, but operators may define a WARN level by setting this value in the calling trigger.
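For instance, the calling trigger could carry the override like this (a minimal sketch of just the relevant Params fragment):

```yaml
# Hypothetical trigger fragment that raises the alert level to WARN
Params:
  alert/level: WARN
```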

22.45.1.2.5. alert/machine

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the machine is provided.

Dev Notes:

  • this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

  • generally, use {{.Machine.Name}} to retrieve this information

22.45.1.2.6. alert/source

Used when reposting alerts from events; allows the event to identify a source.

In this instance, a backlink to the source is provided.

Dev Note: this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

22.45.1.2.7. alert/task

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the task is provided.

Dev Note: this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

22.45.1.2.8. alert/trigger

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the trigger is provided.

Dev Notes:

  • this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

  • generally, use {{ get .WorkOrder.Meta "triggered-by" }} to retrieve this information

22.45.1.2.9. alert/user

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the user is provided.

Dev Note: this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

22.45.1.2.10. alert/workorder

Used by alerts; this Param allows alert creators to provide useful back-reference information.

In this instance, a backlink to the workorder is provided.

Dev Note: this is intended to be used by task developers when raising alerts from tasks. It does not influence configuration or behavior of an alert.

22.45.1.2.11. alerts-bootstrap-handled

When the bootstrap error handler runs, it will mark that it ran by setting this parameter to true. When the task is retried, it will see that the handler has already run and return to the original task to see if it can continue.
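The retry guard can be sketched in shell; the Param read/write here is simulated with a plain variable, whereas the real handler reads and sets alerts-bootstrap-handled through DRP:

```shell
# Simulated idempotency guard: alerts-bootstrap-handled prevents the
# error handler from running twice when the task is retried.
handled="false"   # stand-in for reading the alerts-bootstrap-handled Param

if [ "$handled" = "true" ]; then
  echo "handler already ran; returning to original task"
else
  echo "running bootstrap error handler"
  handled="true"  # stand-in for setting the Param to true
fi
echo "$handled"
```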

22.45.1.2.12. ansible-inventory

Holds the value from the ansible-inventory task.

22.45.1.2.13. ansible/connection-local

This parameter forces the ansible system to use localhost non-ssh ansible playbook execution.

22.45.1.2.14. ansible/output

Generic object that is constructed by the ansible-apply task if the playbook creates a file called [Machine.Name].json.

For example, the following task will create the correct output:

```yaml
- name: output from playbook
  local_action:
    module: copy
    content: "{{{{ myvar }}}}"
    dest: "{{ .Machine.Name }}.json"
```

22.45.1.2.15. ansible/playbook-templates

This is an array of strings where each string is a template that renders an Ansible Playbook. They are run in sequence to allow building inventory dynamically during a run.

Output from a playbook can be passed to the next one in the list by setting the ansible/output value. This value gets passed as a variable into the next playbook as part of the params on the machine object json.

22.45.1.2.16. ansible/playbooks

Used by the ansible-playbooks-local task.

Runs the provided playbooks in order from the json array. The array contains structured objects with further details about the ansible-playbook action.

Each playbook MUST be stored in either:

1. a git location accessible from the machine running the task, or
2. a DRP template accessible to the running DRP.

The following properties are included in each array entry:

  • playbook (required): name of the playbook passed to the ansible-playbook cli

  • name (required): determines the target of the git clone

  • either (required, mutually exclusive):
    • repo: path, relative to the machine, from which the playbook can be git cloned

    • template: name of a DRP template containing a localhost ansible playbook

  • path (optional): if playbooks are nested within a single repo, the path to change into to reach the playbook

  • commit (optional): commit or tag to check out from the git history

  • data (optional, boolean): if true, use items provided in args

  • args (optional): additional arguments to be passed into the ansible-playbook cli

  • verbosity (optional, boolean): if false, suppresses output from ansible using no_log.

  • extravars (optional, string): name of a template to expand as part of the extra-vars. See below.

For example

```json
[
  {
    "playbook": "become",
    "name": "become",
    "repo": "https://github.com/ansible/test-playbooks",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  },
  {
    "playbook": "test",
    "name": "test",
    "template": "ansible-playbooks-test-playbook.yaml.tmpl",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  }
]
```

Using extravars allows pulling data from Digital Rebar template expansion into github-based playbooks. These will show up as top-level variables. For example:

```yaml
foo: '{{ .Param "myvar" }}'
bar: '{{ .Param "othervar" }}'
```

22.45.1.2.17. bootstrap-download-contexts

This is an array of strings where each string is a context that should be downloaded / updated during the bootstrap-context task.

This is used by the bootstrapping system to add context images to the DRP Endpoint.

By default, no contexts are specified. If the operator sets this Param on the self-runner Machine object (either directly or via a Profile), then runs one of the bootstrap workflows, the context images will be installed.

This parameter is Composable and Expandable. This allows other profiles to incrementally and dynamically add to the list.

An example workflow is universal-bootstrap.

Example setting in YAML:

```yaml
bootstrap-download-contexts:
  - drpcli-runner
  - context2
```

Or in JSON:

```json
{ "bootstrap-download-contexts": [ "drpcli-runner", "context2" ] }
```

22.45.1.2.18. bootstrap-network-reservations

This Param can be set to an array of Reservation type objects. If it is added to the bootstrap-network profile, then during the bootstrap Stage, the defined reservations will be added to the system.

The easiest way to generate a valid reservation set of objects is to create the reservation on an existing DRP Endpoint, then show the JSON structure. An example CLI command for the reservation named 192.168.124.101 is shown below:

```sh
drpcli reservations show 192.168.124.101
```

Multiple Reservations may be specified (e.g. drpcli reservations list to dump all reservations on a given DRP Endpoint). Each Reservation will be added to the system if it does not already exist.

Example structures for two reservations are shown below. NOTE that in real-world use, more options or values may need to be set to make the reservation operationally correct for a given environment.

YAML:

```yaml
bootstrap-network-reservations:
  - Addr: 192.168.124.222
    Token: "01:02:03:04:05:00"
  - Addr: 192.168.124.223
    Token: "01:02:03:04:05:01"
```

JSON:

```json
[
  { "Addr": "192.168.124.222", "Token": "01:02:03:04:05:00" },
  { "Addr": "192.168.124.223", "Token": "01:02:03:04:05:01" }
]
```

Generally speaking this process is used for initial DRP Endpoint installation with the install.sh installation script. Preparing the Subnets and Reservations in either a Content Pack or a Profile that is applied to the system at installation time will enable the creation of the Objects.

Example installation script invocation that would use this bootstrap process. It assumes that network.json is provided by the operator with the network Subnet and/or Reservation definitions:

```sh
# you must create "network.json" which should be a valid Profile object, containing
# the appropriate Params and object information in each param as outlined in the
# examples in the bootstrap-network-subnets and bootstrap-network-reservations
# Params "Documentation" field
curl -fsSL get.rebar.digital/stable | bash -s -- install --universal --initial-profiles="./network.json"
```

22.45.1.2.19. bootstrap-network-subnets

This Param can be set to an array of Subnet type objects. If it is added to the bootstrap-network profile, then during the bootstrap Stage, the defined subnets will be added to the system.

The easiest way to generate a valid subnet object is to create the subnet on an existing DRP Endpoint, then show the JSON structure. An example CLI command for the subnet named kvm-test is shown below:

```sh
drpcli subnets show kvm-test
```

Multiple Subnets may be specified (e.g. drpcli subnets list to dump all subnets on a given DRP Endpoint). Each Subnet will be added to the system if it does not already exist.

Example structures for two subnets are shown below.

!!! note

In real world use, more options or values may need to be set to make the subnet operationally correct for a given environment.

YAML:

```yaml
- Subnet: 10.10.10.1/24
  Name: subnet-10
  ActiveEnd: 10.10.10.254
  ActiveStart: 10.10.10.10
  OnlyReservations: false
  Options:
    - Code: 1
      Description: Netmask
      Value: 255.255.255.0
    - Code: 3
      Description: DefaultGW
      Value: 10.10.10.1
    - Code: 6
      Description: DNS Server
      Value: 1.1.1.1
    - Code: 15
      Description: Domain
      Value: example.com
- Subnet: 10.10.20.1/24
  Name: subnet-20
  ActiveEnd: 10.10.20.254
  ActiveStart: 10.10.20.10
  OnlyReservations: false
  Options:
    - Code: 1
      Description: Netmask
      Value: 255.255.255.0
    - Code: 3
      Description: DefaultGW
      Value: 10.10.20.1
    - Code: 6
      Description: DNS Server
      Value: 1.1.1.1
    - Code: 15
      Description: Domain
      Value: example.com
```

JSON:

```json
[
  {
    "Subnet": "10.10.10.1/24",
    "Name": "subnet-10",
    "ActiveEnd": "10.10.10.254",
    "ActiveStart": "10.10.10.10",
    "OnlyReservations": false,
    "Options": [
      { "Code": 1, "Value": "255.255.255.0", "Description": "Netmask" },
      { "Code": 3, "Value": "10.10.10.1", "Description": "DefaultGW" },
      { "Code": 6, "Value": "1.1.1.1", "Description": "DNS Server" },
      { "Code": 15, "Value": "example.com", "Description": "Domain" }
    ]
  },
  {
    "Subnet": "10.10.20.1/24",
    "Name": "subnet-20",
    "ActiveEnd": "10.10.20.254",
    "ActiveStart": "10.10.20.10",
    "OnlyReservations": false,
    "Options": [
      { "Code": 1, "Value": "255.255.255.0", "Description": "Netmask" },
      { "Code": 3, "Value": "10.10.20.1", "Description": "DefaultGW" },
      { "Code": 6, "Value": "1.1.1.1", "Description": "DNS Server" },
      { "Code": 15, "Value": "example.com", "Description": "Domain" }
    ]
  }
]
```

Generally speaking this process is used for initial DRP Endpoint installation with the install.sh installation script. Preparing the Subnets and Reservations in either a Content Pack or a Profile that is applied to the system at installation time will enable the creation of the Objects.

Example installation script invocation that would use this bootstrap process. It assumes that network.json is provided by the operator with the network Subnet and/or Reservation definitions:

```sh
# you must create "network.json" which should be a valid Profile object, containing
# the appropriate Params and object information in each param as outlined in the
# examples in the bootstrap-network-subnets and bootstrap-network-reservations
# Params "Documentation" field
curl -fsSL get.rebar.digital/stable | bash -s -- install --universal --initial-profiles="./network.json"
```

22.45.1.2.20. bootstrap-tools

This is an array of strings where each string is the name of a package to be installed for the base Operating System that is running on the DRP Endpoint.

This is used by the bootstrapping system to add packages to the DRP Endpoint.

By default no packages are specified. If the operator sets this Param on the self-runner Machine object (either directly or via a Profile), then runs one of the bootstrap workflows, the packages will be installed.

An example workflow is universal-bootstrap.

Example setting in YAML:

```yaml
bootstrap-tools:
  - package1
  - package2
```

Or in JSON:

```json
{ "bootstrap-tools": [ "package1", "package2" ] }
```

22.45.1.2.21. broker-pool/allocate

If true (default), use the allocate/release methods when getting machines from the source pool. If false, use the add/remove methods when getting machines from the source pool.

This flag allows operators to choose between pulling machines for multiple clusters from a shared pool (set to true) or creating new pools when using the Pool Broker (set to false) to create new clusters.
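A shared-pool setup might look like this in a cluster profile (a minimal sketch; the pool name is a hypothetical placeholder):

```yaml
# Pull members for multiple clusters from one shared pool
broker-pool/allocate: true
broker-pool/pool: shared-baremetal-pool
```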

22.45.1.2.22. broker-pool/pool

The pool to pull machines from.

Will use default if not set.

22.45.1.2.23. broker/mach-name

The cluster gives a machine a name. This parameter tracks that name.

This is needed because the Pool Broker does not rename machines; it is used to match the requested machine name back to the actual machine allocated.

22.45.1.2.24. broker/name

The broker that provided these machines.

22.45.1.2.25. broker/pass-params-to-workorder

When the cluster-provision task creates a WorkOrder, these parameters will be included in that WorkOrder.

If the Param is not defined for the cluster, then it will be skipped.

This allows pipeline developers to include needed information for the broker from the cluster data.

!!! note

The following param prefixes are always included and do not need to be added to this param: broker/ and cluster/.

22.45.1.2.26. broker/set-color

Color to use for broker/set-icon when creating machines using a broker.

22.45.1.2.27. broker/set-context

Sets the BaseContext to build machines with when using the broker-provision task.

Only applies for the Context Broker.

22.45.1.2.28. broker/set-icon

Icon to use when creating machines using a broker

Typical values:

  • aws: aws

  • azure: microsoft

  • default: cloud

  • google: google

  • linode: linode

  • digitalocean: digital ocean

  • oracle: database

22.45.1.2.29. broker/set-pipeline

When the broker creates machines, it assigns a pipeline to start during the join process. This will determine the operational path that the new machine follows.

This value sets the pipeline that will be started when the machine is created. Typically, the pipeline includes key configuration information and additional tasks to be injected during the configuration process.

22.45.1.2.30. broker/set-workflow

When the broker creates machines, it assigns a workflow to start during the join process. For cloud machines, this is typically universal-start (default). For physical machines, this is typically universal-discover.

For Pipelines, this value must be one of the chained workflow states; consequently, the typical values are universal-start and universal-discover.

22.45.1.2.31. broker/state

Generic object that is constructed by the brokers and stored on the broker machine to track general information.

22.45.1.2.32. broker/tfinfo

Generic object that is constructed by the brokers and stored on the broker machine to track terraform information.

22.45.1.2.33. broker/type

The broker type that provided these machines.

22.45.1.2.34. cloud/cost-lookup

Used by Task cloud-inventory to set cloud/cost.

This is intended for budgetary use only. Actual costs should be determined by inspecting the API for the cloud.

22.45.1.2.35. cloud/meta-apis

Used by the cloud-inventory and cloud-detect-meta-api tasks.

Keys:

  • clouds is the ordered list of clouds to test during cloud-detect-meta-api

  • id is the field to use for the instance-id

  • apis is a dictionary per cloud that maps params to bash commands to retrieve the param

For each apis entry:

  • cloud/instance-id (required):

  • cloud/instance-type (optional):

  • cloud/placement/availability-zone (optional):

  • cloud/public-ipv4 (optional):

apis entries are bash commands that will yield the expected value when evaluated as VAR=$([api command]).
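This evaluation contract can be sketched in shell. The command string here is a stand-in for a real entry (which would typically curl a cloud metadata service), and the instance id is made up:

```shell
# Each apis entry is a bash command whose stdout becomes the param value,
# evaluated with the VAR=$([api command]) pattern.
api_command='echo i-0123456789abcdef0'   # stand-in for e.g. a metadata-service curl

INSTANCE_ID=$(eval "$api_command")       # the VAR=$([api command]) evaluation
echo "cloud/instance-id = ${INSTANCE_ID}"
```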

22.45.1.2.36. cluster/count

Used by cluster automation to determine the size of a cluster. More advanced clusters may use this field to define counts for workers in a multi-tiered cluster.

Defaults to 3. Set to 0 to drain the cluster.

This is used by v4.8+ cluster pipelines to build out the cluster size. Check the specific cluster for details about whether it is used.

22.45.1.2.37. cluster/destroy

Used by cluster automation to destroy the cluster.

22.45.1.2.38. cluster/for-all-blueprint

Name of the blueprint to apply to all members of a cluster.

22.45.1.2.39. cluster/for-all-cluster

Name of the cluster whose members the blueprint will be applied to.

22.45.1.2.40. cluster/for-all-profile

Name of a profile to add when applying a blueprint to all members of a cluster.

22.45.1.2.41. cluster/machine-types

For v4.8 Cluster Pattern, this defines a list of machine types that will be added or removed by the plan.

22.45.1.2.42. cluster/machines

For v4.8 Cluster Pattern, this defines a map of machine types and their names. Each type can have additional parameters.

The purpose of this map is to uniquely identify the machines and allows the machines to be rotated in a controlled way.

If omitted the following items will be set via the designated Param:

  • pipeline: sets the initial pipeline or assigns from broker/set-pipeline [has safe default]

  • workflow: set the initial workflow or assigns from broker/set-workflow [has safe default]

  • icon: sets the initial Meta.icon or assigns from broker/set-icon [has safe default]

  • color: sets the initial Meta.color or assigns from broker/set-color [has safe default]

  • tags: [] but automatically includes cluster/profile

  • meta: metadata to set on the machine by cluster-add-params; if Meta.icon or Meta.color is set, it will overwrite the icon/color.

  • Params (note capitalization!): parameters to use in Terraform templates. Also set on the machine during cluster-add-params.

While most of these values are used only during terraform-apply when building machines, meta and Params are used during cluster-add-params to update Meta and Params of the provisioned machines.

Example:

```yaml
machine:
  names:
    - cl_awesome_1
    - cl_awesome_2
    - cl_awesome_3
  Params:
    broker/set-workflow: universal-application-application-base
    broker/set-pipeline: universal-start
    broker/set-icon: server
    broker/set-color: black
```

22.45.1.2.43. cluster/profile

Name of the profile used by the machines in the cluster for shared information. Typically, this is simply self-referential (it contains the name of the containing profile) to allow machines to know the shared profile.

Note: the default value was removed in v4.7. In versions 4.3-4.6, a default value was provided to make it easier for operators to use clusters without needing to create a cluster/profile value. While helpful, it led to non-obvious cluster errors that were difficult to troubleshoot.

22.45.1.2.44. cluster/provision-kill-switch

Used as an emergency circuit breaker action to kill the cluster-provision Task during execution when in the machines await loop.

Set this Param to true to cause the next loop iteration to fail the task and stop processing.

22.45.1.2.45. cluster/provision-timeout

Used by cluster provision as a safety to determine how long to wait before failing the Task. By default this is set to 1 hour, and can be set to a user defined value in Seconds.

If set to -1, then no timeout evaluation will ever occur.

22.45.1.2.46. cluster/tags

For v4.8 Cluster Pattern, this defines tags added to the machine during construction by the broker.

22.45.1.2.47. cluster/use-single-wait

Flag to indicate that the cluster should do one wait

When true, the cluster will wait for the maximum timeout for the requested work order to complete. This can cause the requesting task to appear hung.

If false, the cluster will loop using a 10 second wait period for requested work order(s) to complete.

22.45.1.2.48. cluster/wait-filter

The Task cluster-wait-for-members requires a filter condition to match a Machine object against to determine if the Machine has either succeeded or failed the wait conditions.

By default the filter wait condition is set to:

  • Or(Runnable=Eq(false),WorkflowComplete=Eq(true))

For Clusters that utilize an OS-install Pipeline, the Machine objects will be set to Runnable: false when they reboot into the OS Installer. In those cases, the cluster/wait-filter must be adjusted so it does not fail out when the (expected) Runnable: false condition is set.

For OS-install Pipelines the following filter MAY work. Future DRP versions may have better Machine fields to support this, rather than relying on overloading the Runnable field.

  • And(Workflow=Eq(universal-runbook),WorkflowComplete=Eq(true),BootEnv=Eq(local))

!!! note

The above filter assumes that universal-runbook is the final chained workflow in the Pipeline. This must be adjusted to be the correct final chained workflow.

!!! note

The above filter will NOT correctly catch a machine that fails out due to Workflow/Task errors, which set the machine to Runnable: false. The Pipeline/Workflow chains must succeed for the cluster to complete building.

22.45.1.2.49. cluster/wait-for-members

Used by the cluster-provision task to inject cluster-wait-for-members so the cluster waits for members to be workflow complete.

Normally, pipeline designers can include cluster-wait-for-members one or more times in flexiflow; this Param provides an additional option to make it built into cluster-provision.

22.45.1.2.50. context-broker/docker-context-version

Version of docker-context to use

22.45.1.2.51. context-broker/member-name

Used by docker-context to identify which system should create a machine

22.45.1.2.52. context-broker/tag

Used by broker-context-manipulate to find a machine to host containers

22.45.1.2.53. context-broker/tags

Used by manipulate-context-broker to distribute containers to members

22.45.1.2.54. context/container-info

Used by docker-context to provide details on the currently running container.

22.45.1.2.55. context/image-exists-counts

Used by docker-context to determine if an image already exists.

22.45.1.2.56. context/image-name

Used by docker-context for the name of the image.

22.45.1.2.57. context/image-path

Used by docker-context to provide the path in the local filestore for the image to upload.

22.45.1.2.58. context/name

Used by context-set as the target for the Param.BaseContext value.

Defaults to "" (Machine context)

22.45.1.2.59. dr-server/initial-password

Defaults to r0cketsk8ts

22.45.1.2.60. dr-server/initial-user

Defaults to multi-site-manager

22.45.1.2.61. dr-server/initial-version

The version DRP to install.

Typically: stable or tip, can also be a specific v4.x.x version from the RackN catalog.

This defaults to stable.

22.45.1.2.62. dr-server/install-drpid

If not set, will use site-[Machine.Name]

22.45.1.2.63. dr-server/install-license

If not set, it will attempt to update the current server’s license.

22.45.1.2.64. dr-server/skip-port-check

If true, includes the –skip-port-check flag during installation

This defaults to false.

22.45.1.2.65. dr-server/update-content

Boolean to indicate that the content should be updated.

22.45.1.2.66. drive-signatures

A map of drive to SHA1 signatures.

22.45.1.2.67. inventory/CPUCores

From inventory/data, used for indexable data.

22.45.1.2.68. inventory/CPUSpeed

From inventory/data, used for indexable data.

22.45.1.2.69. inventory/CPUType

From inventory/data, used for indexable data.

22.45.1.2.70. inventory/CPUs

From inventory/data, used for indexable data.

22.45.1.2.71. inventory/DIMMSizes

From inventory/data, used for indexable data.

22.45.1.2.72. inventory/DIMMs

From inventory/data, used for indexable data.

22.45.1.2.73. inventory/DisplayBusInfo

From inventory/data, used for indexable data.

22.45.1.2.74. inventory/DisplayHandle

From inventory/data, used for indexable data.

22.45.1.2.75. inventory/DisplayPhysId

From inventory/data, used for indexable data.

22.45.1.2.76. inventory/DisplayProduct

From inventory/data, used for indexable data.

22.45.1.2.77. inventory/DisplayVendor

From inventory/data, used for indexable data.

22.45.1.2.78. inventory/DisplayVersion

From inventory/data, used for indexable data.

22.45.1.2.79. inventory/Displays

From inventory/data, used for indexable data.

22.45.1.2.80. inventory/Family

Hardware Family based on DMI System.

See inventory/os-family for operating system family.

From inventory/data, used for indexable data.

22.45.1.2.81. inventory/Hypervisor

From inventory/data, used for indexable data.

22.45.1.2.82. inventory/Manufacturer

From inventory/data, used for indexable data.

22.45.1.2.83. inventory/NICDescr

From inventory/data, used for indexable data.

22.45.1.2.84. inventory/NICInfo

From inventory/data, used for indexable data.

22.45.1.2.85. inventory/NICMac

From inventory/data, used for indexable data.

22.45.1.2.86. inventory/NICSpeed

From inventory/data, used for indexable data.

This reports the speed as a number and the duplex state as true/false.

22.45.1.2.87. inventory/NICs

From inventory/data, used for indexable data.

22.45.1.2.88. inventory/ProductName

From inventory/data, used for indexable data.

22.45.1.2.89. inventory/RAM

From inventory/data, used for indexable data.

22.45.1.2.90. inventory/RaidControllers

From inventory/data, used for indexable data.

22.45.1.2.91. inventory/RaidDiskSizes

From inventory/data, used for indexable data.

22.45.1.2.92. inventory/RaidDiskStatuses

From inventory/data, used for indexable data.

22.45.1.2.93. inventory/RaidDisks

From inventory/data, used for indexable data.

22.45.1.2.94. inventory/RaidTotalDisks

From inventory/data, used for indexable data.

22.45.1.2.95. inventory/SerialNumber

From inventory/data, used for indexable data.

22.45.1.2.96. inventory/TpmPublicKey

From inventory/data, used for indexable data.

This is the base64 encoded public key from the TPM.

If an error occurs, it will contain the error.

  • no device - no TPM detected

  • no tools - no tools were available to install

22.45.1.2.97. inventory/check

Using BASH REGEX, define a list of inventory data fields to test with Regular Expressions. Fields are tested in sequence; the first to fail halts the script.
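As a sketch, the check behaves like a sequential regex gate. The field names and patterns below are illustrative, not the task's real schema:

```shell
#!/usr/bin/env bash
# Sample collected inventory (field -> value); values are made up
declare -A inventory=(
  [Manufacturer]="Dell Inc."
  [RAM]="32768"
)

# Checks to apply (field -> BASH regex); tested in sequence
declare -A checks=(
  [Manufacturer]="^Dell"
  [RAM]="^[0-9]+$"
)

for field in "${!checks[@]}"; do
  if [[ ! "${inventory[$field]}" =~ ${checks[$field]} ]]; then
    echo "FAIL: $field='${inventory[$field]}' does not match '${checks[$field]}'"
    exit 1   # the first failure halts the script
  fi
  echo "PASS: $field"
done
```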

22.45.1.2.98. inventory/collect

Map of commands to run to collect Inventory input. Each group includes the fields with jq maps to store. For example, adding drpcli gohai will use the gohai JSON as input. Then jq will be run with the provided values to collect inventory into inventory/data as a simple map.

To work correctly, Commands should emit JSON.

Special Options:

  • Change the command to parse JSON from other sources

  • Add JQargs to pass extra jq arguments before the parse string

Gohai example:

```json
{
  "gohai": {
    "Command": "drpcli gohai",
    "JQargs": "",
    "Fields": {
      "RAM": ".System.Memory.Total / 1048576 | floor",
      "NICs": ".Networking.Interfaces | length"
    }
  }
}
```
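The jq filters in the example can be exercised directly; the gohai-style JSON below is a made-up sample, not real gohai output:

```shell
# Feed a gohai-like JSON snippet (illustrative values) through the same
# jq filters that inventory/collect would apply.
json='{"System":{"Memory":{"Total":34359738368}},"Networking":{"Interfaces":[{"Name":"eth0"},{"Name":"eth1"}]}}'

ram=$(echo "$json" | jq '.System.Memory.Total / 1048576 | floor')
nics=$(echo "$json" | jq '.Networking.Interfaces | length')

echo "RAM=$ram NICs=$nics"
```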

22.45.1.2.99. inventory/cost

Tracks the estimated daily cost for the instance.

This value can be manually set per machine.

For machines with cloud/provider set, cloud-inventory will use the cloud/cost-lookup table to set this value.

22.45.1.2.100. inventory/data

Stores the data collected by the fields set in inventory/collect. If inventory/integrity is set to true, this is also used as the comparison data.

22.45.1.2.101. inventory/df-pwd

Used to calculate the available disk space for the runner.

Determined during the inventory-collect task by running df -k $(pwd) | tail -n1 | awk '{print $4}'

22.45.1.2.102. inventory/esxi-vib-list

Updated by inventory-os task.

Contains the list of VIBs installed on the ESXi system when it ran.

22.45.1.2.103. inventory/flatten

Creates each inventory value as a top-level Param. This is needed if you want to filter machines in the API by inventory data; for example, when using Terraform filters.

This behavior is very helpful for downstream users of the inventory params because it allows them to be individually retrieved and searched.
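As an illustration of the flattening idea (not the task's actual implementation), a nested inventory/data map can be expanded into top-level keys with jq; the sample data is made up:

```shell
# Illustrative only: flatten a nested inventory/data map into
# top-level "inventory/<key>=<value>" pairs.
data='{"RAM":32768,"NICs":2,"Manufacturer":"Dell Inc."}'

echo "$data" | jq -r 'to_entries[] | "inventory/\(.key)=\(.value)"' | sort
```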

!!! warning

This will create LOTS of Params on machines. We recommend that you define Params to match fields instead of relying on ad hoc Params.

22.45.1.2.104. inventory/integrity

Allows operators to compare new inventory/data to the stored inventory/data on the machine. If true and the values do not match (after the first run), then the Stage will fail.

22.45.1.2.105. inventory/os-class

Updated by inventory-os task.

Determined from the $osclass value set in the setup.tmpl template

22.45.1.2.106. inventory/os-family

Updated by inventory-os task.

Determined from the $osfamily value set in the setup.tmpl template

22.45.1.2.107. inventory/os-version

Updated by inventory-os task.

Determined from the $osversion value set in the setup.tmpl template

22.45.1.2.108. inventory/tpm-device

The device to use to query the TPM.

Defaults to /dev/tpm0

22.45.1.2.109. inventory/tpm-fail-on-notools

If set to true, the system will fail if the TPM tools are not present.

Defaults to false.

22.45.1.2.110. manager/allow-bootstrap-manager

This should be set to false in the global profile to turn off auto-updating.

22.45.1.2.111. manager/turn-on-manager

Boolean to indicate if the manager flag should be turned on

22.45.1.2.112. manager/update-catalog

Boolean to indicate if the catalog should be updated

22.45.1.2.113. migrate-machine/complete

This should not be set manually.

22.45.1.2.114. migrate-machine/new-endpoint-token

This is a token for the new endpoint that will create the machine.

22.45.1.2.115. migrate-machine/new-endpoint-url

More to come

22.45.1.2.116. migrate-machine/old-endpoint-token

This is used by the machine-migrate task to clean up the old endpoint after migrating to the new endpoint. Do not manually add the parameter.

22.45.1.2.117. migrate-machine/old-endpoint-url

This is used by the machine-migrate task to clean up the old endpoint after migrating to the new endpoint. Do not manually add the parameter.

22.45.1.2.118. migrate-machine/skip-content-check

Setting the parameter to true will skip the content check. It is recommended that both endpoints have the same content versions when migrating.

22.45.1.2.119. migrate-machine/skip-profiles

Setting the parameter to true will skip checking and creating profiles on the new endpoint. It is recommended that both endpoints have the same profiles when migrating.

22.45.1.2.120. network/firewall-ports

Map of ports to open for Firewalld including the /tcp or /udp filter.

Skip: set the array to empty ([]) to skip this task. Disable Firewall: including "*/*" will disable the firewall
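A rough sketch of how such a port list might be expanded into firewall-cmd calls; the ports are illustrative and the commands are echoed rather than executed:

```shell
# Illustrative port list; in the real Param this comes from network/firewall-ports
ports='["8092/tcp","8091/tcp","67/udp"]'

if [ "$(echo "$ports" | jq 'length')" -eq 0 ]; then
  echo "empty list: skip the firewall task"
elif echo "$ports" | jq -e 'index("*/*")' >/dev/null; then
  echo "\"*/*\" present: disable the firewall"
else
  echo "$ports" | jq -r '.[]' | while read -r p; do
    echo firewall-cmd --permanent --add-port="$p"   # echoed, not run
  done
fi
```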

22.45.1.2.121. network/lldp-skip-types

Allows operators to skip LLDP operations for some machine types where LLDP is not commonly needed.

  • Add "always" to always skip (never run) LLDP

  • Add "never" to never skip (always run) LLDP

  • Other machine/type options are: machine, container, virtual, switch, storage.

Default skips are container and virtual

22.45.1.2.122. network/lldp-sleep

Sleep is required in LLDP process to ensure that switches have time to respond to the LLDP request. Depending on your environment, this could be drastically reduced.

If you want to SKIP LLDP, please use network/lldp-skip-types.

22.45.1.2.123. profile-cleanup-selection-pattern

This Param is an array of wildcard patterns of Profile names that should be deleted from a Machine object.

If no values are specified, no profiles will be removed from the system. The Task profile-cleanup can also optionally skip removing the Profiles if profile-cleanup-skip is set to true. Setting the skip to true will still print the pattern-matched profiles on the given machine before exiting with a success (zero) code. This allows for development testing to determine (from the Job Log contents) which Profiles would have been removed from the system.

Regex patterns are PCRE as implemented by jq's match() function; see the jq documentation for details.

Please note that to explicitly match a single Profile name, you must anchor the pattern with a begin/end anchor, like:

  • ^profile-name-to-match$

Not doing so means a selection pattern like:

  • foo-bar

will incorrectly match any Profile name that contains it, for example:

  • baz-foo-bar

  • foo-bar-blatz

The correct exact match for a Profile named foo-bar would be formed like:

  • ^foo-bar$

By default no Profile names or patterns are specified.
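The anchoring behavior can be demonstrated with jq (test(), like match(), uses PCRE), using the example profile names above:

```shell
# Profile names from the anchoring example above
profiles='["foo-bar","baz-foo-bar","foo-bar-blatz"]'

# Unanchored pattern matches every name containing "foo-bar"
echo -n "unanchored: "; echo "$profiles" | jq -c 'map(select(test("foo-bar")))'

# Anchored pattern matches only the exact name
echo -n "anchored:   "; echo "$profiles" | jq -c 'map(select(test("^foo-bar$")))'
```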

22.45.1.2.124. profile-cleanup-skip

This Param defines if the profile(s) specified by the pattern(s) in the Param profile-cleanup-selection-pattern should be removed or not. This is intended as a validity/verification check when adding new patterns for removing Profiles.

Set this Param to true to prevent the profiles matched by the profile-cleanup-selection-pattern Param from being removed from the system.

When set to true all matching Patterns will be output in the Job Log prior to the Task exiting with a success (zero) code. This allows for development/debug testing of the wildcard patterns being used, without actually removing the Profiles from the Machine object.

The default value of false will remove any Profiles that match on the system based on the Param profile-cleanup-selection-pattern wildcard matches.

22.45.1.2.125. reset-workflow

Workflow to set before rebooting system.

22.45.1.2.126. rsa/key-name

File name of the RSA key

22.45.1.2.127. rsa/key-private

Private SSH Key (secure)

No default is set.

22.45.1.2.128. rsa/key-public

Public SSH Key.

No default is set.

22.45.1.2.129. rsa/key-user

SSH Key User.

22.45.1.2.130. slack/message

JSON formatted data for Slack Webhook postings.

Must be formatted as Slack-acceptable JSON.

Uses .ParamExpand so that operators can insert information into the message without having to customize the task. This allows you to dynamically embed Params into Slack messages.

For example, using a Trigger with MergeDataIntoParams: true exposes properties from the Object causing the Event. If that Event is an Alert, then you can include the Level and Name of the Alert in your Slack message: {"text":"{{.Param \"Level\"}} Alert {{.Param \"Name\"}} from Digital Rebar at {{.ApiURL}}"}

See: <https://api.slack.com/messaging/sending>
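A minimal sketch of building and validating such a payload locally; the message text is made up and the webhook URL is a placeholder, so the curl line is shown only as a comment:

```shell
# Illustrative payload; jq -e validates that it is well-formed JSON
payload='{"text":"WARN Alert disk-low from Digital Rebar"}'

echo "$payload" | jq -e '.text' >/dev/null && echo "payload OK"

# Real posting (SLACK_WEBHOOK_URL is a placeholder for the slack/service-url value):
# curl -X POST -H 'Content-type: application/json' --data "$payload" "$SLACK_WEBHOOK_URL"
```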

22.45.1.2.131. slack/service-url

URL target for Slack App webhooks.

!!! warning

The information contained in slack/service-url is sensitive and should not be shared or posted publicly. DO NOT CHECK IT INTO CONTENT PACKS.

See: <https://api.slack.com/messaging/sending>

22.45.1.2.132. storage/mount-devices

## Mount Attached Storage

Ordered list of devices to attempt mounting from the OS.

The storage/mount-devices task will attempt to mount all the drives in the list in order. If the desired mount point is already in use then the code will skip attempting to assign it.

This design allows operators to specify multiple mount points or have a single mount point with multiple potential configurations.

  • rebuild will wipe and rebuild the mount

  • reset will rm -rf all files if UUID changes

Example:

```json
[
  {
    "disk": "/dev/mmcblk0",
    "partition": "/dev/mmcblkp1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "example"
  },
  {
    "disk": "/dev/sda",
    "partition": "/dev/sda1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "put something here"
  }
]
```
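A minimal sketch of the skip-if-in-use logic, assuming a JSON list like the example; device names are illustrative and mount commands are echoed rather than executed:

```shell
# Two entries competing for the same mount point; the second is skipped
entries='[{"partition":"/dev/sda1","mount":"/mnt/storage"},{"partition":"/dev/sdb1","mount":"/mnt/storage"}]'

claimed=" "
while read -r e; do
  part=$(echo "$e" | jq -r '.partition')
  mnt=$(echo "$e" | jq -r '.mount')
  # skip if the mount point is already mounted or claimed by an earlier entry
  if grep -q " $mnt " /proc/mounts || [[ "$claimed" == *" $mnt "* ]]; then
    echo "skip $part: $mnt already in use"
  else
    echo "mount $part $mnt"   # the real task would mount (and mkfs if rebuild)
    claimed+="$mnt "
  fi
done < <(echo "$entries" | jq -c '.[]')
```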

22.45.1.2.133. terraform/debug-plan

If true, captures the Plan generated by terraform-apply for the attached system and stores it in terraform/debug-plan-review on the requesting cluster.

If false (default) then terraform-apply will attempt to REMOVE terraform/debug-plan-review to avoid having stale or secure information saved.

!!! warning

If true, exposes secure information in the Plan and should be used only for debug purposes.

22.45.1.2.134. terraform/debug-plan-review

If terraform/debug-plan is true, then the Plan generated by terraform-apply for the attached system is captured and stored in this Param on the requesting cluster.

!!! note

Stored as BASE64 Encoded!

In all other cases, terraform-apply will attempt to REMOVE this Param to avoid having stale or secure information saved.

!!! warning

This exposes secure information in the Plan and should be used only for debug purposes.
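A local illustration of working with the BASE64 encoding; the plan text is made up, and the drpcli retrieval shown in the comment is a hypothetical example:

```shell
# Simulate a stored, BASE64-encoded plan value and decode it for review
stored=$(printf 'plan: add 2 instances' | base64)

printf '%s' "$stored" | base64 -d
echo

# On a real endpoint (hypothetical command shape):
# drpcli clusters get <uuid> param terraform/debug-plan-review | jq -r . | base64 -d
```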

22.45.1.2.135. terraform/lint-templates

This is an array of strings where each string is a DRP template that is added into the generated .tflint.hcl file.

TO DISABLE: If an empty list is provided then tflint will not run

22.45.1.2.136. terraform/map-instance-id

Provides the TF lookup self.[name] reference when the .id field does not map to the provider's true ID.

Uses the ${self.[path]} address space used from inside the Terraform resource so self. is required.

This allows users to create machines using the Terraform Instance ID from a Terraform run

Typical values:

  • most use: .id

  • google: .instance_id

22.45.1.2.137. terraform/map-instance-name

Provides the TF self.[name] reference that should be stored in the DRP Machine.Name field.

Uses the ${self.[path]} address space used from inside the Terraform resource so self. is required.

This allows users to create machines using the Terraform Instance Name from a Terraform run

Typical values:

  • aws: self.private_dns

  • azure: self.name

  • google: self.name

  • linode: self.label

  • digitalocean: self.name

  • pnap: self.hostname

  • proxmox: self.name

22.45.1.2.138. terraform/map-ip-address

Provides the Terraform self.[name] reference that should be stored in the DRP Machine.Address field.

Uses the ${self.[path]} address space used from inside the Terraform resource so self. is required.

This allows users to record the created instance's IP address from a Terraform run

Typical values:
  • aws: self.public_ip

  • azure: self.public_ip_address

  • google: self.network_interface[0].access_config[0].nat_ip

  • linode: self.ip_address

  • digitalocean: self.ipv4_address

  • pnap: element(self.public_ip_addresses,0)

  • proxmox: not applicable

For proxmox instances, no IP Address is recorded in the Terraform state file. This value can not be used for proxmox private cloud instances.

22.45.1.2.139. terraform/map-mac-address

This value is usually dynamically populated after the Terraform apply step, based on the newly created instance’s assigned MAC address.

Some common values:

  • proxmox: self.network.macaddr

22.45.1.2.140. terraform/map-private-ip-address

Provides the Terraform self.[name] reference for the instance's private IP address.

Uses the ${self.[path]} address space used from inside the Terraform resource so self. is required.

This allows users to record the created instance's private IP address from a Terraform run

If missing, uses the terraform/map-ip-address

Typical values:
  • aws: self.private_ip

  • azure: self.private_ip_address

  • google: self.network_interface[0].network_ip

  • linode: self.private_ip_address

  • digitalocean: self.ipv4_address_private

  • pnap: tolist(self.private_ip_addresses)[0]

  • oracle: self.private_ip

  • proxmox: not applicable

For proxmox instances, no IP Addresses are recorded in the Terraform state file. This value can not be used for proxmox private cloud instances.

22.45.1.2.141. terraform/map-target-node

In clustered environments the operator may need to specify which node the instance should be created on, when no automatic placement policies exist.

This value should be either an IP Address or DNS resolvable node name that the instance should be created on.

22.45.1.2.142. terraform/plan-action

Verb used with Terraform. Generally used to apply or destroy plans.

Defaults to apply.

22.45.1.2.143. terraform/plan-instance-resource-name

This is the name of the terraform resource instance.

Some examples:

  • Linode: linode_instance

  • Proxmox: proxmox_vm_qemu

22.45.1.2.144. terraform/plan-instance-template

This is the name of a template that gets placed inside the terraform instance resource.

22.45.1.2.145. terraform/plan-templates

This is an array of strings where each string is a template that renders a Terraform Plan. They are built in sequence and then run from a single terraform apply.

Outputs from a plan can be automatically saved on the Machine.

22.45.1.2.146. terraform/sane-templating

The terraform-apply task defaults to templating terraform files specified in the param terraform/plan-templates within a bash heredoc that expands variables and commands. This requires all terraform code to be escaped (and it becomes invalid when not templated) so it isn't interpreted, e.g. ${var.variable} must be written \${var.variable}.

This parameter fixes the heredoc to not require escaping. It defaults to false for backwards compatibility reasons.
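A small demonstration of the escaping requirement, using a bash heredoc like the one the task generates (the resource and variable names are made up):

```shell
# With an expanding heredoc (the sane-templating: false behavior), bash
# interprets unescaped ${...}, so the template must write \${var.name}:
cat > main.tf <<EOF
resource "null_resource" "demo" {
  triggers = { name = "\${var.cluster_name}" }
}
EOF

# The backslash is consumed by bash, leaving valid Terraform interpolation:
grep -Fc '${var.cluster_name}' main.tf
```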

22.45.1.2.147. terraform/set-profiles

Since Terraform creates machines, we can add profiles to the machines during the create process.

An ordered list is provided to allow for multiple profiles

22.45.1.2.148. trigger-disk-full-warning

Used by alerts-low-disk-free task as the trigger level for out of storage warnings and errors.

Default is 10 GB (10485760 kilobytes)

22.45.1.2.149. trigger/endpoint-systems-check-enabled

If included in the global profile and set to true, will enable the endpoint-systems-check trigger

!!! note

will be overridden by trigger/endpoint-systems-check-disabled

22.45.1.2.150. trigger/manager-nightly-catalog-update-enabled

If included in the global profile and set to true, will enable the manager-nightly-catalog-update trigger

!!! note

will be overridden by trigger/manager-nightly-catalog-update-disabled

22.45.1.3. profiles

The content package provides the following profiles.

22.45.1.3.1. bootstrap-contexts

Bootstrap Digital Rebar server for advanced operation

  • Context Operations
    • Installs Docker and downloads context containers

    • Locks the endpoint to prevent accidental operations

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

22.45.1.3.2. bootstrap-drp-endpoint

Bootstrap Digital Rebar server with the bootstrap operations for:

  • bootstrap-tools - install additional packages (*)

  • bootstrap-ipmi - install ipmitool package and ipmi plugin provider if needed

  • bootstrap-contexts - install docker-context plugin_provider, and contexts in installed content

Intended to be driven by a bootstrapping workflow on the DRP Endpoint (like universal-bootstrap) during DRP Endpoint installation.

!!! note

(*) The bootstrap-tools specification exists in the bootstrap-ipmi Profile definition. It is not explicitly called out here, as that would duplicate the package install process needlessly.

The bootstrap-ipmi Profile defines the Param bootstrap-tools to contain ipmitool. The Param is a composable Param, so all instances of the Param will be aggregated together in one list, instead of the regular order of precedence.

22.45.1.3.3. bootstrap-elasticsearch

Profile to bootstrap elasticsearch

22.45.1.3.4. bootstrap-guacd

Bootstrap Digital Rebar server to have guacd service running.

22.45.1.3.5. bootstrap-ipmi

Bootstrap Digital Rebar server with the IPMI plugin provider, and install ipmitool for IPMI protocol operations.

22.45.1.3.6. bootstrap-kibana

Profile to bootstrap kibana

22.45.1.3.7. bootstrap-manager

Bootstrap Digital Rebar server into a Manager

22.45.1.3.8. bootstrap-tools

Bootstrap Digital Rebar server with commonly required tools for content/plugins (eg ipmitool).

22.45.1.3.9. resource-context

Manages Context via Resource Broker

22.45.1.3.10. resource-pool

Manages Pool via Resource Broker

22.45.1.4. stages

The content package provides the following stages.

22.45.1.4.1. ansible-inventory

Collects ansible inventory data from ansible’s setup module.

!!! note

This will attempt to install ansible if it is not already installed.

22.45.1.4.2. ansible-playbooks-local

Invoke ansible playbooks from git.

Forces the ansible/connection-local to true.

!!! note

ansible/playbooks is a Required Param - List of playbook git repos to run on the local machine.

22.45.1.4.3. bootstrap-advanced

!!! warning

DEPRECATED: Use universal-bootstrap with the bootstrap-contexts profile.

Bootstrap stage to build out an advanced setup.

This augments the bootstrap-base. It does NOT replace it.

Bootstrap Operations:

  • Install Docker & Building Contexts if defined

  • Lock the Machine

22.45.1.4.4. bootstrap-network

Installs specified network Subnets and Reservations found in the profile named bootstrap-network.

See the Task Documentation field for complete usage and example content pieces.

22.45.1.4.5. broker-provision

Using the cluster/machines, cluster/machine-types, cluster/profile parameters allocate or destroy machines.

22.45.1.4.6. cloud-inventory

Collect internal API information about a cloud instance

Requires cloud/provider to be set correctly. If not set then does nothing.

Depending on the cloud provider, sets discovered cloud/* data from the cloud’s discovery API including:

  • public-ipv4

  • public-hostname

  • instance-type

  • placement/availability-zone

!!! note

Will throw an error if the reported instance-id does not match the known cloud/instance-id

22.45.1.4.7. cluster-destroy

Forces the resources away via a "de-provisioning" operation.

Also performs an orphan sweep to ensure that the resource broker didn't leave machines attached to the cluster before removal.

22.45.1.4.8. cluster-provision

Using the specified parameters on the cluster, create, resize, or empty a cluster.

22.45.1.4.9. drive-signature

Builds a signature for each drive and stores that on the machine.

22.45.1.4.10. drive-signature-verify

Verifies signatures for drives.

22.45.1.4.11. inventory

Collects selected fields from Gohai into a simpler flat list.

The process uses JQ filters, defined in inventory/fields, to build inventory/data on each machine.

Also, applies the inventory/check map to the data and will fail if the checks do not pass.

22.45.1.4.12. inventory-minimal

Set some of the initial inventory pieces that could be useful for other tasks in the discover stages.

22.45.1.4.13. inventory-os

Collects inventory/os-* params from the O/S.

22.45.1.4.14. migrate-machine

Stage to migrate machine to new DRP endpoint.

22.45.1.4.15. runner-service

!!! warning

This stage has been deprecated. Use drp-agent from drp-community-content instead.

22.45.1.5. tasks

The content package provides the following tasks.

22.45.1.5.1. alerts-bootstrap-error

This task should be set to run when any of the important bootstrapping tasks fail. This task then raises an Alert on the system to help improve visibility of a failed self bootstrap error condition.

22.45.1.5.2. alerts-low-disk

To improve endpoint security, this will check to ensure there is sufficient space available on the Digital Rebar self-runner working disk. If not, it will raise an Alert event.

22.45.1.5.3. alerts-on-content-change

This task is designed to raise Alerts when a Content pack event is raised using the Triggers. It requires that operators define an Event Trigger that sets MergeData: true because the task relies on the Event meta data being passed as a param called meta.

22.45.1.5.4. alerts-raise-from-events

Create alerts from Digital Rebar Events

!!! warning

This task is designed to handle specialized events, not system object events

Attempts to populate Alert using data from the Event.Object data

  • Name: selected from Name, Id or Uuid (in that order)

  • Contents: select from Contents

  • Params: copies all Params from Object into Alert

Params alert/trigger and alert/task are always added

To set the Alert.Level, set the alert/level value in the Trigger. Otherwise, the level defaults to INFO.

22.45.1.5.5. ansible-apply

Runs one or more Ansible Playbook templates as defined by the ansible/playbook-templates variable in the stage calling the task.

Requires an ansible context or a way to install ansible.

Expects to have rsa-key-create run before stage is called so that rsa/* params exist for remote execution.

Information can be chained together by having the playbook write [Machine.Uuid].json as a file. This will be saved on the machine as Param.ansible/output. The entire Machine JSON is passed into the playbook as the digitalrebar variable so it is available.

22.45.1.5.6. ansible-inventory

Install ansible, if needed, and record the setup module ansible variables onto the machine as a parameter named ansible-inventory.

22.45.1.5.7. ansible-join-up

Runs an embedded Ansible Playbook to run the DRP join-up process.

Requires an ansible context.

Expects to be in a Workflow that allows the joined machine to continue Discovery and configuration steps as needed.

Expects to have rsa-key-create run before stage is called and the public key MUST be on the target machine.

Idempotent - checks to see if service is installed and will not re-run join-up.

22.45.1.5.8. ansible-playbooks

A task to invoke a specific set of ansible playbooks pulled from git.

Sequence of operations (loops over all entries):

1. collect args if provided
2. git clone repo to name
3. git checkout commit if provided
4. cd to name & path
5. run ansible-playbook playbook and args if provided
6. remove the directories

!!! note

Requires Param ansible/playbooks - List of playbook git repos to run on the machines (either local or cluster machines).
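The sequence can be sketched as a dry run; the entry structure (name, repo, commit, path, playbook, args) is an assumption inferred from the steps above, and the commands are echoed rather than executed:

```shell
# Hypothetical ansible/playbooks entry; keys are assumptions for illustration
playbooks='[{"name":"site","repo":"https://example.com/repo.git","commit":"v1.2","path":"plays","playbook":"site.yml","args":"-l web"}]'

while read -r p; do
  name=$(echo "$p" | jq -r '.name')
  repo=$(echo "$p" | jq -r '.repo')
  commit=$(echo "$p" | jq -r '.commit // empty')
  dir=$(echo "$p" | jq -r '.path // empty')
  playbook=$(echo "$p" | jq -r '.playbook')
  args=$(echo "$p" | jq -r '.args // empty')

  echo "git clone $repo $name"
  if [ -n "$commit" ]; then echo "git -C $name checkout $commit"; fi
  echo "cd $name/$dir"
  echo "ansible-playbook $playbook $args"
  echo "rm -rf $name"
done < <(echo "$playbooks" | jq -c '.[]')
```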

22.45.1.5.9. ansible-playbooks-local

!!! warning

DEPRECATED: Use the ansible-playbooks task instead. It matches the ansible-apply style.

A task to invoke a specific set of ansible playbooks pulled from git.

Sequence of operations (loops over all entries):

1. collect args if provided
2. git clone repo to name
3. git checkout commit if provided
4. cd to name & path
5. run ansible-playbook playbook and args if provided
6. remove the directories

!!! note

Requires Param ansible/playbooks - List of playbook git repos to run on the local machine.

22.45.1.5.10. blueprint-to-cluster-members

This task creates a WorkOrder for all members of a cluster.

If the task is run on a cluster, the cluster will be used and the cluster/for-all-cluster parameter is not required and is ignored.

The cluster/for-all-blueprint is required.

Optionally, the cluster/for-all-profile can be specified to add a profile to the WorkOrders.

22.45.1.5.11. bootstrap-container-engine

Install Pre-reqs for Docker-Context.

This task is idempotent.

This stage installs Podman and requires access to the internet.

22.45.1.5.12. bootstrap-contexts

Download RackN containers defined in Contexts

This task is idempotent.

Attempts to upload images for all Docker-Contexts from RackN repo.

If Meta.Checksum exists, a checksum test will be performed. If the checksums do not match then the file will be removed! WARNING: No checksum test is performed if Meta.Checksum is missing.

If Meta.Imagepull is provided, the URL will be used to download the image from the provided address.

If Meta.Imagepull is not provided then the container will be pulled from the RackN get.rebar.digital/containers repository at https://get.rebar.digital/containers/[image].tar.gz.

In both cases, will invoke imagePull on the Docker Context to ensure that the referenced image is downloaded to the Digital Rebar host. This does NOT automatically change machines using the existing image (if any) to using the new image.

DEV NOTE: ExtraClaims for resource_brokers also requires a matching entry for machines. This is an expected behavior due to the Digital Rebar backend implementation of Resource Brokers.
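The checksum guard can be illustrated locally; the file and its checksum are generated on the spot rather than downloaded:

```shell
# Simulate a downloaded image tarball and a Meta.Checksum value
printf 'fake image bytes' > image.tar.gz
expected=$(sha256sum image.tar.gz | awk '{print $1}')

# Verify after "download"; a mismatch would remove the file
actual=$(sha256sum image.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch: removing image.tar.gz"
  rm -f image.tar.gz
fi
```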

22.45.1.5.13. bootstrap-guacd

Start the guacd service running in a resource broker

22.45.1.5.14. bootstrap-ipmi

This task bootstraps the DRP Endpoint to be functional for Baseboard Management Control capabilities by installing the ipmi plugin provider and the ipmitool package on the DRP Endpoint operating system.

This Task utilizes the external template bootstrap-tools.sh.tmpl, which must be configured with a list of packages. In this case, the bootstrap-tools array Param must be set to include the ipmitool package.

Generally, the bootstrap process is controlled by a bootstrapping workflow (eg universal-bootstrap) which uses a Profile to expand the bootstrap workflow. This profile should contain the Param value setting. This is due to Tasks not carrying their own Param or Profile definitions.

22.45.1.5.15. bootstrap-manager

Turn the DRP Endpoint into a Manager and update cache

22.45.1.5.16. bootstrap-network

Used (primarily) during the DRP Endpoint bootstrap configuration process to install Subnets (DHCP “pools”) and Reservations (static DHCP assignments).

It uses the bootstrap-network-subnets and bootstrap-network-reservations Params to specify the appropriate objects to install during the boostrap stages.

If a Profile named bootstrap-network exists on the system, the values in it are used. The Profile must exactly match this name. The Subnet and Reservations objects added via the Profile must be valid Object types with appropriate required values (eg Reservations require Strategy: MAC).

This is often/typically used during DRP installation time with an installer command example like (NOTE that bootstrap-network.yaml is a Content Pack!!):

```sh
# you must create a Content Pack which should contain a valid Profile object
# which carries the network bootstrapping configuration data. The below
# example assumes the Content Pack has been bundled up and named "bootstrap-network.yaml".
curl -fsSL get.rebar.digital/stable | bash -s -- install --universal --initial-contents="./bootstrap-network.yaml"
```

An example of the bootstrap-network.yaml Content Pack in the above command might look like the following (see the individual Params documentation for more details on use of them):

```yaml
# NOTE - this is a DRP Content Pack, and is applied to the system
# with 'drpcli contents update ./bootstrap-network.yaml'
# YAML example Subnets and Reservations for bootstrap-network task
---
meta:
  Name: bootstrap-network
  Version: v1.0.0
sections:
  profiles:
    bootstrap-network:
      Name: bootstrap-network
      Description: Profile for network bootstrap of subnets and reservations.
      Documentation: |
        Uses the bootstrap-network task for DRP Endpoint configuration.

        Installs 2 Subnets and 2 Reservations during DRP Endpoint bootstrap.
      Meta:
        color: blue
        icon: sitemap
        title: Digital Rebar Provision
      Params:
        bootstrap-network-reservations:
          - Addr: 192.168.124.222
            Token: "01:02:03:04:05:00"
            Strategy: MAC
          - Addr: 192.168.124.223
            Token: "01:02:03:04:05:01"
            Strategy: MAC
        bootstrap-network-subnets:
          - ActiveEnd: 10.10.10.254
            ActiveStart: 10.10.10.10
            Name: subnet-10
            OnlyReservations: false
            Options:
              - Code: 1
                Description: Netmask
                Value: 255.255.255.0
              - Code: 3
                Description: DefaultGW
                Value: 10.10.10.1
              - Code: 6
                Description: DNS Server
                Value: 1.1.1.1
              - Code: 15
                Description: Domain
                Value: example.com
            Subnet: 10.10.10.1/24
          - ActiveEnd: 10.10.20.254
            ActiveStart: 10.10.20.10
            Name: subnet-20
            OnlyReservations: false
            Options:
              - Code: 1
                Description: Netmask
                Value: 255.255.255.0
              - Code: 3
                Description: DefaultGW
                Value: 10.10.20.1
              - Code: 6
                Description: DNS Server
                Value: 1.1.1.1
              - Code: 15
                Description: Domain
                Value: example.com
            Subnet: 10.10.20.1/24
```

!!! note

This can be generated using the drpcli contents bundle operation. Simply create a minimal content pack on disk, with a profiles/bootstrap-network.yaml Object containing the appropriate Param definitions.

22.45.1.5.17. bootstrap-tools

If the Param bootstrap-tools contains an Array list of packages, then this task will install those packages on the DRP Endpoint, when used with one of the bootstrap workflows (eg universal-bootstrap).

By default, no packages are defined in the bootstrap-tools Param, so this task will no-op exit.

22.45.1.5.18. broker-context-manipulate

This task will use parameters to drive the machine construction.

22.45.1.5.19. broker-pool-manipulate

This task will use parameters to drive the machine allocation via DRP pooling APIs.

There are two operational models supported:

  • allocate/release allows clusters to remain in a shared pool

  • add/remove allows clusters to move into dedicated pools from a shared pool

When using broker/pool-allocate: false, the broker will create a pool matching the cluster name to ensure that removed machines revert to the correct source pool. These pools are NOT deleted when the cluster is removed, in case secondary configuration has been added.

broker-pool/pool defaults to default

22.45.1.5.20. broker-provision

This task will use parameters to drive the machine construction.

Detect Broker Type and injects tasks specific to the broker type:

  • cloud-terraform - uses terraform-apply to run Terraform plan

  • pool - uses DRP pooling to allocate machines

  • context - creates contexts

When using the context broker, pass broker/set-context in order to change the starting context.

22.45.1.5.21. cloud-detect-meta-api

Use various Cloud Meta API to infer cloud plugin_provider

Detects clouds based on cloud/meta-apis values

If detected, will set the cloud/provider

22.45.1.5.22. cloud-inventory

Collect internal API information about a cloud instance

Requires cloud/provider to be set correctly. Cloud provisioners should set this field automatically. You can use cloud-detect-meta-api to discover the cloud/provider by inspection.

If cloud/provider is not set then does nothing.

Depending on the cloud provider, sets discovered cloud/* data from the cloud’s discovery API including:

  • public-ipv4

  • public-hostname

  • instance-type

  • placement/availability-zone

!!! note

Will throw an error if the reported instance-id does not match the known cloud/instance-id

22.45.1.5.23. cluster-add-params

When building a cluster, this task will also add Params for each cluster/machine-type to the created machines.

This is needed because we do not pass Params through the terraform-apply task when creating machines.

22.45.1.5.24. cluster-empty

After de-provisioning, check for cluster-assigned machines and clean them up if they still exist.

22.45.1.5.25. cluster-provision

This task will use parameters to drive the creation of a cluster, resize the cluster, or remove elements from the cluster.

22.45.1.5.26. cluster-restart-pipeline

This task forces all machines in a cluster to restart the machine pipeline instead of just the newly added ones.

!!! warning

Depending on the pipeline, this may be disruptive to cluster operation!

This behavior is very handy if you are CHANGING the pipeline of an existing cluster. For example, if you pre-provision systems into a waiting state then this task will ensure that the pre-provisioned machines will process the new pipeline.

22.45.1.5.27. cluster-set-destroy

Sets the cluster destroy flag to true to ensure that the cluster destroy operations are run in the workflow

22.45.1.5.28. cluster-to-pool

When building a cluster, this task will also add the unassigned cluster members into a pool using the pool allocate command.

The name of the pool is determined by the cluster/profile.

22.45.1.5.29. cluster-wait-for-members

This task makes the cluster pipeline wait until all the cluster members have completed their workflows. It does this by waiting on individual machines to reach the WorkflowComplete = true state. This means that the loop is waiting on events, not polling the API.

Since there is no “break” in the loop, you cannot stop this loop from the cluster pipeline; however, you can easily break the loop by clearing the cluster machines’ Workflow and marking them Ready.

22.45.1.5.30. context-set

This task allows stages to change the BaseContext for a machine as part of a larger workflow.

This is especially helpful when creating a new machine using an endpoint context, such as using Terraform to create a machine, and then allowing the machine to transition to a runner when it starts.

The code is written to allow either clearing the BaseContext (setting it to “”) or setting it to a known value. If setting to a value, the code ensures that the Context exists.

22.45.1.5.31. docker-context-run

This task runs forever and is intended for a long running machine.

The docker-context plugin is run continuously until the machine is marked not runnable. This will stop the task and end the blueprint.

22.45.1.5.32. docker-context-start

This task runs forever and is intended for a long running machine.

The docker-context plugin is run continuously until the machine is marked not runnable. This will stop the task and end the blueprint.

22.45.1.5.33. dr-server-install

Installs DRP Server. Sets DRP-ID to Machine.Name

LIMITATIONS:

  • firewall features are only available for the CentOS family

The primary use cases for this task are:

  1. drp-server pipeline

Will transfer the DRP license to the machine being created.

For operators, this feature makes it easy to create new edge sites using DRP Manager.

22.45.1.5.34. dr-server-install-ansible

Installs DRP Server using the Ansible Context. Sets DRP-ID to Machine.Name

!!! note

All connections are FROM THE MANAGER; the provisioned site does NOT need to connect to the manager.

LIMITATIONS:

  • firewall features are only available for the CentOS family

The primary use cases for this task are:

  1. Creating a remote site for Multi-Site-Manager

  2. Building development installations for rapid testing

Requires the install.sh and zip artifacts to be in bootstrap/*

Will transfer the DRP license to the machine being created.

If DRP is already installed, will restart DRP and update license

For operators, this feature makes it easy to create new edge sites using DRP Manager.

22.45.1.5.35. drive-signature

Generate a signature for each drive and record them on the machine.

22.45.1.5.36. drive-signature-verify

Using the signatures on the machine, validate each drive’s signature matches.

22.45.1.5.37. elasticsearch-setup

A task to install and setup elasticsearch. This is a very simple single instance.

22.45.1.5.38. git-build-content-push

This task is used to automatically import and synchronize content packages from a git repository. The process DOES NOT assume anything about the content; consequently, the repository MUST have scripts to facilitate building and uploading the content.

When used with the git-lab-trigger-webhook-push trigger, the repository param will be automatically populated when MergeData is set to true. This allows the webhook data to automatically flow into the blueprint.

!!! note

At this time, the repository cannot require authentication.

The process is:

  1. Clone the git repository named in the params value

  2. Run the tools/build_content.sh script

  3. Optionally add the content to the system (if manager/update-catalog is true)

  4. Optionally rebuild the manager catalogs (if dr-server/update-content is true)

This assumes that the output of the content script will go into a directory structure like:

rebar-catalog/<cp-name>/<version>.json

22.45.1.5.39. guacd-run

This task runs forever and is intended for a long running service broker, a subset of resource broker.

The guacd program is run continuously until the broker is marked not runnable. This will stop the task and end the blueprint.

22.45.1.5.40. inventory-check

Using the inventory/collect parameter, filters the JSON command output into inventory/data hashmap for rendering and validation.

Will apply inventory/check for field validation.

Raises a machines.inventory.[uuid] event if there’s a compliance issue.

22.45.1.5.41. inventory-cost-calculator

Stores cost estimate in inventory/cost

For cloud costs (requires cloud/provider), will use the cloud/instance-type parameter and lookup cost models from cloud/cost-lookup

If no specific type match is found, will approximate using the following formula: [RAM of machine in GB] * [ram_multiple] * [fallback]

At this time, there is no action for non-cloud machines.

22.45.1.5.42. inventory-cost-summation

Sums inventory/cost for machines in clusters or brokers. If the value is missing from a machine, it is assumed to be 0.

Stores cost estimate in inventory/cost

22.45.1.5.43. inventory-minimal

Sets the machine/type parameter for other tasks to use later.

22.45.1.5.44. inventory-os

Using the setup.tmpl OS evaluation, store operating system information as params for the machine.

!!! note

Has different implementations depending on the OS!

22.45.1.5.45. kibana-setup

A task to install and setup kibana. This is a very simple single instance.

22.45.1.5.46. linux-package-enterprise

This task is designed to work on select Linux versions and families. This should be injected by Universal Workflow if needed.

Installs the EPEL libraries on Linux families that use them

This task performs that update step so that install tasks do not have to repeat that work.

22.45.1.5.47. linux-package-updates

This task is designed to work on multiple Linux versions and families. This should be injected by Universal Workflow if needed.

Before doing a package manager install, Linux families expect that you have updated the package lists.

This task performs that update step so that install tasks do not have to repeat that work.

22.45.1.5.48. manager-join-cluster-members

Walk the cluster members and add them to the manager (self) if not present.

22.45.1.5.49. manager-remove-cluster-members

Walk the cluster members and remove them from the manager (self) if present.

22.45.1.5.50. migrate-machine

The migrate-machine task does the following:

  • Gathers machine information on the old endpoint.

  • Normally checks the contents and versions are the same. This can be skipped with the migrate-machine/skip-content-check parameter.

  • Normally checks for profiles on the machine that don’t exist and creates them on the new endpoint. This can be skipped with the migrate-machine/skip-profiles parameter.

  • Creates a new machine with the same UUID.

  • Updates the machine’s parameters

  • Updates the drp-agent config with new endpoint and machine token on the new endpoint.

  • Restarts the agent

At this point the task restarts, detects that migrate-machine/complete is set to true, and runs the following on the new endpoint:

  • Deletes all jobs related to the machine on the old endpoint.

  • Removes the machine from the old endpoint.

22.45.1.5.51. network-firewall

Requires firewall-cmd or ufw to be enabled on the system; will attempt to install one if missing.

Will reset the firewall at the end of the task.

Including port */* in network/firewall-ports will disable the firewall.
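A sketch of the Param (the port entries and entry syntax are illustrative):

```yaml
# Illustrative port list; only the param name comes from the docs above
network/firewall-ports:
  - "22/tcp"     # ssh
  - "8092/tcp"   # illustrative service port
  # - "*/*"      # including this entry disables the firewall entirely
```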

No action when running in a context

22.45.1.5.52. network-lldp

Assumes Sledgehammer has the LLDP daemon installed so that we can collect data.

Also requires that LLDP be enabled on neighbors, including switches, so they send the correct broadcast packets.

22.45.1.5.53. password-default-audit

To improve endpoint security, this will check to ensure that the default password of Digital Rebar has been reset. If not, it will raise an Alert.

22.45.1.5.54. profile-cleanup

Removes Profiles on a Machine, based on an array of specific or wildcard-matched Profile names in the Param profile-cleanup-selection-pattern.

Setting the profile-cleanup-skip Param to true will output the matching Profiles in the Job Log but skip the actual removal of the profile(s) from the machine.
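A hypothetical configuration (the profile names and wildcard syntax are illustrative; only the param names come from the text above):

```yaml
# Illustrative patterns for profile-cleanup
profile-cleanup-selection-pattern:
  - old-profile     # exact name (hypothetical)
  - temp-*          # wildcard match (hypothetical)
profile-cleanup-skip: true   # dry run: log matches without removing them
```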

22.45.1.5.55. rsa-key-create

Uses ssh-keygen to create an RSA public/private key pair.

Stores keys in rsa/key-public and rsa/key-private on the cluster profile.

The public key (which contains newlines) is stored in single line format where newlines are removed.

No-op if rsa/key-private already exists.

This does NOT set the rsa/key-user (default: root) because this is often set specifically by the environment on each machine.

22.45.1.5.56. slack-app-webhook

Very simple POST used to notify Slack.

This requires operators to create and authorize a Slack Application webhook that will receive the POST.

!!! warning

The information contained in the Slack service-url is sensitive and should not be shared or posted publicly. DO NOT CHECK IT INTO CONTENT PACKS.

Reference: see <https://api.slack.com/messaging/sending>

22.45.1.5.57. stage-chooser

This task uses the stage-chooser/next-stage and stage-chooser/reboot parameters to change the stage of the current machine and possibly reboot.

This is not intended for use in a stage chain or workflow; rather, it is a transient stage that can be used on a machine that is idle with a runner executing.
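A minimal sketch of the Params, assuming a hypothetical stage name:

```yaml
stage-chooser/next-stage: "my-target-stage"   # hypothetical stage name
stage-chooser/reboot: false                   # set true to also reboot
```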

22.45.1.5.58. storage-mount-devices

Uses array of devices from storage/mount-devices to attach storage to system.

If we need a storage area, this task will mount the requested resources under /mnt/storage.

See [Mount Attached Storage](../../params/storage-mount-devices/#mount-attached-storage)
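A sketch of the Param, assuming entries are device paths (the device names and entry shape are illustrative; see the linked param documentation for the authoritative schema):

```yaml
# Illustrative device list for storage/mount-devices
storage/mount-devices:
  - /dev/sdb
  - /dev/sdc
```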

22.45.1.5.59. terraform-apply

Runs one or more Terraform Plan templates as defined by the terraform/plan-templates variable in the stage calling the task.
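For example, the stage might supply the templates like this (the template name is hypothetical):

```yaml
# Illustrative value; terraform/plan-templates lists the plan templates to render
terraform/plan-templates:
  - my-cloud-plan.tf.tmpl   # hypothetical template name
```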

Requires a terraform context with Terraform v1.0+, and plans must comply with v1.0 syntax.

The terraform apply is only called once. All plans in the list are generated first. If sequential operations beyond the plan are needed, use multiple calls to this task.

Only the DRP API, Provisioning URL, RSA Public Key and SSH user are automatically passed into the plan; however, the plans can use the .Param and .ParamExists templates to pull any value needed.

Terraform State is stored as a Param broker/tfinfo on the Cluster Machine after first execution. It is then retrieved for all subsequent runs so that Terraform is able to correctly use its state values. The broker/tfinfo parameter is a map of brokers that can be used to track state. Anything can be stored in this parameter.

The synchronize.sh script is used by “local-exec” to connect/update/destroy machines from Terraform into Digital Rebar.

To match existing machines, cloud/instance-id and broker/name are used first. Name is used as a backup.

When updating/creating, sets the Params for:

  • cloud/instance-id

  • broker/name

  • cloud/provider

  • rsa/key-user (if available in broker)

When used to detect drift (by calling Plan on an existing plan), will raise a terraform.drift.[cluster name] event with details about the drift from Terraform when drift is detected.

Notes:

  • To create SSH keys, use the rsa-key-create generator task.

  • If creating cloud machines, use the cloud-init task for join or flexiflow to add ansible-join

  • When using the synchronize operations, you must define terraform/map-ip-address and terraform/map-instance-name for the created machines

  • Setting terraform/debug-plan to true will cause the TF plan to be written to terraform/debug-plan-review. This is UNSAFE and for debugging only.

22.45.1.5.60. workflow-pause

This task will set the machine to NOT runnable but leave the runner running. This acts as a pause in the workflow for another system to restart.

22.45.1.6. triggers

The content package provides the following triggers.

22.45.1.6.1. blueprint-to-cluster-members

This blueprint/trigger can be used to run a blueprint on each member of a cluster.

This is an event trigger that requires the following parameters.

  • cluster/for-all-blueprint - defines the blueprint to run.

  • cluster/for-all-cluster - defines the cluster to pull machines from.

This is an event trigger that can also use the following parameters.

  • cluster/for-all-profile - defines a profile to add to the WorkOrder for additional parameters.

Firing the trigger through an event requires that the event contain the blueprint and cluster.

Applying the trigger/blueprint to a cluster requires the blueprint, but infers the cluster.
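A hypothetical configuration for the trigger's parameters (the blueprint, cluster, and profile names are illustrative):

```yaml
# Illustrative values; param names come from the list above
cluster/for-all-blueprint: my-blueprint   # required: blueprint to run
cluster/for-all-cluster: my-cluster       # required: cluster to pull machines from
cluster/for-all-profile: extra-params     # optional: profile added to the WorkOrder
```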

22.45.1.6.2. manager-nightly-catalog-update

Trigger to update the manager catalog nightly at 1:10 am.

Disable by adding trigger/manager-nightly-catalog-update to the global profile.

22.45.1.6.3. utility-endpoint-systems-check

Trigger to run basic diagnostics

Disable by adding trigger/endpoint-systems-check-disabled to the global profile.

22.45.1.7. version_sets

The content package provides the following version_sets.

22.45.1.7.1. license

Synchronize RackN entitlements file (which is defined as a content pack) with the edge site.

This version set requires that the license file be saved in the manager’s files area under rebar-catalog/rackn-license/v1.0.0.json

The bootstrap-manager task will automatically place the license file in this location.

22.45.1.8. workflows

The content package provides the following workflows.

22.45.1.8.1. bootstrap-advanced

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061). Use universal-bootstrap with bootstrap-contexts profile

Bootstraps the Digital Rebar server for advanced operation. Includes bootstrap-base!

REQUIRES that the Endpoint Agent has been enabled.

  • Basic Operations
    • Make sure Sledgehammer bootenvs are loaded for operation.

    • Set the basic default preferences.

    • Setup an ssh key pair and install it to the global profile.

    • Locks the endpoint to prevent accidental operations

  • Advanced Operations
    • Installs Docker and downloads context containers

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

22.45.1.8.2. broker-provision

Requires that the operator has created Contexts for runner and terraform. The workflow then starts the DRP Agent using Cloud-Init by default. Ansible Join-Up can be injected using Flexiflow for clouds that cannot inject Cloud-Init.

After v4.8, the workflow operates on a cluster or resource broker only.

  • Machines are created/destroyed during provisioning and mapped to the cluster by adding the cluster/profile.

  • Machines use the linux-install application when created

22.45.1.8.3. centos-7-base

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061). Use universal-bootstrap with bootstrap-contexts profile

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

!!! note

To enable, upload the CentOS 7 ISO as per the centos-7 BootEnv

22.45.1.8.4. centos-base

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061). Use universal-bootstrap with bootstrap-contexts profile

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

!!! note

To enable, upload the CentOS ISO as per the centos-8 BootEnv

22.45.1.8.5. discover-joinup

Discover, run as a service and complete-nobootenv

This workflow is recommended for joining cloud machines instead of discover-base.

!!! note

You must set defaultBootEnv to sledgehammer in order to use join-up to discover machines.

Some operators may choose to first create placeholder machines and then link with join-up.sh to the placeholder machine model using the UUID. See the ansible-joinup task for an example.

complete-nobootenv ensures that Digital Rebar does not force the workflow into a bootenv (potentially rebooting) when finished.

22.45.1.8.6. fedora-33-base

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061). Use universal-bootstrap with bootstrap-contexts profile

This workflow includes the DRP Runner in the Fedora 33 provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

!!! note

To enable, upload the Fedora ISO as per the fedora-33 BootEnv

22.45.1.8.7. fedora-34-base

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061). Use universal-bootstrap with bootstrap-contexts profile

This workflow includes the DRP Runner in the Fedora 34 provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

!!! note

To enable, upload the Fedora ISO as per the fedora-34 BootEnv

22.45.1.8.8. fedora-base

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061). Use universal-bootstrap with bootstrap-contexts profile

This workflow includes the DRP Runner in the Fedora provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

!!! note

To enable, upload the Fedora ISO as per the fedora-31 BootEnv

22.45.1.8.9. migrate-machine

This workflow will migrate a machine from one endpoint to another.

A profile called migrate-machine can be created; it will not be added to the machine on the new endpoint.

The following parameters are required:

  • migrate-machine/new-drp-server-api-url: URL to the endpoint (for example, https://localhost:8092).

  • migrate-machine/new-drp-server-token: A token for the new endpoint that can create/modify profiles and machines.

The following optional parameters skip certain steps that are otherwise recommended.

  • migrate-machine/skip-content-check: If set to true, task will skip verifying if the content is the same between both endpoints.

  • migrate-machine/skip-profiles: If set to true, task will skip creating new profiles. Any profiles on the old machine that do not exist on the new endpoint will be lost when the machine is created there.

The following parameters should not be added or changed manually.

  • migrate-machine/new-drp-server-complete: The task will not migrate if set to true. It defaults to false.

  • migrate-machine/old-endpoint-url: Used to clean up the old endpoint after migration.

  • migrate-machine/old-endpoint-token: Used to clean up the old endpoint after migration.
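A hypothetical profile combining the parameters above (the token value is a placeholder; do not set the "should not be changed manually" parameters yourself):

```yaml
# Illustrative migrate-machine profile; URL example comes from the docs above
migrate-machine/new-drp-server-api-url: "https://localhost:8092"
migrate-machine/new-drp-server-token: "replace-with-a-token"  # placeholder
migrate-machine/skip-content-check: false   # keep the recommended check
migrate-machine/skip-profiles: false        # keep the recommended check
```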

The migrate-machine task does the following:

  • gathers machine information on the old endpoint.

  • normally checks the contents and versions are the same. This can be skipped with the migrate-machine/skip-content-check parameter.

  • normally checks for profiles on the machine that don’t exist and creates them on the new endpoint. This can be skipped with the migrate-machine/skip-profiles parameter.

  • creates a new machine with the same UUID.

  • updates the machine’s parameters

  • updates the drp-agent config with new endpoint and machine token on the new endpoint.

  • restarts the agent

At this point the task restarts, detects that migrate-machine/complete is set to true, and runs the following on the new endpoint:

  • Deletes all jobs related to the machine on the old endpoint.

  • Removes the machine from the old endpoint.

22.45.1.8.10. ubuntu-20.04-base

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061).

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

!!! note

To enable, upload the Ubuntu-20.04 ISO as per the ubuntu-20.04 BootEnv

22.45.1.8.11. ubuntu-base

!!! warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See [Deploying Linux with Universal Workflows](../../../../resources/kb/kb-00061).

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

!!! note

To enable, upload the Ubuntu-18.04 ISO as per the ubuntu-18.04 BootEnv