22.43. task-library - Core Task Library

The following documentation is for Core Task Library (task-library) content package at version v4.10.0-alpha00.46+g1471cf9bbea9d50f3360a078e87e85f65fff4c7c.

22.43.1. RackN Task Library

This content package is a collection of useful stages and operations for Digital Rebar. It also includes several handy workflows, such as the CentOS, Fedora, and Ubuntu base workflows. You can also find many handy stages in this package, such as the network-lldp stage, which can be added to your discovery workflow to capture additional networking information discovered via LLDP.

22.43.1.1. Cluster Stages

These stages implement and augment Clusters v4.8+.

They allow operators to orchestrate machines into sequential or parallel operation.

22.43.1.2. Inventory Stage

Convert gohai and other JSON data into a map that can be used for analysis, classification or filtering.

22.43.2. Object Specific Documentation

22.43.2.1. blueprints

The content package provides the following blueprints.

22.43.2.1.1. ansible-run-playbook-local-on-machine

Uses ansible-playbooks-local to run playbook(s) defined in Param ansible/playbooks in “local” mode on the selected machine(s).

There is a safe default, so this blueprint can be used for testing without defining the Param.

22.43.2.1.2. cluster-reevaluate

This blueprint will cause the cluster-provision task to rerun on the cluster to pick up changes to the cluster.

22.43.2.1.3. cost-calculator

Runs inventory-cost-calculator and inventory-cost-summation on the machines in question.

This can be used to regularly update/refresh the inventory/cost value for a machine, cluster or resource broker.

22.43.2.1.4. dr-server-add-content-from-git

This blueprint will check out a git repo and bundle the content. See the help on git-build-content-push for more details.

Based upon parameters, the system will put the content pack into the server or add the content pack to the catalog.

When used with the git-lab-trigger-webhook-push trigger, the repository param will be automatically populated when MergeData is set to true. This allows the webhook data to automatically flow into the blueprint.

Generally, this blueprint runs from the DRP endpoint self-runner and should use Params.machine-self-runner=true plus an Endpoint= filter.

22.43.2.1.5. manager-update-catalog

Runs the manager bootstrap work at some triggerable event. This is scheduled for nightly updates.

This can be used to regularly update/refresh the catalog.

22.43.2.1.6. utility-update-ssh-keys

This blueprint will update the SSH keys set on the target machine using the ssh-access task

22.43.2.2. params

The content package provides the following params.

22.43.2.2.1. ansible-inventory

Holds the value from the ansible-inventory task.

22.43.2.2.2. ansible/output

Generic object that is constructed by the ansible-apply task if the playbook creates a file called [Machine.Name].json

For example, the following task will create the correct output:

- name: output from playbook
  local_action:
    module: copy
    content: "{{`{{ myvar }}`}}"
    dest: "{{ .Machine.Name }}.json"

22.43.2.2.3. ansible/playbook-templates

This is an array of strings where each string is a template that renders an Ansible Playbook. They are run in sequence to allow building inventory dynamically during a run.

Output from a playbook can be passed to the next one in the list by setting the ansible/output value. This value gets passed as a variable into the next playbook as part of the params on the machine object json.
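For example, a machine or profile might set the Param like this sketch (the template names are hypothetical placeholders, not templates shipped in this package):

```yaml
ansible/playbook-templates:
  - build-inventory-playbook.yaml.tmpl
  - configure-app-playbook.yaml.tmpl
```

Each template is expanded by DRP and run in order, so an ansible/output value set by the first playbook is available to the second.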

22.43.2.2.4. ansible/playbooks

Used by the ansible-playbooks-local task.

Runs the provided playbooks in order from the json array. The array contains structured objects with further details about the ansible-playbook action.

Each playbook MUST be stored in either:

  1. a git location accessible from the machine running the task, or

  2. a DRP template accessible to the running DRP.

The following properties are included in each array entry:

  • playbook (required): name of the playbook passed to the ansible-playbook CLI

  • name (required): determines the target directory of the git clone

  • either repo or template (required, mutually exclusive):

    • repo: path, reachable from the machine, from which the playbook can be git cloned

    • template: name of a DRP template containing a localhost ansible playbook

  • path (optional): if playbooks are nested in a single repo, the path to the playbook within the repo

  • commit (optional): a specific commit or tag to check out from the git history

  • data (optional, boolean): if true, use the items provided in args

  • args (optional): additional arguments passed to the ansible-playbook CLI

  • verbosity (optional, boolean): if false, suppresses output from ansible using no_log

  • extravars (optional, string): name of a template to expand as part of the extra-vars. See below.

For example

[
  {
    "playbook": "become",
    "name": "become",
    "repo": "https://github.com/ansible/test-playbooks",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  },
  {
    "playbook": "test",
    "name": "test",
    "template": "ansible-playbooks-test-playbook.yaml.tmpl",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  }
]

Using extravars allows pulling data from Digital Rebar template expansion into github based playbooks. These will show up as top level variables. For example:

foo: '{{ .Param "myvar" }}'
bar: '{{ .Param "othervar" }}'

22.43.2.2.5. bootstrap-network-reservations

This Param can be set to an array of Reservation type objects. If it is added to the bootstrap-network profile, then during the bootstrap Stage, the defined reservations will be added to the system.

The easiest way to generate a valid set of reservation objects is to create the reservation on an existing DRP Endpoint, then show the JSON structure. An example CLI command for a reservation named 192.168.124.101 is shown below:

drpcli reservations show 192.168.124.101

Multiple Reservations may be specified (eg drpcli reservations list will dump all reservations on a given DRP Endpoint). Each Reservation will be added to the system if it does not exist already.

Example structures for two reservations are shown below. NOTE that in real world use, more options or values may need to be set to make the reservation operationally correct for a given environment.

YAML:
bootstrap-network-reservations:
  - Addr: 192.168.124.222
    Token: "01:02:03:04:05:00"
  - Addr: 192.168.124.223
    Token: "01:02:03:04:05:01"
JSON:
[
  { "Addr": "192.168.124.222", "Token": "01:02:03:04:05:00" },
  { "Addr": "192.168.124.223", "Token": "01:02:03:04:05:01" }
]

Generally speaking this process is used for initial DRP Endpoint installation with the install.sh installation script. Preparing the Subnets and Reservations in either a Content Pack or a Profile that is applied to the system at installation time will enable the creation of the Objects.

Example installation script invocation that would use this bootstrap process, assumes that network.json is provided by the operator with the network Subnet and/or Reservations definitions:

# you must create "network.json" which should be a valid Profile object, containing
# the appropriate Params and object information in each param as outlined in the
# examples in the bootstrap-network-subnets and bootstrap-network-reservations
# Params "Documentation" field
curl -fsSL get.rebar.digital/stable | bash -s -- install --universal --initial-profiles="./network.json"

22.43.2.2.6. bootstrap-network-subnets

This Param can be set to an array of Subnet type objects. If it is added to the bootstrap-network profile, then during the bootstrap Stage, the defined subnets will be added to the system.

The easiest way to generate a valid subnet object is to create the subnet on an existing DRP Endpoint, then show the JSON structure. An example CLI command for the subnet named kvm-test is shown below:

drpcli subnets show kvm-test

Multiple Subnets may be specified (eg drpcli subnets list will dump all subnets on a given DRP Endpoint). Each Subnet will be added to the system if it does not exist already.

Example structures for two subnets are shown below. NOTE that in real world use, more options or values may need to be set to make the subnet operationally correct for a given environment.

YAML:
- Subnet: 10.10.10.1/24
  Name: subnet-10
  ActiveEnd: 10.10.10.254
  ActiveStart: 10.10.10.10
  OnlyReservations: false
  Options:
  - Code: 1
    Description: Netmask
    Value: 255.255.255.0
  - Code: 3
    Description: DefaultGW
    Value: 10.10.10.1
  - Code: 6
    Description: DNS Server
    Value: 1.1.1.1
  - Code: 15
    Description: Domain
    Value: example.com
- Subnet: 10.10.20.1/24
  Name: subnet-20
  ActiveEnd: 10.10.20.254
  ActiveStart: 10.10.20.10
  OnlyReservations: false
  Options:
  - Code: 1
    Description: Netmask
    Value: 255.255.255.0
  - Code: 3
    Description: DefaultGW
    Value: 10.10.20.1
  - Code: 6
    Description: DNS Server
    Value: 1.1.1.1
  - Code: 15
    Description: Domain
    Value: example.com
JSON:
[
  {
    "Subnet": "10.10.10.1/24",
    "Name": "subnet-10",
    "ActiveEnd": "10.10.10.254",
    "ActiveStart": "10.10.10.10",
    "OnlyReservations": false,
    "Options": [
      { "Code": 1, "Value": "255.255.255.0", "Description": "Netmask" },
      { "Code": 3, "Value": "10.10.10.1", "Description": "DefaultGW"  },
      { "Code": 6, "Value": "1.1.1.1", "Description": "DNS Server" },
      { "Code": 15, "Value": "example.com", "Description": "Domain" }
    ]
  },
  {
    "Subnet": "10.10.20.1/24",
    "Name": "subnet-20",
    "ActiveEnd": "10.10.20.254",
    "ActiveStart": "10.10.20.10",
    "OnlyReservations": false,
    "Options": [
      { "Code": 1, "Value": "255.255.255.0", "Description": "Netmask" },
      { "Code": 3, "Value": "10.10.20.1", "Description": "DefaultGW"  },
      { "Code": 6, "Value": "1.1.1.1", "Description": "DNS Server" },
      { "Code": 15, "Value": "example.com", "Description": "Domain" }
    ]
  }
]

Generally speaking this process is used for initial DRP Endpoint installation with the install.sh installation script. Preparing the Subnets and Reservations in either a Content Pack or a Profile that is applied to the system at installation time will enable the creation of the Objects.

Example installation script invocation that would use this bootstrap process, assumes that network.json is provided by the operator with the network Subnet and/or Reservations definitions:

# you must create "network.json" which should be a valid Profile object, containing
# the appropriate Params and object information in each param as outlined in the
# examples in the bootstrap-network-subnets and bootstrap-network-reservations
# Params "Documentation" field
curl -fsSL get.rebar.digital/stable | bash -s -- install --universal --initial-profiles="./network.json"

22.43.2.2.7. bootstrap-tools

This is an array of strings where each string is the name of a package to be installed on the base Operating System running on the DRP Endpoint.

This is used by the bootstrapping system to add packages to the DRP Endpoint.

By default no packages are specified. If the operator sets this Param on the self-runner Machine object (either directly or via a Profile), then runs one of the bootstrap workflows, the packages will be installed.

An example workflow is universal-bootstrap.

Example setting in YAML:

bootstrap-tools:
  - package1
  - package2

Or in JSON:

{ "bootstrap-tools": [ "package1", "package2" ] }

22.43.2.2.8. broker-pool/pool

The pool to pull machines from.

22.43.2.2.9. broker/mach-name

The cluster gives a machine a name. This parameter tracks that name.

22.43.2.2.10. broker/name

The broker that provided these machines.

22.43.2.2.11. broker/pass-params-to-workorder

When the cluster-provision task creates a work_order, these parameters will be included in that work_order.

If the Param is not defined for the cluster, then it will be skipped.

This allows pipeline developers to include needed information for the broker from the cluster data.

Note: params prefixed with broker/ or cluster/ are always included and do not need to be added to this param.
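As a sketch, a cluster could list extra params to forward to the work_order (the param names here are illustrative placeholders, not params defined by this package):

```yaml
broker/pass-params-to-workorder:
  - my-app/version
  - my-app/license-key
```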

22.43.2.2.12. broker/set-context

Sets the BaseContext to build machines with when using the broker-provision task.

Only applies for the Context Broker.

22.43.2.2.13. broker/set-icon

Icon to use when creating machines using a broker

Typical values:

  • aws: “aws”

  • azure: “microsoft”

  • default: “cloud”

  • google: “google”

  • linode: “linode”

  • digitalocean: “digital ocean”

  • oracle: “database”

22.43.2.2.14. broker/set-pipeline

When the broker creates machines, it assigns a pipeline to start during the join process. This will determine the operational path that the new machine follows.

This value sets the pipeline that will be started when the machine is created. Typically, the pipeline includes key configuration information and additional tasks to be injected during the configuration process.

22.43.2.2.15. broker/set-workflow

When the broker creates machines, it assigns a workflow to start during the join process. For cloud machines, this is typically universal-start (default). For physical machines, this is typically universal-discover.

For Pipelines, this value must be one of the chained workflow states; consequently, the typical values are universal-start and universal-discover.

22.43.2.2.16. broker/state

Generic object that is constructed by the brokers and stored on the broker machine to track general information.

22.43.2.2.17. broker/tfinfo

Generic object that is constructed by the brokers and stored on the broker machine to track terraform information.

22.43.2.2.18. broker/type

The broker type that provided these machines.

22.43.2.2.19. cloud/cost-lookup

Used by Task ‘cloud-inventory’ to set ‘cloud/cost’.

This is intended for budgetary use only. Actual costs should be determined by inspecting the API for the cloud.

22.43.2.2.20. cloud/meta-apis

Used by Task ‘cloud-inventory’ and ‘cloud-detect-meta-api’ tasks.

Keys:

  • clouds: the ordered list of clouds to test during cloud-detect-meta-api

  • id: the field to use for the instance-id

  • apis: a dictionary per cloud that maps params to the bash commands used to retrieve them

For each apis entry:

  • cloud/instance-id (required)

  • cloud/instance-type (optional)

  • cloud/placement/availability-zone (optional)

  • cloud/public-ipv4 (optional)

apis entries are bash commands that will yield the expected value when evaluated as VAR=$([api command]).
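Under the keys described above, the Param might look like the following sketch. The AWS instance metadata URLs are the standard 169.254.169.254 endpoints, but the exact entry set is an illustrative assumption, not the package's shipped default:

```json
{
  "clouds": ["aws"],
  "id": "cloud/instance-id",
  "apis": {
    "aws": {
      "cloud/instance-id": "curl -s http://169.254.169.254/latest/meta-data/instance-id",
      "cloud/public-ipv4": "curl -s http://169.254.169.254/latest/meta-data/public-ipv4"
    }
  }
}
```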

22.43.2.2.21. cluster/count

Used by cluster automation to determine the size of a cluster. More advanced clusters may use this field to define counts for workers in a multi-tiered cluster.

Defaults to 3. Set to 0 to drain the cluster.

This is used by v4.8+ cluster pipelines to build out the cluster size. Check the specific cluster for details about whether it is used.

For operational guidelines, see Clusters v4.8+
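For example, a cluster profile might set (the count value is illustrative):

```yaml
cluster/count: 5   # scale the cluster to five machines
# cluster/count: 0 # set to zero to drain the cluster
```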

22.43.2.2.22. cluster/destroy

Used by cluster automation to destroy the cluster.

For operational guidelines, see Clusters v4.8+

22.43.2.2.23. cluster/machine-types

For v4.8 Cluster Pattern, this defines a list of machine types that will be added or removed by the plan.

22.43.2.2.24. cluster/machines

For v4.8 Cluster Pattern, this defines a map of machine types and their names. Each type can have additional parameters.

The purpose of this map is to uniquely identify the machines and allows the machines to be rotated in a controlled way.

For operational guidelines, see Clusters v4.8+

If omitted the following items will be set via the designated Param:

  • pipeline: sets the initial pipeline or assigns from broker/set-pipeline [has safe default]

  • workflow: set the initial workflow or assigns from broker/set-workflow [has safe default]

  • icon: sets the initial Meta.icon or assigns from broker/set-icon [has safe default]

  • tags: [] but automatically includes cluster/profile

  • meta: meta data to set on the machine by cluster-add-params; if meta.icon is set, it will overwrite the icon.

  • Params: (note capitalization!): parameters to use in Terraform templates. Also set on machine during cluster-add-params

While most of these values are used only during terraform-apply when building machines, meta and Params are also used during cluster-add-params to update the Meta and Params of the provisioned machines.

Example:

machine:
  names:
    - cl_awesome_1
    - cl_awesome_2
    - cl_awesome_3
  Params:
    broker/set-workflow: universal-application-application-base
    broker/set-pipeline: universal-start
    broker/set-icon: server

22.43.2.2.25. cluster/profile

Name of the profile used by the machines in the cluster for shared information. Typically, this is simply self-referential (it contains the name of the containing profile) to allow machines to know the shared profile.

Note: the default value was removed in v4.7. In versions 4.3-4.6, a default value was provided to make it easier for operators to use clusters without needing to create a cluster/profile value. While helpful, it led to nonobvious cluster errors that were difficult to troubleshoot.

For operational guidelines, see Clusters v4.8+

22.43.2.2.26. cluster/tags

For v4.8 Cluster Pattern, this defines tags added to the machine during construction by the broker.

22.43.2.2.27. cluster/use-single-wait

Flag to indicate that the cluster should do one wait.

When true, the cluster will wait for the maximum timeout for the requested work order to complete. This can cause the requesting task to appear hung.

If false, the cluster will loop using a 10 second wait period for the requested work order(s) to complete.

22.43.2.2.28. cluster/wait-for-members

Used by the cluster-provision task to inject cluster-wait-for-members into the cluster so that it waits for members to be workflow complete.

Normally, pipeline designers can include cluster-wait-for-members one or more times in flexiflow; this Param provides an additional option that builds the wait into cluster-provision.

For operational guidelines, see Clusters v4.8+

22.43.2.2.29. context/container-info

Used by docker-context to provide details on the currently running container.

22.43.2.2.30. context/image-exists-counts

Used by docker-context to determine if an image already exists.

22.43.2.2.31. context/image-name

Used by docker-context for the name of the image.

22.43.2.2.32. context/image-path

Used by docker-context to provide the path in the local filestore for the image to upload.

22.43.2.2.33. context/name

Used by context-set as the target for the Param.BaseContext value.

Defaults to “” (Machine context)

22.43.2.2.34. dr-server/initial-password

Defaults to r0cketsk8ts

22.43.2.2.35. dr-server/initial-user

Defaults to multi-site-manager

22.43.2.2.36. dr-server/initial-version

The version of DRP to install.

Typically: stable or tip, can also be a specific v4.x.x version from the RackN catalog.

This defaults to stable.

22.43.2.2.37. dr-server/install-drpid

If not set, will use “site-[Machine.Name]”

22.43.2.2.38. dr-server/update-content

Boolean to indicate that the content should be updated.

22.43.2.2.39. drive-signatures

A map of drive to SHA1 signatures.
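A sketch of the expected shape, with placeholder device names and digest values:

```json
{
  "/dev/sda": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
  "/dev/sdb": "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"
}
```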

22.43.2.2.40. inventory/CPUCores

From inventory/data, used for indexable data.

22.43.2.2.41. inventory/CPUSpeed

From inventory/data, used for indexable data.

22.43.2.2.42. inventory/CPUType

From inventory/data, used for indexable data.

22.43.2.2.43. inventory/CPUs

From inventory/data, used for indexable data.

22.43.2.2.44. inventory/DIMMSizes

From inventory/data, used for indexable data.

22.43.2.2.45. inventory/DIMMs

From inventory/data, used for indexable data.

22.43.2.2.46. inventory/Family

From inventory/data, used for indexable data.

22.43.2.2.47. inventory/Hypervisor

From inventory/data, used for indexable data.

22.43.2.2.48. inventory/Manufacturer

From inventory/data, used for indexable data.

22.43.2.2.49. inventory/NICDescr

From inventory/data, used for indexable data.

22.43.2.2.50. inventory/NICInfo

From inventory/data, used for indexable data.

22.43.2.2.51. inventory/NICMac

From inventory/data, used for indexable data.

22.43.2.2.52. inventory/NICSpeed

From inventory/data, used for indexable data.

This reports the speed as a number and the duplex state as true/false.

22.43.2.2.53. inventory/NICs

From inventory/data, used for indexable data.

22.43.2.2.54. inventory/ProductName

From inventory/data, used for indexable data.

22.43.2.2.55. inventory/RAM

From inventory/data, used for indexable data.

22.43.2.2.56. inventory/RaidControllers

From inventory/data, used for indexable data.

22.43.2.2.57. inventory/RaidDiskSizes

From inventory/data, used for indexable data.

22.43.2.2.58. inventory/RaidDiskStatuses

From inventory/data, used for indexable data.

22.43.2.2.59. inventory/RaidDisks

From inventory/data, used for indexable data.

22.43.2.2.60. inventory/RaidTotalDisks

From inventory/data, used for indexable data.

22.43.2.2.61. inventory/SerialNumber

From inventory/data, used for indexable data.

22.43.2.2.62. inventory/TpmPublicKey

From inventory/data, used for indexable data.

This is the base64 encoded public key from the TPM.

If an error occurs, it will contain the error.

  • no device - no TPM detected

  • no tools - no tools were available to install

22.43.2.2.63. inventory/check

Using BASH REGEX, define a list of inventory data fields to test using Regular Expressions. Fields are tested in sequence; the first to fail will halt the script.
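Assuming the Param maps inventory field names to regex patterns (an assumption based on the description above; the field values are illustrative), a check might look like:

```yaml
inventory/check:
  Manufacturer: "Dell"
  RAM: "^(64|128)$"
```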

22.43.2.2.64. inventory/collect

Map of commands to run to collect inventory input. Each group includes the fields with jq maps to store. For example, adding drpcli gohai will use the gohai JSON as input; jq will then be run with the provided values to collect inventory into inventory/data as a simple map.

To work correctly, commands should emit JSON.

Special Options:

  • Change the command to parse JSON from other sources

  • Add JQargs to give hints into jq arguments before the parse string

Gohai example:

{
  "gohai": {
    "Command": "drpcli gohai",
    "JQargs": "",
    "Fields": {
      "RAM": ".System.Memory.Total / 1048576 | floor",
      "NICs": ".Networking.Interfaces | length"
    }
  }
}

22.43.2.2.65. inventory/cost

Tracks the estimated daily cost for the instance.

This value can be manually set per machine.

For machines with ‘cloud/provider’ set, ‘cloud-inventory’ will use the ‘cloud/cost-lookup’ table to set this value.

22.43.2.2.66. inventory/data

Stores the data collected by the fields set in inventory/collect. If inventory/integrity is set to true, this is also used as the comparison data.

22.43.2.2.67. inventory/flatten

Creates each inventory value as a top-level Param. This is needed if you want to filter machines in the API by inventory data, for example when using Terraform filters.

This behavior is very helpful for downstream users of the inventory params because it allows them to be individually retrieved and searched.

Note

This will create LOTS of Params on machines. We recommend that you define Params to match fields instead of relying on adhoc Params.

22.43.2.2.68. inventory/integrity

Allows operators to compare new inventory/data to the inventory/data stored on the machine. If true and the values do not match (after the first run), then the Stage will fail.

22.43.2.2.69. inventory/tpm-device

The device to use to query the TPM.

Defaults to /dev/tpm0

22.43.2.2.70. inventory/tpm-fail-on-notools

If set to true, the system will fail if the TPM tools are not present.

Defaults to false.

22.43.2.2.71. manager/allow-bootstrap-manager

This should be set to false in the global profile to turn off auto-updating.

22.43.2.2.72. manager/turn-on-manager

Boolean to indicate if the manager flag should be turned on

22.43.2.2.73. manager/update-catalog

Boolean to indicate if the catalog should be updated

22.43.2.2.74. migrate-machine/complete

This should not be set manually.

22.43.2.2.75. migrate-machine/new-endpoint-token

This is a token for the new endpoint that will create the machine.

22.43.2.2.76. migrate-machine/new-endpoint-url

More to come

22.43.2.2.77. migrate-machine/old-endpoint-token

This is used by the machine-migrate task to clean up the old endpoint after migrating to the new endpoint. Do not manually add the parameter.

22.43.2.2.78. migrate-machine/old-endpoint-url

This is used by the machine-migrate task to clean up the old endpoint after migrating to the new endpoint. Do not manually add the parameter.

22.43.2.2.79. migrate-machine/skip-content-check

Setting the parameter to true will skip the content check. It is recommended that both endpoints have the same content versions when migrating.

22.43.2.2.80. migrate-machine/skip-profiles

Setting the parameter to true will skip checking and creating profiles on the new endpoint. It is recommended that both endpoints have the same profiles when migrating.

22.43.2.2.81. network/firewall-ports

Map of ports to open for firewalld, including the /tcp or /udp suffix.

  • Skip: set to an empty array [] to skip this task.

  • Disable Firewall: including “/” in the list will disable the firewall.
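For example, the following sketch opens SSH plus the default DRP static file (8091) and API (8092) ports; the exact port list is illustrative and depends on your environment:

```yaml
network/firewall-ports:
  - "22/tcp"
  - "8091/tcp"
  - "8092/tcp"
```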

22.43.2.2.82. network/lldp-skip-types

Allows operators to skip LLDP operations for some machine types where LLDP is not commonly needed.

Add “always” to always skip (never run) LLDP. Add “never” to never skip (always run) LLDP. Other machine/type options are: machine, container, virtual, switch, storage.

The default skips are container and virtual.
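For example, to state the default behavior explicitly:

```yaml
network/lldp-skip-types:
  - container
  - virtual
```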

22.43.2.2.83. network/lldp-sleep

Sleep is required in LLDP process to ensure that switches have time to respond to the LLDP request. Depending on your environment, this could be drastically reduced.

If you want to SKIP LLDP, please use the network/lldp-skip-types Param.

22.43.2.2.84. profile-cleanup-selection-pattern

This Param is an array of wildcard patterns of Profile names that should be deleted from a Machine object.

If no values are specified, no profiles will be removed from the system. The profile-cleanup Task can also optionally skip removing the Profiles if profile-cleanup-skip is set to true. Setting the skip to true will still print the pattern-matched profiles on the given machine before exiting with a success code. This allows development testing to determine (by the Job Log contents) what Profiles would have been removed from the system.

Regex patterns are PCRE as implemented by jq in the match() function.

Please note that to explicitly match a single Profile name, you must anchor the pattern with a begin/end anchor, like:

  • ^profile-name-to-match$

Not doing so means that a selection pattern like:

  • foo-bar

will incorrectly match any profile that contains the pattern, for example:

  • baz-foo-bar

  • foo-bar-blatz

The correct exact match for a Profile named foo-bar would be formed like:

  • ^foo-bar$

By default no Profile names or patterns are specified.
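Putting the anchoring guidance above together, a setting might look like this (the pattern values are illustrative):

```yaml
profile-cleanup-selection-pattern:
  - "^foo-bar$"
  - "^test-.*$"
```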

22.43.2.2.85. profile-cleanup-skip

This Param defines if the profile(s) specified by the pattern(s) in the Param profile-cleanup-selection-pattern should be removed or not. This is intended as a validity/verification check when adding new patterns for removing Profiles.

Set this Param to true to prevent the profiles that match specified by the profile-cleanup-selection-pattern Param from being removed from the system.

When set to true all matching Patterns will be output in the Job Log prior to the Task exiting with a success (zero) code. This allows for development/debug testing of the wildcard patterns being used, without actually removing the Profiles from the Machine object.

The default value of false will remove any Profiles that match on the system based on the Param profile-cleanup-selection-pattern wildcard matches.

22.43.2.2.86. reset-workflow

Workflow to set before rebooting system.

22.43.2.2.87. rsa/key-name

File name of the RSA key

22.43.2.2.88. rsa/key-private

Private SSH Key (secure)

No default is set.

22.43.2.2.89. rsa/key-public

Public SSH Key.

No default is set.

22.43.2.2.90. rsa/key-user

SSH Key User.

22.43.2.2.91. storage/mount-devices

Mount Attached Storage

Ordered list of devices to attempt mounting from the OS.

The storage/mount-devices task will attempt to mount all the drives in the list in order. If the desired mount point is already in use, then the code will skip attempting to assign it.

This design allows operators to specify multiple mount points or have a single mount point with multiple potential configurations.

  • rebuild will wipe and rebuild the mount

  • reset will rm -rf all files if the UUID changes

example:

[
  {
    "disk": "/dev/mmcblk0",
    "partition": "/dev/mmcblkp1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "example"
  },
  {
    "disk": "/dev/sda",
    "partition": "/dev/sda1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "put something here"
  }
]

22.43.2.2.93. terraform/debug-plan

If true, captures the Plan generated by terraform-apply for the attached system and stores it in terraform/debug-plan-review attached to the requesting cluster.

If false (default), then terraform-apply will attempt to REMOVE terraform/debug-plan-review to avoid having stale or secure information saved.

WARNING: if true, this exposes secure information in the Plan and should be used only for debug purposes.

22.43.2.2.94. terraform/debug-plan-review

If terraform/debug-plan is true, then the Plan generated by terraform-apply for the attached system is captured and stored in this Param attached to the requesting cluster.

NOTE: Stored as BASE64 Encoded!

In all other cases, terraform-apply will attempt to REMOVE this Param to avoid having stale or secure information saved.

WARNING: This exposes secure information in the Plan and should be used only for debug purposes.

22.43.2.2.95. terraform/map-instance-id

Provides the TF lookup self.[name] reference for when the .id field does not map to the provider's true ID.

Uses the ${self.[path]} address space used from inside the Terraform resource so self. is required.

This allows users to create machines using the Terraform Instance ID from a Terraform run

Typical values:

  • most use: “.id”

  • google: “.instance_id”

22.43.2.2.96. terraform/map-instance-name

Provides the TF self.[name] reference that should be stored in the DRP Machine.Name field.

Uses the ${self.[path]} address space used from inside the Terraform resource so self. is required.

This allows users to create machines using the Terraform Instance Name from a Terraform run

Typical values:

  • aws: “self.private_dns”

  • azure: “self.name”

  • google: “self.name”

  • linode: “self.label”

  • digitalocean: “self.name”

  • pnap: “self.hostname”

22.43.2.2.97. terraform/map-ip-address

Provides the Terraform self.[name] reference that should be stored as the DRP machine's Address.

Uses the ${self.[path]} address space used from inside the Terraform resource, so self. is required.

This allows users to create machines using the instance IP address from a Terraform run.

Typical values:

  • aws: “self.public_ip”

  • azure: “self.public_ip_address”

  • google: “self.network_interface[0].access_config[0].nat_ip”

  • linode: “self.ip_address”

  • digitalocean: “self.ipv4_address”

  • pnap: “element(self.public_ip_addresses,0)”

22.43.2.2.98. terraform/map-private-ip-address

Provides the Terraform self.[name] reference that should be stored as the DRP machine's private IP address.

Uses the ${self.[path]} address space used from inside the Terraform resource, so self. is required.

This allows users to create machines using the private instance IP address from a Terraform run.

If missing, uses terraform/map-ip-address.

Typical values:

  • aws: “self.private_ip”

  • azure: “self.private_ip_address”

  • google: “self.network_interface[0].network_ip”

  • linode: “self.private_ip_address”

  • digitalocean: “self.ipv4_address_private”

  • pnap: “tolist(self.private_ip_addresses)[0]”

  • oracle: “self.private_ip”

22.43.2.2.99. terraform/plan-action

Verb used with Terraform. Generally used to apply or destroy plans.

Defaults to apply.

22.43.2.2.100. terraform/plan-instance-resource-name

This is the name of the terraform resource instance.

For example, linode_instance.

22.43.2.2.101. terraform/plan-instance-template

This is the name of a template that gets placed inside the terraform instance resource.

22.43.2.2.102. terraform/plan-templates

This is an array of strings where each string is a template that renders a Terraform Plan. The templates are built in sequence and then run from a single terraform apply.

Outputs from a plan can be automatically saved on the Machine.

22.43.2.2.103. terraform/set-profiles

Since Terraform creates machines, we can add profiles to the machines during the create process.

An ordered list is provided to allow for multiple profiles
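As a sketch of how these plan-related Params fit together, a hypothetical broker Profile might look like the following (the Profile and template names are invented for illustration):

```yaml
# Hypothetical Profile fragment; the profile and template names are
# invented to illustrate the Params described in the sections above.
---
Name: example-terraform-broker
Params:
  terraform/plan-action: "apply"
  terraform/plan-templates:
    - "my-cloud-base.tf.tmpl"
    - "my-cloud-instances.tf.tmpl"
  terraform/set-profiles:
    - "site-defaults"
    - "cloud-hardening"
```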

22.43.2.3. profiles

The content package provides the following profiles.

22.43.2.3.1. bootstrap-contexts

Bootstrap Digital Rebar server for advanced operation

  • Context Operations

      • Installs Docker and downloads context containers

      • Locks the endpoint to prevent accidental operations

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

22.43.2.3.2. bootstrap-drp-endpoint

Bootstrap Digital Rebar server with the bootstrap operations for:

  • bootstrap-tools - install additional packages (*)

  • bootstrap-ipmi - install ipmitool package and ipmi plugin provider if needed

  • bootstrap-contexts - install the docker-context plugin_provider and contexts in installed content

Intended to be driven by a bootstrapping workflow on the DRP Endpoint (like universal-bootstrap) during DRP Endpoint installation.

Note

(*) The bootstrap-tools specification exists in the bootstrap-ipmi Profile definition. It is not explicitly called out here, as that would duplicate the package install process needlessly.

The bootstrap-ipmi Profile defines the Param bootstrap-tools to contain ipmitool. The Param is a composable Param, so all instances of the Param will be aggregated together in one list, instead of the regular order of precedence.
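Because bootstrap-tools is composable, values set in multiple Profiles aggregate into a single list. The sketch below uses invented profile names to illustrate:

```yaml
# Two illustrative Profiles each contribute to the composable
# bootstrap-tools Param; with both applied to the endpoint, the
# effective value is the aggregate list (ipmitool, jq) rather than
# one Profile overriding the other.
---
Name: ipmi-tools
Params:
  bootstrap-tools:
    - ipmitool
---
Name: site-tools
Params:
  bootstrap-tools:
    - jq
```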

22.43.2.3.3. bootstrap-elasticsearch

Profile to bootstrap elasticsearch

22.43.2.3.4. bootstrap-ipmi

Bootstrap Digital Rebar server with the IPMI plugin provider, and install the ipmitool package for IPMI protocol operations.

22.43.2.3.5. bootstrap-kibana

Profile to bootstrap kibana

22.43.2.3.6. bootstrap-manager

Bootstrap Digital Rebar server into a Manager

22.43.2.3.7. bootstrap-tools

Bootstrap Digital Rebar server with commonly required tools for content/plugins (eg ipmitool).

22.43.2.3.8. resource-context

Manages Context via Resource Broker

22.43.2.3.9. resource-pool

Manages Pool via Resource Broker

22.43.2.4. stages

The content package provides the following stages.

22.43.2.4.1. ansible-inventory

Collects ansible inventory data from ansible’s setup module.

Note

This will attempt to install ansible if it is not already installed.

22.43.2.4.2. ansible-playbooks-local

Invoke ansible playbooks from git.

Note

ansible/playbooks is a Required Param - List of playbook git repos to run on the local machine.

22.43.2.4.3. bootstrap-advanced

DEPRECATED: Use universal-bootstrap with the bootstrap-contexts profile. Bootstrap stage to build out an advanced setup.

This augments the bootstrap-base. It does NOT replace it.

Bootstrap Operations:

  • Install Docker & build Contexts if defined

  • Lock the Machine

22.43.2.4.4. bootstrap-network

Installs specified network Subnets and Reservations found in the profile named bootstrap-network.

See the Task Documentation field for complete usage and example content pieces.

22.43.2.4.5. broker-provision

Using the cluster/machines, cluster/machine-types, and cluster/profile parameters, allocate or destroy machines.

22.43.2.4.6. cloud-inventory

Collect internal API information about a cloud instance

Requires cloud/provider to be set correctly. If not set then does nothing.

Depending on the cloud provider, sets discovered cloud/* data from the cloud’s discovery API including:

  • public-ipv4

  • public-hostname

  • instance-type

  • placement/availability-zone

NOTE: Will throw an error if the reported instance-id does not match the known cloud/instance-id.

22.43.2.4.7. cluster-destroy

Forces the resources to be released via a “de-provisioning” operation.

Also performs an orphan sweep to ensure that the resource broker did not leave machines attached to the cluster before removal.

22.43.2.4.8. cluster-provision

Using the specified parameters on the cluster, create, resize, or empty a cluster.

22.43.2.4.9. drive-signature

Builds a signature for each drive and stores that on the machine.

22.43.2.4.10. drive-signature-verify

Verifies signatures for drives.

22.43.2.4.11. inventory

Collects selected fields from Gohai into a simpler flat list.

The process uses JQ filters, defined in inventory/fields, to build inventory/data on each machine.

Also, applies the inventory/check map to the data and will fail if the checks do not pass.

22.43.2.4.12. inventory-minimal

Set some of the initial inventory pieces that could be useful for other tasks in the discover stages.

22.43.2.4.13. migrate-machine

Stage to migrate machine to new DRP endpoint.

22.43.2.4.14. runner-service

This stage has been deprecated. Use drp-agent from drp-community-content instead.

22.43.2.5. tasks

The content package provides the following tasks.

22.43.2.5.1. ansible-apply

Runs one or more Ansible Playbook templates as defined by the ansible/playbook-templates variable in the stage calling the task.

Requires an ansible context.

Expects to have rsa-key-create run before this stage is called so that rsa/* params exist.

Information can be chained together by having the playbook write [Machine.Uuid].json as a file. This will be saved on the machine as the Param ansible/output. The entire Machine JSON is passed into the playbook as the digitalrebar variable so it is available.
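A playbook can participate in this chaining by writing the JSON file itself. A minimal sketch (the task content is illustrative; the digitalrebar variable is the Machine JSON described above):

```yaml
# Illustrative playbook step: write [Machine.Uuid].json so ansible-apply
# saves its contents back to the machine as the ansible/output Param.
- hosts: all
  tasks:
    - name: Write output for Digital Rebar to capture
      copy:
        dest: "{{ digitalrebar.Uuid }}.json"
        content: "{{ {'status': 'ok'} | to_json }}"
```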

22.43.2.5.2. ansible-inventory

Install ansible, if needed, and record the setup module’s ansible variables onto the machine as a parameter named ansible-inventory.

22.43.2.5.3. ansible-join-up

Runs an embedded Ansible Playbook to run the DRP join-up process.

Requires an ansible context.

Expects to be in a Workflow that allows the joined machine to continue Discovery and configuration steps as needed.

Expects to have rsa-key-create run before this stage is called, and the public key MUST be on the target machine.

Idempotent - checks to see if service is installed and will not re-run join-up.

22.43.2.5.4. ansible-playbooks-local

A task to invoke a specific set of ansible playbooks pulled from git.

Sequence of operations (loops over all entries):

  1. collect args if provided

  2. git clone repo to name

  3. git checkout commit if provided

  4. cd to name & path

  5. run ansible-playbook playbook and args if provided

  6. remove the directories

Note

Requires Param ansible/playbooks - List of playbook git repos to run on the local machine.
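An entry in ansible/playbooks might look like the sketch below. The field names are assumptions inferred from the sequence of operations above; consult the Param’s own schema for the authoritative shape.

```yaml
# Hedged sketch; field names are inferred from the documented sequence
# (clone repo to name, checkout commit, cd to name & path, run playbook).
ansible/playbooks:
  - name: hello-world
    repo: "https://github.com/example/ansible-playbooks.git"
    commit: "main"
    path: "playbooks"
    playbook: "site.yml"
    args: "--tags base"
```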

22.43.2.5.5. bootstrap-container-engine

Install Pre-reqs for Docker-Context.

This task is idempotent.

Stage installs Podman and requires access to the internet

22.43.2.5.6. bootstrap-contexts

Download RackN containers defined in Contexts

This task is idempotent.

Attempts to upload images for all Docker-Contexts from RackN repo.

If Meta.Checksum exists, a checksum test will be performed. If the checksums do not match, the file will be removed! WARNING: No checksum test is performed if Meta.Checksum is missing.

If Meta.Imagepull is provided, the URL will be used to download the image from the given address.

If Meta.Imagepull is not provided, the container will be pulled from the RackN repository get.rebar.digital/containers at https://get.rebar.digital/containers/[image].tar.gz.
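The default pull location can be sketched as a simple URL construction (the image name here is an example; the actual logic lives in the task’s templates):

```shell
# Build the default download URL used when Meta.Imagepull is absent.
# The image name is illustrative, not a real context container.
image="runner"
url="https://get.rebar.digital/containers/${image}.tar.gz"
echo "$url"
```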

DEV NOTE: ExtraClaims for resource_brokers also requires a matching entry for machines. This is expected behavior due to the Digital Rebar backend implementation of Resource Brokers.

22.43.2.5.7. bootstrap-ipmi

This task bootstraps the DRP Endpoint for Baseboard Management Control capabilities by installing the ipmi plugin provider and the ipmitool package on the DRP Endpoint operating system.

This Task utilizes the external template bootstrap-tools.sh.tmpl, which must be configured with a list of packages. In this case, the bootstrap-tools array Param must be set to include the ipmitool package.

Generally, the bootstrap process is controlled by a bootstrapping workflow (eg universal-bootstrap) which uses a Profile to expand the bootstrap workflow. This Profile should contain the Param value setting, since Tasks do not carry their own Param or Profile definitions.

22.43.2.5.8. bootstrap-manager

Turn the DRP Endpoint into a Manager and update cache

22.43.2.5.9. bootstrap-network

Used (primarily) during the DRP Endpoint bootstrap configuration process to install Subnets (DHCP “pools”) and Reservations (static DHCP assignments).

It uses the bootstrap-network-subnets and bootstrap-network-reservations Params to specify the appropriate objects to install during the bootstrap stages.

If a Profile named bootstrap-network exists on the system, the values in it are used. The Profile must exactly match this name. The Subnet and Reservation objects added via the Profile must be valid Object types with the appropriate required values (eg Reservations require Strategy: MAC).

This is typically used at DRP installation time with an installer command like the following (NOTE that bootstrap-network.yaml is a Content Pack!):

# you must create a Content Pack which should contain a valid Profile object
# which carries the network bootstrapping configuration data. The below
# example assumes the Content Pack has been bundled up and named "bootstrap-network.yaml".
curl -fsSL get.rebar.digital/stable | bash -s -- install --universal --initial-contents="./bootstrap-network.yaml"

An example of the bootstrap-network.yaml Content Pack in the above command might look like the following (see the individual Params documentation for more details on use of them):

# NOTE - this is a DRP Content Pack, and is applied to the system
#        with 'drpcli contents update ./bootstrap-network.yaml'
# YAML example Subnets and Reservations for bootstrap-network task
---
meta:
  Name: bootstrap-network
  Version: v1.0.0
sections:
  profiles:
    bootstrap-network:

      Name: bootstrap-network
      Description: Profile for network bootstrap of subnets and reservations.
      Documentation: |
        Uses the bootstrap-network task for DRP Endpoint configuration.

        Installs 2 Subnets and 2 Reservations during DRP Endpoint bootstrap.

      Meta:
        color: blue
        icon: sitemap
        title: Digital Rebar Provision
      Params:
        bootstrap-network-reservations:
          - Addr: 192.168.124.222
            Token: "01:02:03:04:05:00"
            Strategy: MAC
          - Addr: 192.168.124.223
            Token: "01:02:03:04:05:01"
            Strategy: MAC
        bootstrap-network-subnets:
          - ActiveEnd: 10.10.10.254
            ActiveStart: 10.10.10.10
            Name: subnet-10
            OnlyReservations: false
            Options:
            - Code: 1
              Description: Netmask
              Value: 255.255.255.0
            - Code: 3
              Description: DefaultGW
              Value: 10.10.10.1
            - Code: 6
              Description: DNS Server
              Value: 1.1.1.1
            - Code: 15
              Description: Domain
              Value: example.com
            Subnet: 10.10.10.1/24
          - ActiveEnd: 10.10.20.254
            ActiveStart: 10.10.20.10
            Name: subnet-20
            OnlyReservations: false
            Options:
            - Code: 1
              Description: Netmask
              Value: 255.255.255.0
            - Code: 3
              Description: DefaultGW
              Value: 10.10.20.1
            - Code: 6
              Description: DNS Server
              Value: 1.1.1.1
            - Code: 15
              Description: Domain
              Value: example.com
            Subnet: 10.10.20.1/24

Note

This can be generated using the drpcli contents bundle operation. Simply create a minimal content pack on disk, with a profiles/bootstrap-network.yaml Object containing the appropriate Param definitions.

22.43.2.5.10. bootstrap-tools

If the Param bootstrap-tools contains an Array list of packages, then this task will install those packages on the DRP Endpoint, when used with one of the bootstrap workflows (eg universal-bootstrap).

By default, no packages are defined in the bootstrap-tools Param, so this task will no-op exit.
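For example, a Profile attached during universal-bootstrap could populate the Param like this (the Profile name and package list are illustrative):

```yaml
# Illustrative Profile fragment; the packages shown are examples only.
---
Name: my-bootstrap-tools
Params:
  bootstrap-tools:
    - ipmitool
    - jq
```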

22.43.2.5.11. broker-context-manipulate

This task will use parameters to drive the machine construction.

22.43.2.5.12. broker-pool-manipulate

This task will use parameters to drive the machine construction.

22.43.2.5.13. broker-provision

This task will use parameters to drive the machine construction.

Detect Broker Type and injects tasks specific to the broker type:

  • cloud-terraform - uses terraform-apply to run Terraform plan

  • pool - uses DRP pooling to allocate machines

  • context - creates contexts

When using the context broker, pass broker/set-context in order to change the starting context.

22.43.2.5.14. cloud-detect-meta-api

Uses various Cloud Meta APIs to infer the cloud plugin_provider

Detects clouds based on cloud/meta-apis values

If detected, will set the cloud/provider

22.43.2.5.15. cloud-inventory

Collect internal API information about a cloud instance

Requires cloud/provider to be set correctly. Cloud provisioners should set this field automatically. You can use cloud-detect-meta-api to discover the cloud/provider by inspection.

If cloud/provider is not set then does nothing.

Depending on the cloud provider, sets discovered cloud/* data from the cloud’s discovery API including:

  • public-ipv4

  • public-hostname

  • instance-type

  • placement/availability-zone

NOTE: Will throw an error if the reported instance-id does not match the known cloud/instance-id.

22.43.2.5.16. cluster-add-params

When building a cluster, this task will also add Params for each cluster/machine-type to the created machines.

This is needed because we do not pass Params through the Terraform-Apply task when creating machines.

22.43.2.5.17. cluster-empty

After de-provisioning, check for cluster assigned machines and clean them up if they still exist

22.43.2.5.18. cluster-provision

This task will use parameters to drive the creation of a cluster, resize the cluster, or remove elements from the cluster.

22.43.2.5.19. cluster-restart-pipeline

This task forces all machines in a cluster to restart the machine pipeline instead of just the newly added ones.

WARNING: Depending on the pipeline, this may be disruptive to cluster operation!

This behavior is very handy if you are CHANGING the pipeline of an existing cluster. For example, if you pre-provision systems into a waiting state, then this task will ensure that the pre-provisioned machines process the new pipeline.

22.43.2.5.20. cluster-set-destroy

Sets the cluster destroy flag to true to ensure that the cluster destroy operations are run in the workflow

22.43.2.5.21. cluster-to-pool

When building a cluster, this task will also add the unassigned cluster members into a pool using the pool allocate command.

The name of the pool is determined by the cluster/profile.

22.43.2.5.22. cluster-wait-for-members

This task makes the cluster pipeline wait until all the cluster members have completed their workflows. It does this by waiting on individual machines to reach the WorkflowComplete = true state. This means that the loop is waiting on events, not polling the API.

Since there is no “break” in the loop, you cannot stop this loop from the cluster pipeline; however, you can easily break the loop by clearing the cluster machines’ Workflow and marking them Ready.

22.43.2.5.23. context-set

This task allows stages to change the BaseContext for a machine as part of a larger workflow.

This is especially helpful when creating a new machine using an endpoint context, such as using Terraform to create a machine, and then allowing the machine to transition to a runner when it starts.

The code is written to allow both clearing BaseContext (set to “”) and setting it to a known value. If setting to a value, the code ensures that the Context exists.

22.43.2.5.24. dr-server-install

Installs DRP Server. Sets DRP-ID to Machine.Name

LIMITATIONS: firewall features are only available for the CentOS family.

The primary use cases for this task are:

  1. drp-server pipeline

Will transfer the DRP license to the machine being created.

For operators, this feature makes it easy to create new edge sites using DRP Manager.

22.43.2.5.25. dr-server-install-ansible

Installs DRP Server using the Ansible Context. Sets DRP-ID to Machine.Name

NOTE: All connections are made FROM THE MANAGER; the provisioned site does NOT need to connect to the manager.

LIMITATIONS: firewall features are only available for the CentOS family.

The primary use cases for this task are:

  1. Creating a remote site for Multi-Site-Manager

  2. Building Development installations for rapid testing

Requires the install.sh and zip artifacts to be available under bootstrap/*

Will transfer the DRP license to the machine being created.

If DRP is already installed, will restart DRP and update license

For operators, this feature makes it easy to create new edge sites using DRP Manager.

22.43.2.5.26. drive-signature

Generate a signature for each drive and record them on the machine.

22.43.2.5.27. drive-signature-verify

Using the signatures on the machine, validate each drive’s signature matches.

22.43.2.5.28. elasticsearch-setup

A task to install and set up elasticsearch. This is a very simple single instance.

22.43.2.5.29. git-build-content-push

This task is used to automatically import and synchronize content packages from a git repository. The process DOES NOT assume anything about the content; consequently, the repository MUST have scripts to facilitate building and uploading the content.

When used with the git-lab-trigger-webhook-push trigger, the repository param will be automatically populated when MergeData is set to true. This allows the webhook data to automatically flow into the blueprint.

Note: At this time, the repository cannot require authentication.

The process is:

  1. clone the git repository named in the repository param value

  2. run the tools/build_content.sh script

  3. optionally add the content to the system (if manager/update-catalog is true)

  4. optionally rebuild the manager catalogs (if dr-server/update-content is true)

This assumes that the output of the content script will go into a directory structure like:

rebar-catalog/<cp-name>/<version>.json

22.43.2.5.30. inventory-check

Using the inventory/collect parameter, filters the JSON command output into inventory/data hashmap for rendering and validation.

Will apply inventory/check for field validation.

22.43.2.5.31. inventory-cost-calculator

Stores cost estimate in inventory/cost

For cloud costs (requires cloud/provider), will use the cloud/instance-type parameter and lookup cost models from cloud/cost-lookup

If no specific type match is found, the cost will be approximated using the following formula: [RAM of machine in GB] * [ram_multiple] * [fallback]

At this time, no action for non-cloud machines
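The fallback formula can be sketched numerically. The variable names and values below are illustrative, not the task’s actual lookup keys:

```shell
# Illustrative fallback estimate: [RAM in GB] * [ram_multiple] * [fallback].
ram_gb=16
ram_multiple=2    # assumed multiplier from a cloud/cost-lookup model
fallback=0.01     # assumed fallback rate
cost=$(awk -v r="$ram_gb" -v m="$ram_multiple" -v f="$fallback" 'BEGIN{print r*m*f}')
echo "$cost"
```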

22.43.2.5.32. inventory-cost-summation

Sums inventory/cost for machines in clusters or brokers. If the value is missing from a machine, it is assumed to be 0.

Stores cost estimate in inventory/cost

22.43.2.5.33. inventory-minimal

Sets the machine/type parameter for other tasks to use later.

22.43.2.5.34. kibana-setup

A task to install and set up kibana. This is a very simple single instance.

22.43.2.5.35. linux-package-enterprise

This task is designed to work on select Linux versions and families. This should be injected by Universal Workflow if needed.

Installs the EPEL libraries on Linux families that use them

This task performs that step so that install tasks do not have to repeat the work.

22.43.2.5.36. linux-package-updates

This task is designed to work on multiple Linux versions and families. This should be injected by Universal Workflow if needed.

Before doing package manager install, Linux families expect that you have updated the packages.

This task performs that update step so that install tasks do not have to repeat that work.

22.43.2.5.37. manager-join-cluster-members

Walk the cluster members and add them to the manager (self) if not present.

22.43.2.5.38. manager-remove-cluster-members

Walk the cluster members and remove them from the manager (self) if present.

22.43.2.5.39. migrate-machine

The migrate-machine task does the following:
  • gathers machine information on the old endpoint.

  • normally checks the contents and versions are the same. This can be skipped with the migrate-machine/skip-content-check parameter.

  • normally checks for profiles on the machine that don’t exist on the new endpoint and creates them there. This can be skipped with the migrate-machine/skip-profiles parameter.

  • creates a new machine with the same UUID.

  • updates the machine’s parameter

  • updates the drp-agent config with new endpoint and machine token on the new endpoint.

  • restarts the agent

At this point the task restarts, detects that migrate-machine/new-drp-server-complete is set to true, and runs the following on the new endpoint:
  • Deletes all jobs related to the machine on the old endpoint.

  • Removes the machine from the old endpoint.

22.43.2.5.40. network-firewall

Requires firewall-cmd or ufw to be enabled on the system; will attempt to install one if missing.

Will reset the firewall at the end of the task.

Including port / in network/firewall-ports will disable the firewall.

No action when running in a context
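A hedged sketch of the Param (the “port/protocol” string format is an assumption, not taken from the Param schema; per the note above, an entry of / disables the firewall entirely):

```yaml
# Hedged sketch; the entry format is assumed for illustration.
network/firewall-ports:
  - "22/tcp"
  - "8092/tcp"
```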

22.43.2.5.41. network-lldp

Assumes Sledgehammer has LLDP daemon installed so that we can collect data.

Also requires LLDP to be enabled on neighbors, including switches, so they send the correct broadcast packets.

22.43.2.5.42. profile-cleanup

Removes Profiles on a Machine, based on the array of specific or wildcard-matched Profiles in the Param profile-cleanup-selection-pattern.

Use of the profile-cleanup-skip Param set to true will output the matching Profiles in the Job Log, but skip the actual removal of the profile(s) from the machine.

22.43.2.5.43. rsa-key-create

Uses ssh-keygen to create an RSA public/private key pair.

Stores keys in rsa/key-public and rsa/key-private on the cluster profile.

The public key (which contains newlines) is stored in single line format where newlines are removed.
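The single-line transformation can be sketched in shell (the key material below is a placeholder, not a real key):

```shell
# Collapse a multi-line public key into the single-line form stored in
# rsa/key-public. The key content here is a placeholder.
pub="ssh-rsa AAAAB3Nza
bCdEf user@example"
single_line=$(printf '%s' "$pub" | tr -d '\n')
echo "$single_line"
```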

Noop if the rsa/key-private exists.

This does NOT set the rsa/key-user (default: root) because this is often set specifically by the environment on each machine.

22.43.2.5.44. stage-chooser

This task uses the stage-chooser/next-stage and stage-chooser/reboot parameters to change the stage of the current machine and possibly reboot.

This is not intended for use in a stage chain or workflow, but as a transient stage that can be used on a machine that is idle with a runner executing.

22.43.2.5.45. storage-mount-devices

Uses array of devices from storage/mount-devices to attach storage to system.

If a storage area is needed, this task will mount the requested resources under /mnt/storage.

See Mount Attached Storage

22.43.2.5.46. terraform-apply

Runs one or more Terraform Plan templates as defined by the terraform/plan-templates variable in the stage calling the task.

Requires a terraform context with Terraform v1.0+, and plans must comply with v1.0 syntax.

The terraform apply is only called once. All plans in the list are generated first. If sequential operations beyond the plan are needed, use multiple calls to this task.

Only DRP API, Provisioning URL, RSA Public Key and SSH user are automatically passed into the plan; however, the plans can use the .Param and .ParamExists template to pull any value needed.

Terraform State is stored as a Param ‘broker/tfinfo’ on the Cluster Machine after first execution. It is then retrieved for all subsequent runs so that Terraform is able to correctly use its state values. The ‘broker/tfinfo’ parameter is a map of brokers that can be used to track state. Anything can be stored in this parameter.

The synchronize.sh script is used by “local-exec” to connect/update/destroy machines from Terraform into Digital Rebar.

To match existing machines, cloud/instance-id and broker/name are used first. Name is used as a backup.

When updating/creating sets the Params for
  • cloud/instance-id

  • broker/name

  • cloud/provider

  • rsa/key-user (if available in broker)

Notes:

  • To create SSH keys, use the rsa-key-create generator task.

  • If creating cloud machines, use the cloud-init task for join or flexiflow to add ansible-join.

  • When using the synchronize operations, you must define terraform/map-ip-address and terraform/map-instance-name for the created machines.

  • Setting terraform/debug-plan to true will cause the TF plan to be written to terraform/debug-plan-review. This is UNSAFE and for debugging only.

22.43.2.5.47. workflow-pause

This task will set the machine to NOT runnable, but leave the runner running. This acts as a pause in the workflow for another system to restart.

22.43.2.6. triggers

The content package provides the following triggers.

22.43.2.6.1. manager-nightly-catalog-update

Trigger to update the manager catalog nightly at 1:10am.

22.43.2.7. version_sets

The content package provides the following version_sets.

22.43.2.7.1. license

Synchronize RackN entitlements file (which is defined as a content pack) with the edge site.

This version set requires that the license file be saved in the manager’s files area under rebar-catalog/rackn-license/v1.0.0.json

The bootstrap-manager task will automatically place the license file in this location.

22.43.2.8. workflows

The content package provides the following workflows.

22.43.2.8.1. bootstrap-advanced

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations. Use universal-bootstrap with bootstrap-contexts profile

Bootstrap Digital Rebar server for advanced operation. Includes the bootstrap-base!

REQUIRES that the Endpoint Agent has been enabled.

  • Basic Operations

      • Make sure Sledgehammer bootenvs are loaded for operation

      • Set the basic default preferences

      • Set up an ssh key pair and install it to the global profile

      • Lock the endpoint to prevent accidental operations

  • Advanced Operations

      • Install Docker and download context containers

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

22.43.2.8.2. broker-provision

Requires that the operator has created Contexts for “runner” and “terraform”, then starts the DRP Agent using Cloud-Init by default. Ansible Join-Up can be injected using Flexiflow for clouds that cannot inject Cloud-Init.

After v4.8, the workflow operates on a cluster or resource broker only.

  • Machines are created/destroyed during provisioning and mapped to the cluster by adding the cluster/profile.

  • Machines use the linux-install application when created

22.43.2.8.3. centos-7-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the CentOS 7 ISO as per the centos-7 BootEnv

22.43.2.8.4. centos-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the CentOS ISO as per the centos-8 BootEnv

22.43.2.8.5. discover-joinup

Discover, run as a service and complete-nobootenv

This workflow is recommended for joining cloud machines instead of discover-base.

NOTE: You must set defaultBootEnv to sledgehammer in order to use join-up to discover machines.

Some operators may choose to first create placeholder machines and then link with join-up.sh to the placeholder machine model using the UUID. See the ansible-join-up task for an example.

Complete-nobootenv ensures that Digital Rebar does not force the workflow into a bootenv (potentially rebooting) when finished.

22.43.2.8.6. fedora-33-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora 33 provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-33 BootEnv

22.43.2.8.7. fedora-34-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora 34 provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-34 BootEnv

22.43.2.8.8. fedora-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-31 BootEnv

22.43.2.8.9. migrate-machine

This workflow will migrate a machine from one endpoint to another.

A profile called migrate-machine can be created. It will not be added to the machine on the new endpoint.

The following parameters are required:

  • migrate-machine/new-drp-server-api-url: URL to the new endpoint (https://localhost:8092 as an example)

  • migrate-machine/new-drp-server-token: A token for the new endpoint that can create/modify profiles and machines

The following optional parameters will skip certain steps that are otherwise recommended:

  • migrate-machine/skip-content-check: If set to true, the task will skip verifying that the content is the same between both endpoints.

  • migrate-machine/skip-profiles: If set to true, the task will skip creating new profiles. Any profiles on the old machine will be lost when created on the new endpoint.

The following parameters should not be added or changed manually:

  • migrate-machine/new-drp-server-complete: The task will not migrate if set to true. It defaults to false.

  • migrate-machine/old-endpoint-url: Used to clean up the old endpoint after migration.

  • migrate-machine/old-endpoint-token: Used to clean up the old endpoint after migration.
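A minimal migrate-machine Profile carrying the required Params might look like this (the URL matches the example above; the token value is a placeholder):

```yaml
# Example Profile for the migrate-machine workflow; the token value
# is a placeholder, not a real credential.
---
Name: migrate-machine
Params:
  migrate-machine/new-drp-server-api-url: "https://localhost:8092"
  migrate-machine/new-drp-server-token: "PLACEHOLDER-TOKEN"
```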

The migrate-machine task does the following:
  • gathers machine information on the old endpoint.

  • normally checks the contents and versions are the same. This can be skipped with the migrate-machine/skip-content-check parameter.

  • normally checks for profiles on the machine that don’t exist on the new endpoint and creates them there. This can be skipped with the migrate-machine/skip-profiles parameter.

  • creates a new machine with the same UUID.

  • updates the machine’s parameter

  • updates the drp-agent config with new endpoint and machine token on the new endpoint.

  • restarts the agent

At this point the task restarts, detects that migrate-machine/new-drp-server-complete is set to true, and runs the following on the new endpoint:
  • Deletes all jobs related to the machine on the old endpoint.

  • Removes the machine from the old endpoint.

22.43.2.8.10. ubuntu-20.04-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows.

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Ubuntu-20.04 ISO as per the ubuntu-20.04 BootEnv

22.43.2.8.11. ubuntu-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Ubuntu-18.04 ISO as per the ubuntu-18.04 BootEnv