18.1.1. Example API Training Scenario¶
In this API training scenario, the developer will drive the following behaviors and features of the Digital Rebar Platform (DRP). All example API usage is documented using curl calls, so it can be exercised at the command line. The curl calls should provide enough detail for a developer to adapt them to their preferred language for manipulating the API.
This example API interaction scenario assumes that the developer has familiarity with the DRP solution, terminology, and basic operational patterns for managing infrastructure with DRP. Additionally, it is assumed that the DRP Endpoint being manipulated is already fully set up and configured for use with Universal pipelines. DRP should be at v4.7.0 or newer, with all associated content also at v4.7.0 or newer.
Primary API Documentation can be found at:
Digital Rebar Provision API documentation
Or via the Swagger UI on an installed DRP Endpoint for interactive exploration and use of the API, at:
https://DRP_ENDPOINT:8092/swagger-ui (replace DRP_ENDPOINT as appropriate)
In the Swagger UI, authenticate to the DRP Endpoint via the “Authorize” button.
RackN Intro to API training deck is at:
Using “curl” with the DRP API:
18.1.1.1. Objectives¶
The following objectives will be described in this document.
Authentication to the DRP Endpoint, and obtaining a JWT Token
Use of the DRP “Universal” content system, enabling “Pipelines”
Pre-creation of a Machine object, as opposed to the traditional PXE Discovery mechanism
Transition a created machine through a Universal pipeline provisioning process
The machine provisioning process will utilize the “image-deploy” feature to deliver an Operating System payload to the Machine
Deprovision a machine and return it to a waiting state, after wiping the disks
Destroy the Machine object associated with a machine
Use the Callback system to send a customized Payload to an external RESTful API endpoint
18.1.1.1.1. DRP Endpoint Installation¶
Generally speaking, the following command line should set up DRP from scratch to a baseline Universal configuration, minus any OS ISOs or Image Deploy artifact files.
# assumes a valid license file named "rackn-license.json" exists in current working directory
# set the '--drp-id' appropriately for your DRP Endpoint ID
curl -s get.rebar.digital/tip | bash -s -- install --universal --initial-contents=rackn-license.json --drp-id=api-test-endpoint --initial-profiles=bootstrap-drp-endpoint
More complete documentation on installation and setup of the DRP Endpoint can be found at:
18.1.1.2. Detailed API Usage¶
Below are the detailed implementation training instructions as described in the Objectives section.
All of the following code examples make heavy use of shell variables along with the curl examples. The main variables are defined as follows:
export RS_ENDPOINT=https://drp-endpoint.example.com:8092
export RS_USER=rocketskates
export RS_PASSWORD=r0cketsk8ts
export RS_KEY=$RS_USER:$RS_PASSWORD
export HEADERS="-H 'Content-Type: application/json' -H 'Accept: application/json'"
In addition, after initial authentication, the following will be set and is assumed to be used for authentication in all subsequent curl command line calls.
export RS_TOKEN=<token>
export HEADERS="$HEADERS -H 'Authorization: Bearer $RS_TOKEN'"
export CURL="curl -s -X GET -k -u $RS_KEY $HEADERS"
18.1.1.2.1. Authentication¶
18.1.1.2.1.1. Username / Password to Obtain a JWT Token¶
Initial authentication is with a traditional User/Password pair. However, a developer should request a Token with an appropriate time-to-live (TTL) value. Tokens can also be highly scoped to provide security controls, should one “escape into the wild”. See more about authentication in DRP:
Authentication Models section of the documentation
Here is how to submit a User/Pass pair to obtain a JWT Token for subsequent use. This example obtains the token and assigns it to the RS_TOKEN variable for future use in the shell.
TTL=86400         # token good for 24 hours
ROLES=superuser   # superuser role
export RS_TOKEN=$(curl -s -X GET -k -u $RS_KEY $HEADERS "$RS_ENDPOINT/api/v3/users/drpadmin/token?ttl=$TTL&roles=$ROLES" | jq -r '.Token')
18.1.1.2.1.1.1. Example Response¶
# none - environment Variable RS_TOKEN gets set for auth use
Verify the token works with a quick “info” API call:
curl -k -s -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" $RS_ENDPOINT/api/v3/info | jq .version
Should return a JSON string, like:
"v4.7.0"
18.1.1.2.1.2. Token Auth Use¶
Subsequent curl calls in this documentation may be called out as follows; substitute variables as defined in the above section, where appropriate:
$CURL "$RS_ENDPOINT/api/v3/ ... "
Where “$CURL” expands to something like:
curl -s -X GET -k -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Authorization: Bearer <...snip...>'
“<...snip...>” would be replaced with a very long Token string.
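For example, the shorthand can be used to list the Names of all Machines known to the endpoint. The sketch below is shown fully expanded; the shorthand form would be $CURL "$RS_ENDPOINT/api/v3/machines":
# list the Name of every Machine on the endpoint
curl -k -s -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  "$RS_ENDPOINT/api/v3/machines" | jq -r '.[].Name'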
18.1.1.3. Create Machine Object¶
Machine objects are JSON blobs that represent the last recorded state of a given physical device (called “Machine” in DRP parlance; as opposed to a “host”, “node”, “server” or similar). As a Machine passes through Workflow tasks in various pipelines, the state of the Machine is recorded on:
Machine object fields
Variable bits of information captured in the Params field of the Machine object (generally all Params are “type defined” for safety)
Profiles (collections of Param values), in the Profiles field of the Machine object
In addition, the Machine object carries the state of the DRP “Runner” or “Task System”. These are recorded in various Fields on the Machine object.
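To see these pieces on a live object, the Machine can be fetched and filtered with jq. A minimal sketch, assuming a Machine UUID is already known:
UUID=<MACHINE_ID>
# show a few Fields, plus the Params map and Profiles list, from the Machine object
curl -k -s -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  "$RS_ENDPOINT/api/v3/machines/$UUID" | jq '{Name, BootEnv, Workflow, Params, Profiles}'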
Machine objects are usually dynamically created when a Machine first PXE boots; if it is unknown to the system, a new Machine object is created. As the Machine passes through a typical Inventory workflow (eg “universal-discover”), a tremendous amount of detailed information about the state of the machine is recorded on the Machine object.
It is possible to “pre-create” a Machine object which will match the physical characteristics of a device that subsequently PXE boots on the network. Control of the machine occurs when details of the booting machine match the appropriate signatures in an existing Machine object. Typically, the primary values that correlate a Machine object to a physical device are the Network Interface Card (NIC) MAC (Media Access Control) addresses of the booting interface.
Creating a minimal Machine object can be done by generating a valid JSON structure with just the “Name” field of a machine. All other fields will be dynamically filled with default values. HOWEVER, to correlate a Machine to a physical device, additional details (MAC addresses at a minimum) must be provided.
When the Machine object is first created, a UUID value will be randomly generated and assigned to the Machine. It is possible during Machine object creation to pre-define a UUID, which can be used to match external infrastructure UUID representations for that machine (e.g. UUID references in other asset management services).
Example of creating a minimally defined Machine object:
UUID=$(uuidgen | tr '[:upper:]' '[:lower:]')   # must be lowercase, thank you MacOS X
METHOD="POST"
curl -k -s -X $METHOD -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d ' {
  "Name": "test-machine",
  "Uuid": "'$UUID'",
  "Description": "machine description",
  "HardwareAddrs": [
    "00:00:00:99:99:00",
    "00:00:00:99:99:01",
    "00:00:00:99:99:02",
    "00:00:00:99:99:03"
  ]
} ' "$RS_ENDPOINT/api/v3/machines"
Note that the HardwareAddrs list is the critical connection between the Machine object and the booting physical device via DHCP/PXE. These must be correct, and they must be unique.
In the sample above, the UUID is dynamically generated, but as long as it is unique, and a validly formed UUID, any value can be used (e.g. existing UUID reference in external asset management database).
reference: Universally Unique Identifier
18.1.1.3.1. Example Response¶
A JSON response with the newly created Machine object will be returned along with an HTTP 200 success code. For error codes (HTTP Response Codes) that you may encounter, see the Swagger UI or API Documentation.
{ "Validated": true, "Available": true, "Errors": [], "ReadOnly": false, "Meta": { "feature-flags": "change-stage-v2" }, "Endpoint": "", "Bundle": "", "Partial": false, "Name": "test-machine", "Description": "machine description", "Uuid": "1382ae65-5d95-4ba4-a728-68b0d4eadb46", "CurrentJob": "", "Address": "", "Stage": "discover", "BootEnv": "sledgehammer", "Profiles": [], "Params": {}, "Tasks": [ "stage:discover", "bootenv:sledgehammer", "enforce-sledgehammer", "set-machine-ip-in-sledgehammer", "reserve-dhcp-address", "ssh-access", "stage:sledgehammer-wait" ], "CurrentTask": -1, "RetryTaskAttempt": 0, "TaskErrorStacks": [], "Runnable": true, "Secret": "uKu8c_b1-gvZmHe-", "OS": "", "HardwareAddrs": [ "00:00:00:99:99:00", "00:00:00:99:99:01", "00:00:00:99:99:02", "00:00:00:99:99:02" ], "Workflow": "discover-base", "Arch": "amd64", "Locked": false, "Context": "", "Fingerprint": { "SSNHash": "", "CSNHash": "", "SystemUUID": "", "MemoryIds": [] }, "Pool": "default", "PoolAllocated": false, "PoolStatus": "Free", "WorkflowComplete": false }
Note that the Machine object will be placed in the defined defaultWorkflow according to the DRP Endpoint Info & Preferences setting. Once the physical machine that matches this Machine object definition PXE boots, it will immediately transition into this Workflow. If this behavior is not desired, the developer should reset the Machine runner state and Workflow settings accordingly, before the Machine PXE boots.
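One way to do that is to clear the Workflow field before the Machine PXE boots, using the same JSON PATCH pattern shown later in this document; a minimal sketch:
UUID=<MACHINE_ID>
# clear the Workflow so the pre-created Machine does not start the default Workflow on first boot
curl -k -s -X PATCH -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d '[{"op": "replace", "path": "/Workflow", "value": ""}]' \
  "$RS_ENDPOINT/api/v3/machines/$UUID" | jq '.Workflow'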
18.1.1.4. Configuring the Machine¶
Customization of the Machine object is generally performed by manipulating one of three components of the Machine JSON data structure that represents the state and configuration of the device:
JSON Fields - Generally contain identifying details of the machine, along with the management and control of the Workflow operations
Params (a top level JSON field on the machine object) - Params are used to record state information and to provide configuration details that Workflow tasks/jobs consume
Profiles (also a top level JSON field on the machine object) - provide a convenient grouping of Param values
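As an illustration of the first item, a simple Field such as Description can be changed directly with a JSON PATCH; a minimal sketch:
UUID=<MACHINE_ID>
# replace the Description Field on the Machine object
curl -k -s -X PATCH -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d '[{"op": "replace", "path": "/Description", "value": "updated machine description"}]' \
  "$RS_ENDPOINT/api/v3/machines/$UUID" | jq '.Description'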
18.1.1.4.1. Setting the Universal Application¶
Universal workflow transitions allow for complete zero touch operations, encompassing Discovery, Inventory, Classification, Hardware Lifecycle Management, OS installations, Cluster Configuration, and final application/system configuration. In addition, integrating with external infrastructure services is baked into the Universal workflows.
There are Param values that must be set to define the zero touch journey that will guide the machine’s transition to its final target destination state. Generally they are:
universal/application - defines OS selection and optional additional configuration details that will be added to the system based on automatic Profile matching
linux/install-bootenv - for Kickstart/Preseed Linux OS deployments; defines the distro and version of Linux to install (a sketch follows this list)
image-deploy/* - these Params define the single artifact Image and accompanying configuration details for non-Kickstart/Preseed installations; generally a single image is described in a Profile containing the Param values
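For a Kickstart/Preseed-based install (not used in this scenario, which uses image-deploy), the linux/install-bootenv Param is set with the same per-machine params API used below; a minimal sketch, where the bootenv name is only an illustrative assumption:
UUID=<MACHINE_ID>
PARAM=$(printf "%s" "linux/install-bootenv" | jq -sRr @uri)   # URL encode the Param name
# "ubuntu-20.04-install" is an assumed bootenv name; use a bootenv that exists on your endpoint
curl -k -s -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d '"ubuntu-20.04-install"' \
  "$RS_ENDPOINT/api/v3/machines/$UUID/params/$PARAM" | jq '.'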
The following example will assume Universal is being driven to perform the following tasks:
Perform an image deployment of a Linux operating system
Install Ubuntu 18.04
Use the universal/application Param (the Profile describing the image-deploy configuration already exists on the system)
METHOD="POST"
UUID=<MACHINE_ID>
PARAM=$(printf "%s" "universal/application" | jq -sRr @uri)   # URL encodes the string
VALUE=${VALUE:-"image-deploy"}
curl -k -s -X $METHOD -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d "\"$VALUE\"" \
  "$RS_ENDPOINT/api/v3/machines/$UUID/params/$PARAM" | jq '.'
18.1.1.4.1.1. Example Response¶
"image-deploy"
18.1.1.4.2. Create Profile for Image Deploy¶
In addition to specifying use of the Image Deploy workflow pipeline, a profile describing the image deployment location and configuration details needs to be specified. Generally, these profiles should be maintained in a curated content pack describing the associated images and configurations for given deployment scenarios.
This example creates an ad-hoc Profile with the Param values necessary to drive deployment in the Universal workflows and pipelines. Normally image profiles should be maintained as Infrastructure-as-Code (IaC) content packs with the descriptive configuration information necessary to deploy the image. The Profiles can be added dynamically by a Classifier Stage/Task in the workflow, or the operator can set the Profile on the Machine prior to transitioning it to an installation/provisioning Workflow.
METHOD="POST" curl -k -s -X $METHOD -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" -d '{ "Name": "id-ubu-18-v1", "Description": "profile id-ubu-18-v1", "Meta": { "color": "blue", "icon": "linux" }, "Params": { "image-deploy/image-file": "/files/images/ubuntu-18.04-5c58265397.tgz", "image-deploy/image-os": "linux", "image-deploy/image-os-subtype": "ubuntu", "image-deploy/image-type": "tgz", "image-deploy/use-cloud-init": true } }' $RS_ENDPOINT/api/v3/profiles | jq '.'
18.1.1.4.2.1. Example Response¶
{ "Validated": true, "Available": true, "Errors": [], "ReadOnly": false, "Meta": { "color": "blue", "icon": "linux" }, "Endpoint": "", "Bundle": "", "Partial": false, "Name": "id-ubu-18-v1", "Description": "profile id-ubu-18-v1", "Documentation": "", "Params": { "image-deploy/image-os": "linux", "image-deploy/image-os-subtype": "ubuntu", "image-deploy/image-type": "tgz", "image-deploy/image-file": "/files/images/ubuntu-18.04-5c58265397.tgz", "image-deploy/use-cloud-init": true }, "Profiles": [] }
18.1.1.4.3. Add the Profile to the Machine¶
Manipulating Profiles on a Machine object requires use of the PATCH operation, and also requires getting the current set of Profiles on the Machine. This method, although a bit more cumbersome to implement, enforces strong guarantees on atomicity and correctness in a multi-writer API environment.
Profile order does matter if Profiles contain the same Params; per kb-00057: Parameter Precedence, the resulting Param value may differ. The PATCH operation allows for setting an explicit order in the list of Profiles and applying that to the machine.
A PATCH process to modify the list of Profiles on a Machine generally follows this flow:
get the current set of Profiles
insert the new Profile into the original Profiles list, in the desired order
use PATCH with the correct test and replace operations
modify the Machine
PROFILE="id-ubu-18-v1" CURRENT=$(curl -k -s -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \ "$RS_ENDPOINT/api/v3/machines/$UUID?slim=true" | jq -c '.Profiles') NEW=$(echo "$CURRENT" | jq -c ' .+ ["'$PROFILE'"]')
The variable CURRENT will become our test operation to compare against for the PATCH, while NEW will be assigned the current Profiles plus our additional new Profile ($PROFILE).
The variables will contain a JSON array/list structure (eg ‘[ "foo", "bar", "baz" ]’). To create the very first Profile on a machine, set the test operation to an empty array/list (‘[]’).
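For that first-Profile case, the PATCH body would look like the following sketch:
PROFILE="id-ubu-18-v1"
UUID=<MACHINE_ID>
# first Profile on the Machine: test against an empty list, then replace with a one-element list
curl -k -s -X PATCH -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d '[{"op":"test","path":"/Profiles","value":[]},{"op":"replace","path":"/Profiles","value":["'$PROFILE'"]}]' \
  "$RS_ENDPOINT/api/v3/machines/$UUID" | jq '.Profiles'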
Once the new Profile assignment array/list has been constructed, apply it to the Machine:
METHOD=PATCH
UUID=<MACHINE_ID>
curl -k -s -X $METHOD -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d '[{"op":"test","path":"/Profiles","value":'$CURRENT'},{"op":"replace","path":"/Profiles","value":'$NEW'}]' \
  "$RS_ENDPOINT/api/v3/machines/$UUID" | jq '.'
18.1.1.4.3.1. Example Response¶
A complete Machine object JSON response will be generated on success. Note that this could be a very large blob, if Inventory, BIOS, RAID, or other large data structures have been generated/applied to the Machine previously.
{ "Validated": true, "Available": true, "Errors": [], "ReadOnly": false, "Meta": { "color": "black", "feature-flags": "change-stage-v2", "icon": "server" }, "Endpoint": "", "Bundle": "", "Partial": false, "Name": "test-machine", "Description": "machine description", "Uuid": "1382ae65-5d95-4ba4-a728-68b0d4eadb46", "CurrentJob": "", "Address": "", "Stage": "none", "BootEnv": "sledgehammer", "Profiles": [ "id-ubu-18-v1" ], "Params": { "universal/application": "image-deploy" }, "Tasks": [], "CurrentTask": 0, "RetryTaskAttempt": 0, "TaskErrorStacks": [], "Runnable": true, "Secret": "uKu8c_b1-gvZmHe-", "OS": "", "HardwareAddrs": [ "00:00:00:99:99:00", "00:00:00:99:99:01", "00:00:00:99:99:02", "00:00:00:99:99:02" ], "Workflow": "", "Arch": "amd64", "Locked": false, "Context": "", "Fingerprint": { "SSNHash": "", "CSNHash": "", "SystemUUID": "", "MemoryIds": [] }, "Pool": "default", "PoolAllocated": false, "PoolStatus": "Free", "WorkflowComplete": true }
18.1.1.4.4. Configure BMC (baseboard management controller)¶
The BMC (baseboard management controller) is responsible for implementing the IPMI, Redfish, and vendor proprietary hardware protocols on the system (eg “reboot”, “power on”, “power off”, “nextbootpxe”, “nextbootdisk”). It is important to be able to regain control of a system if the Runner (Agent) becomes unavailable, either intentionally (dissolving the Agent/Runner in the OS) or unintentionally (it dies). This allows setting the DRP Endpoint state to boot the system to Sledgehammer (via some Workflow, like “discover-base”), and then rebooting the system.
The “universal-discover” and “universal-hardware” workflows will inventory and configure the BMC. With this in mind, if the BMC has been pre-configured with IP address, default gateway, etc., then the only remaining configuration necessary is setting the Username and Password for authentication of BMC actions.
If Digital Rebar is going to be responsible for configuration of the BMC values, please consult the documentation for details on what needs to be set. Documentation is in the IPMI Plugin documentation (it is unfortunately named “ipmi”, as the Plugin actually controls the BMC via IPMI, Redfish, or vendor-specific protocols, not just IPMI functions).
Setting the Username/Password can be performed in a Profile that is ultimately attached to the system automatically (assuming a group of systems share user/pass authentication credentials). The “ipmi/password” value must be a properly encrypted (using NaCl) value, not a clear text string. There are several methods to perform the encryption, but ultimately it will boil down to your language implementation. See the Knowledge Base article kb-00060: Working with Secure Params and the API for additional information. This document assumes you have set the Secure Param JSON object correctly with the Key, Nonce, and Payload.
Setting the ipmi/username:
METHOD="POST" UUID=<MACHINE_ID> BMC_USER="root" BMC_PASS="calvin" PARAM=$(printf "%s" "$BMC_USER" | jq -sRr @uri | sed 's/%0A//') # strip linefeed on param curl -k -s -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN"-d '"root"' $RS_ENDPOINT/api/v3/machines/1382ae65-5d95-4ba4-a728-68b0d4eadb46/params/ipmi%2Fusername
Setting the ipmi/password - using a pre-generated Secure Param object:
PARAM=$(printf "%s" "$BMC_PASS" | jq -sRr @uri | sed 's/%0A//') # strip line feed on param curl -k -s -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' --H "Authorization: Bearer $RS_TOKEN" -d ' { "Key": "sAlABZy90If04Rx0VI2uVZ4HbyIavnEPXOH8EoX4MWA=", "Nonce": "AEn9DTt+gbtJkmiWteOEdKT8R5xeUT8c", "Payload": "1w+6NI48KLmm9iJpFS2cM2tE58UFwu/3" }' \ $RS_ENDPOINT/api/v3/machines/1382ae65-5d95-4ba4-a728-68b0d4eadb46/params/ipmi%2Fpassword
18.1.1.4.4.1. Example Responses¶
For setting the ipmi/username, the username is returned as a string:
"root"
For setting the ipmi/password, the encrypted object is returned on success:
{ "Key": "sAlABZy90If04Rx0VI2uVZ4HbyIavnEPXOH8EoX4MWA=", "Nonce": "AEn9DTt+gbtJkmiWteOEdKT8R5xeUT8c", "Payload": "1w+6NI48KLmm9iJpFS2cM2tE58UFwu/3" }
18.1.1.5. Provision the Machine¶
Provision operations are performed by transitioning the machine to an appropriate workflow. For systems using Universal content, all provisioning operations generally should start with the “universal-discover” workflow. Newer Universal content systems may have a new decorator starting Workflow (possibly named “universal-start”).
Note
Image Deploy chain map support was added in the Universal content pack at version v4.7.1. If you do not have this version (or newer) of Universal installed, please upgrade, or create an appropriate universal/workflow-chain-map-override as necessary.
Note
If a Machine is already in the workflow that is being set again, the operation is a “no-op”. To re-run the same workflow, first remove the existing workflow (set the Workflow: field to an empty "" value) from the system, then set the same workflow again.
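A minimal sketch of that two-step re-run, using the same PATCH pattern as the provisioning example below:
UUID=<MACHINE_ID>
WF="universal-discover"
# step 1: clear the existing Workflow
curl -k -s -X PATCH -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d '[{"op": "replace", "path": "/Workflow", "value": ""}]' \
  "$RS_ENDPOINT/api/v3/machines/$UUID" > /dev/null
# step 2: set the same Workflow again
curl -k -s -X PATCH -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" \
  -d '[{"op": "replace", "path": "/Workflow", "value": "'$WF'"}]' \
  "$RS_ENDPOINT/api/v3/machines/$UUID" | jq '.Workflow'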
Provisioning example:
METHOD="PATCH" WF="universal-discover" UUID=<MACHINE_ID> curl -k -s -X $METHOD -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" -d '[{"op": "replace", "path": "/Workflow", "value": "$WF"}]' https://66.165.231.2:8092/api/v3/machines/$UUID
Setting the new Workflow resets several Fields on the JSON object, including a composed list of the Tasks from the Workflow specification. Once the Machine boots, it will boot into the in-memory Sledgehammer OS, start the Agent (Runner), and begin executing the tasks.
18.1.1.5.1. Example Response¶
{ "Validated": true, "Available": true, "Errors": [], "ReadOnly": false, "Meta": { "feature-flags": "change-stage-v2" }, "Endpoint": "", "Bundle": "", "Partial": false, "Name": "d00-00-00-51-01-00.pve-lab.local", "Description": "", "Uuid": "715819a9-3c09-445e-86cb-71afa3b0f4bf", "CurrentJob": "c17af8e6-48ac-414a-a412-7ab99b1c47e9", "Address": "192.168.1.5", "Stage": "discover", "BootEnv": "sledgehammer", "Profiles": [ "id-ubu-18-v1" ], "Params": { "< ... snip ... >": "< ... redacted for brevity ... >" }, "Tasks": [ "stage:discover", "bootenv:sledgehammer", "enforce-sledgehammer", "set-machine-ip-in-sledgehammer", "reserve-dhcp-address", "ssh-access", "stage:universal-discover-start-callback", "callback-task", "stage:universal-discover-pre-flexiflow", "flexiflow-start", "flexiflow-stop", "stage:universal-discover", "universal-discover", "stage:raid-reset", "raid-reset", "stage:raid-enable-encryption", "raid-tools-install", "raid-enable-encryption", "stage:shred", "shred", "stage:raid-reset", "raid-reset", "stage:universal-discover-post-flexiflow", "flexiflow-start", "flexiflow-stop", "stage:universal-discover-classification", "classify-stage-list-start", "classify-stage-list-stop", "stage:universal-discover-post-validation", "validation-start", "validation-stop", "stage:universal-discover-complete-callback", "callback-task", "stage:universal-chain-workflow", "universal-chain-workflow", "stage:complete-nobootenv" ], "CurrentTask": -1, "RetryTaskAttempt": 0, "TaskErrorStacks": [], "Runnable": true, "Secret": "tQXvI8wWV2am27fq", "OS": "centos-8", "HardwareAddrs": [ "00:00:00:51:01:00", "3e:6a:da:eb:d8:76" ], "Workflow": "universal-discover", "Arch": "amd64", "Locked": false, "Context": "", "Fingerprint": { "SSNHash": "", "CSNHash": "", "CloudInstanceID": "", "SystemUUID": "", "MemoryIds": [] }, "Pool": "default", "PoolAllocated": false, "PoolStatus": "Free", "WorkflowComplete": false }
18.1.1.6. Deprovision the Machine¶
Deprovision/destroy operations are handled by the “universal-decommission” workflow. Similar to provisioning, simply set the machine Workflow to “universal-decommission”.
METHOD="PATCH" WF=universal-decommission UUID=<MACHINE_ID> curl -k -s -X $METHOD -H 'Content-Type: application/json' -H 'Accept: application/json' -H "Authorization: Bearer $RS_TOKEN" -d '[{"op": "replace", "path": "/Workflow", "value": "'$WF'"}]' https://66.165.231.2:8092/api/v3/machines/$UUID
18.1.1.6.1. Example Response¶
The following example response has had the “Params” redacted for brevity.
{ "Validated": true, "Available": true, "Errors": [], "ReadOnly": false, "Meta": { "feature-flags": "change-stage-v2" }, "Endpoint": "", "Bundle": "", "Partial": false, "Name": "d00-00-00-51-01-00.pve-lab.local", "Description": "", "Uuid": "715819a9-3c09-445e-86cb-71afa3b0f4bf", "CurrentJob": "c17af8e6-48ac-414a-a412-7ab99b1c47e9", "Address": "192.168.1.5", "Stage": "discover", "BootEnv": "sledgehammer", "Profiles": [ "id-ubu-18-v1" ], "Params": { "< ... snip ... >": "< ... redacted for brevity ... >" }, "Tasks": [ "stage:discover", "bootenv:sledgehammer", "enforce-sledgehammer", "set-machine-ip-in-sledgehammer", "reserve-dhcp-address", "ssh-access", "stage:universal-decommission-start-callback", "callback-task", "stage:universal-decommission-pre-flexiflow", "flexiflow-start", "flexiflow-stop", "stage:universal-decommission", "universal-decommission", "stage:raid-reset", "raid-reset", "stage:raid-enable-encryption", "raid-tools-install", "raid-enable-encryption", "stage:shred", "shred", "stage:raid-reset", "raid-reset", "stage:universal-decommission-post-flexiflow", "flexiflow-start", "flexiflow-stop", "stage:universal-decommission-classification", "classify-stage-list-start", "classify-stage-list-stop", "stage:universal-decommission-post-validation", "validation-start", "validation-stop", "stage:universal-decommission-complete-callback", "callback-task", "stage:universal-chain-workflow", "universal-chain-workflow", "stage:complete-nobootenv" ], "CurrentTask": -1, "RetryTaskAttempt": 0, "TaskErrorStacks": [], "Runnable": true, "Secret": "tQXvI8wWV2am27fq", "OS": "centos-8", "HardwareAddrs": [ "00:00:00:51:01:00", "3e:6a:da:eb:d8:76" ], "Workflow": "universal-decommission", "Arch": "amd64", "Locked": false, "Context": "", "Fingerprint": { "SSNHash": "", "CSNHash": "", "CloudInstanceID": "", "SystemUUID": "", "MemoryIds": [] }, "Pool": "default", "PoolAllocated": false, "PoolStatus": "Free", "WorkflowComplete": false }
18.1.1.7. Configuring the Callback System¶
Please see the callback - Callback content pack documentation for further details.