
Automatic Machine Classification in Infrastructure Pipelines


This document is designed to help understand the Automatic Classification rules that are invoked on a DRP Endpoint and work in cooperation with Infrastructure Pipelines.

Effectively, a few key bits of data allow systems to dynamically select specially named and prepared Profiles to fill in the details for important configuration aspects of a Machine. These generally include:

  • Hardware Lifecycle Management (HWLC includes firmware/flash, BIOS, and HW RAID)
  • Operating System configuration
  • Application stack deployment and configuration
  • Other customer or desired interactions as provided by Infrastructure Pipelines

This process describes how the base classifier builds up unique names used to search for operator supplied Profiles that add configuration details to a system. Other means of classifying systems can be used cooperatively with this system, or can replace it.

What are Infrastructure Pipelines?

RackN defines an Infrastructure Pipeline as a declarative end state that consistently drives different hardware platforms, operating systems, and application usages to a final destination. These are the major steps considered in that process:

  • Inventory of Machine
  • Classify Machines for Zero Touch Automation to final destination, including:
    • handle Hardware Lifecycle Management (HWLC: Firmware/flash, BIOS, HW RAID)
    • handle OS selection and appropriate configuration
  • a consistent point to inject Application configuration and customizations

Each of these steps is designed to flexibly support different hardware vendor platforms, Operating Systems, and Application stacks. In addition, you have full control to inject customization at every step, both in the Workflows that execute local Tasks and through interaction with external infrastructure services that provide Classification driven actions.

Our experience has shown that most large organizations have one or more teams that define standards and configuration details for these steps; generally broken down into three primary groups:

  1. Application definition group - provides specific config/app deployment details
  2. OS definition group - provides the approved "golden" image baseline
  3. Hardware definition group - provides the approved hardware configuration and flash state

In some organizations, these groups may be combined to include multiple roles.

The default set of Classification rules in Infrastructure Pipelines operates along these principles. The Classifier builds up a set of predetermined naming structures to support dynamically adding configuration values for each of these three major configuration boundaries.

These are loosely referred to as the "application", "bom" (bill-of-materials), and "hardware" details. Generally speaking, the "application" and "bom" values are set by an operator or by zero touch classification rules. The "hardware" value is dynamically generated from Machine inventory Params and is created during the universal-discover Workflow run.

A common usage pattern is an operations team that deploys an OS via a standard Infrastructure Pipeline as the basis for their deliverable, and Profiles are matched and added to the system based on the Classifier rules to deliver a fully configured Machine. The Profile re-use pattern allows other teams to collaborate based on their domain of expertise/authority and ultimately build a complete system delivered with zero or minimal operator intervention.

Reference Example

The remainder of this document will utilize a reference system to illustrate the details of how this works. Below are the details of the hardware platform and operator defined values to help move the system through the Infrastructure Pipeline automatically with a rich set of configuration values. This should help to illustrate the Profile naming patterns that are supported.

Hardware Platform Details

The physical machine is a Supermicro platform with the following details:

  • Hardware vendor: Supermicro
  • Platform name: SYS-E300-9D-8CN8TP
  • BIOS mode: Legacy BIOS
  • Hardware RAID controllers: 0

Operator Driven Definitions

In this example we combine both the Operating System configuration and the application configuration. In reality, these elements can be defined separately so systems can be built to a given standard, with different application configurations completed afterwards.

Most of the reference "Infrastructure Pipelines" are concerned with driving systems to a completed Operating System state, with hooks to add post-boot configuration enhancements.

Infrastructure Pipelines can be extended to embrace much more complex scenarios around applications beyond the Operating System deployment.

Application Declarative State

In our example, we define the Param universal/application as the declarative OS/application state we are driving our system towards. An example of this is as follows (defining a Proxmox 8 KVM based hypervisor build):

  • universal/application: proxmox-8

In our reference example, this value is actually set indirectly by adding the "Infrastructure Pipeline" named proxmox-8 to the system, which ultimately carries the Param value definition we are working with.

This can be set manually on a Machine independent of an Infrastructure Pipeline; however, we do not recommend that, as the Infrastructure Pipelines are designed to work cooperatively with these values across the different chained Workflows that the Machine passes through.
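Assuming the drpcli CLI is configured against your DRP Endpoint, attaching the Pipeline can be sketched as follows (the machine name mach-01 is a hypothetical placeholder):

```shell
# attach the proxmox-8 Infrastructure Pipeline by adding its Profile
# ("mach-01" is a placeholder for your Machine's Name)
drpcli machines addprofile Name:mach-01 universal-application-proxmox-8
```

Adding the Profile is what indirectly sets the universal/application Param on the Machine.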

Platform Specification - the Bill-of-Materials (BoM)

The "BoM" (bill-of-materials) is an operator defined hardware scenario for matching Profiles, generally used to drive operations to group similarly configured Machines together to apply configuration details.

The "Bill-of-Materials" (the physical configuration scenario for this system) is defined in the Param universal/bom as smgen1, for a SuperMicro Gen 1 configuration:

  • universal/bom: smgen1

This value can be added by an operator prior to starting an Infrastructure Pipeline install process; or, it can be added dynamically by extending the Classify rules to identify the Machine (perhaps based on physical configuration, or via an external service query).
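Setting the BoM Param by hand might look like this sketch (again, mach-01 is a hypothetical machine name):

```shell
# set the BoM Param on the Machine before the Pipeline starts
drpcli machines set Name:mach-01 param universal/bom to smgen1
```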

RackN Automation Components at Play

In this scenario, a machine is booted and passes into the default defined Workflow universal-discover. The system is fully inventoried. An operator adds the appropriate values to move the system through a build. In more advanced scenarios, these steps can be performed automatically for complete Zero Touch Provisioning operations on vast fleets of systems.

The components in the system that are used for this are defined below, and are set by the operator.

The Infrastructure Pipeline has been set to proxmox-8. Note that technically this is adding the Profile named universal-application-proxmox-8 to the system.

The Hardware and BoM definitions are set by adding the Param as follows:

  • universal/bom: smgen1

The system has finished the universal-discover Workflow successfully. The operator will now restart the Workflow to re-run universal-discover; since the Infrastructure Pipeline definition is set on the Machine, it will automatically chain through the following Workflows to complete its build:

  • universal-discover
  • universal-hardware
  • universal-burnin
  • universal-linux-install
  • universal-runbook

Note that universal-burnin is skipped by default, as this process can take many hours (sometimes well over 24!) to complete. Hardware Lifecycle Management (HWLC) requires extra setup steps beyond the Getting Started install guide.
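Restarting the Workflow described above can be sketched with a single drpcli call (mach-01 remains a hypothetical machine name):

```shell
# re-run discovery; with the Pipeline Profile attached, the Machine
# will then chain through the subsequent universal-* Workflows
drpcli machines workflow Name:mach-01 universal-discover
```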

Classification and Profile Matching

Ultimately, this process relies on the default behaviors that occur when Classifying the system. In the initial universal-discover Workflow run, the inventory Params were not set, so the system did not invoke any automatic behaviors. On the second run of the universal-discover Workflow, the values have been set and the system will chain through the Workflows.

The primary controlling Classify rules are processed in the universal-discover Workflow, in the Stage named universal-discover-classification.

The following outline defines how our reference example will behave when dynamically adding Configuration for the three main operational groups defined above.

Technical Process

The Stage universal-discover-classification utilizes the reference Param universal/discover-classification-list. This Param defines what actual Stage to use as the Classifier rules for processing our described behavior.

Ultimately, this points to the Stage universal-discover-classification-base which defines the rules. The level of indirection allows multiple systems to set different values to get different behaviors.

The ultimate Param value that drives our operations for the Classifier is:

  • universal/discover-classification-base-data

Please review the Param's "default" Schema settings to see the Classifier rules for technical details. The discussion below outlines this behavior.
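The Param's default value (containing the Classifier ruleset) can be inspected directly from the command line:

```shell
# dump the Param definition, including its default Classifier rules
drpcli params show universal/discover-classification-base-data
```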

Automatic Profile Names

The following Params heavily influence the automatic Profile mapping rules that will be discussed below:

  • universal/bom
  • universal/hardware
  • universal/application

The Workflow universal-discover automatically inventories the machine and, based on the machine inventory values, builds up the universal/hardware Param definition by assembling the discovered Inventory values as follows:

  • {{ .Param inventory/Manufacturer }}-{{ .Param inventory/ProductName }}-{{ .Param detected-bios-mode }}-rc{{ .Param inventory/RaidControllers }}

These values are generated/derived during the universal-discover Workflow's normal Inventory process.


Some hardware manufacturers or specialized (demo, pre-production, etc.) hardware may not correctly report the values recorded in the DMIDecode fields.

Example based on our reference system in this document; note that the ordering below is the order in which Profiles will be searched for and added to the system. This implies that standard Param order-of-precedence rules may settle any discrepancies arising from multiple values in different Profiles.

  • universal/hardware: Supermicro-SYS-E300-9D-8CN8TP-legacy-bios-rc0

Note that "rc0" means RAID Controller count of 0 found.
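As a plain shell sketch, the universal/hardware value for our reference system is assembled from the four inventory values like so (the variable names are illustrative, not actual Param names):

```shell
#!/usr/bin/env bash
# assemble the universal/hardware value from discovered inventory fields
manufacturer="Supermicro"          # inventory/Manufacturer
product="SYS-E300-9D-8CN8TP"       # inventory/ProductName
bios_mode="legacy-bios"            # detected-bios-mode
raid_controllers=0                 # inventory/RaidControllers (count found)

hardware="${manufacturer}-${product}-${bios_mode}-rc${raid_controllers}"
echo "$hardware"
# → Supermicro-SYS-E300-9D-8CN8TP-legacy-bios-rc0
```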


If the universal/bom value is NOT set on the system, then the example definitions of smgen1 would be replaced by the literal string null, as the Param value returned a null response. Generally speaking, do not try to create and use null named Profiles; set an appropriate BoM value if matching at this level is desired.

The following Profiles will be searched for on the DRP Endpoint and added to the Machine object if found. A comment is provided noting the major Param (but not every piece of information) specific to the Profile being processed.

# Generated Profile names that Classifier looks for
# These examples assume Param value settings as outlined above

# from universal/hardware

# from universal/bom; if not set "smgen1" would be the literal string "null"

# without specific universal/bom definitions

# without the universal/application (Pipeline name)

# with bom specific for universal/application
# with hw specific for universal/application

Using the above combinations, it is possible for different groups/teams within an organization to specify different aspects of the system's configuration independently of each other (i.e. maintained in group/team specific Infrastructure-as-Code / IaC content bundles). The final result is a system built to a specification, managed either by a single team owning the complete lifecycle definition or by multiple teams cooperating together.

Generating a Task to Test the Profile Naming

It is possible to model a machine based on Params set on it, and determine the exact Profile names that will be searched for.


If Profiles that match the test Task run exist, they will be added to the Machine object. Only perform this on a non-production system, or create a Context Machine backed by the drpcli-runner container to model the behavior.

The general testing pattern is as follows:

  • Ensure appropriate Params are set for what you want to test
  • Remove the Pipeline
  • Remove the Workflow
  • Set the Stage to universal-discover-classification
  • Evaluate the Job Log for the classify Task

If your DRP Endpoint was correctly bootstrapped with either Docker or Podman installed, and you have the drpcli-runner Context container installed, you can simulate this process by creating a Context backed Machine.

The following script simulates the behavior using this document's information. You can manipulate the values in the Params section to obtain the desired results. You may need to adjust the Endpoint, Username, and Password to authenticate/connect to the appropriate DRP Endpoint.

#!/usr/bin/env bash
# generate a temporary context container backed machine to evaluate the base classifier

# name of the throwaway test Machine (adjust as desired)
NAME="${NAME:-classifier-test}"

if drpcli machines exists Name:${NAME} > /dev/null 2>&1; then
  echo "Machine '${NAME}' already exists, exiting"
  exit 1
fi

cat <<EOF | drpcli machines create -
Name: ${NAME}
Context: drpcli-runner
Meta:
  BaseContext: drpcli-runner
Workflow: ""
Runnable: false
Params:
  detected-bios-mode: legacy-bios
  universal/application: proxmox-8
  universal/bom: smgen1
  inventory/Manufacturer: Supermicro
  inventory/ProductName: SYS-E300-9D-8CN8TP
  inventory/RaidControllers: 0
EOF

drpcli machines workflow Name:${NAME} ""
drpcli machines run Name:${NAME}
drpcli machines stage Name:${NAME} universal-discover-classification

# examine the Machine's Activity job log for the "classify" task that just ran

echo "to destroy the Machine: 'drpcli machines destroy Name:${NAME}'"

Once the Stage named universal-discover-classification has run, observe the Job Logs for the Machine and review the Task named classify. There will be several outputs (many or all of which may be red "errors") indicating the names of the Profiles that were searched for on the system.
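Locating those Job Logs from the command line might look like this sketch (classifier-test is the hypothetical Machine name from the script above; jq is assumed to be available for extracting the Uuid):

```shell
# find the Machine's Uuid, then list its Jobs to locate the classify Task's Job
UUID=$(drpcli machines show Name:classifier-test | jq -r .Uuid)
drpcli jobs list Machine=${UUID}
```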

A Note about the "Rack" Classifier

There is a special object type called "Racks" which defines pre-built batch Classify and Validation rulesets that can be invoked to process entire Racks of servers from a single data structure definition.

This is often used to "pre-prime" the system in preparation for many Racks of equipment to show up and be automatically classified. The rules can be injected via a spreadsheet CSV, or a JSON data structure that is marshalled in as a Rack object type.

This document does not discuss the Rack specific Classify rules, which are invoked separately and may add more Profiles to the systems.

Manual Classification Stages

All Infrastructure Pipeline based Workflows contain a standard set of "pre" and "post" operations bracketing each Workflow's segment of work. Each Workflow includes at least one Classifier in which an operator may define additional Classify rules to provide custom Machine state changes throughout the Infrastructure Pipeline run.

The description in this document does not preclude manual/pre-defined Classify rules from running alongside this system. However, it is important that an operator not alter any of the Param values defined here unless they understand the implications of changing specific Params.

Additional Reference Information

The following additional documentation will be helpful in understanding Infrastructure Pipelines.