
Deploying OpenShift with Digital Rebar Platform (DRP)

This tutorial walks you through deploying a production OpenShift cluster using Digital Rebar Platform (DRP). It is organized into four parts:

  1. Stage DRP Content — Get the right bits onto your DRP server
  2. Deploy an OpenShift Cluster — Create the cluster and let automation do the work
  3. Access the Cluster — Retrieve credentials and verify health
  4. Day-2 Cluster Operations — Add or remove nodes, check health, refresh DNS, and destroy the cluster

Multi-cluster (hub/spoke) deployments

For a centrally-managed multi-cluster topology with an ACM/OCM hub and registered spoke clusters, see Deploying Hub-Spoke OpenShift Clusters.

Prerequisites

Before starting, confirm you have the following:

  • DRP installed with a static IP address reachable by all cluster nodes
  • If needed, add --static-ip <IP> to /etc/systemd/system/dr-provision.service
  • drpcli installed and authenticated against your DRP endpoint
  • A valid Red Hat account with an active OpenShift subscription
  • Machines that meet minimum specs for your chosen cluster topology:

OpenShift supports three deployment topologies — choose the one that fits your environment:

| Topology | Control Plane Nodes | Worker Nodes | Typical Use |
| --- | --- | --- | --- |
| Single Node (SNO) | 1 | 0 | Testing, edge, resource-constrained |
| Compact (3-node) | 3 | 0 | HA control plane, workloads on control plane nodes |
| Full cluster | 3 | 2+ | Production, separate worker pool |

Minimum hardware specs per node:

| Role | vCPUs | RAM | Disk |
| --- | --- | --- | --- |
| Control Plane (full cluster) | 4 | 16 GB | 100 GB |
| Worker | 2 | 8 GB | 100 GB |
| Control Plane (SNO or compact — combined role) | 8 | 32 GB | 100 GB |

Hardware Uniformity

Control plane nodes must have identical hardware specifications. Worker nodes may vary.


Firewall Requirements

The following outbound HTTPS connections must be allowed from your DRP server and cluster nodes. All connections use port 443.

DRP Server

| Hostname | Used by | Purpose |
| --- | --- | --- |
| get.rebar.digital | DRP server | Download DRP content packs and context container images |
| mirror.openshift.com | DRP server | Download OpenShift installer, oc, and oc-mirror binaries |
| rhcos.mirror.openshift.com | DRP server | Download RHCOS live ISO |

Cluster Nodes (during and after installation)

| Hostname | Purpose |
| --- | --- |
| quay.io | Pull OpenShift release images (openshift-release-dev/*) |
| registry.redhat.io | Pull Red Hat base images (ubi8, openshift4, operators) |
| registry.connect.redhat.com | Pull certified operator and marketplace images |
| cloud.openshift.com | Cluster telemetry and subscription management |

Conditional

| Hostname | Condition | Purpose |
| --- | --- | --- |
| Your GitOps repo host | If openshift/gitops-repo-url is set | ArgoCD pulls cluster config |
| Your external registry | Airgap deployments only | Target for openshift/external-registry |

Human Browser Access Only

console.redhat.com is accessed by a human to download the pull secret. No firewall rule is needed for DRP or cluster nodes to reach it.
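
Before proceeding, you can spot-check outbound connectivity from the DRP server against the hostnames above. A minimal sketch (curl exits non-zero only if the TCP/TLS connection itself fails; an HTTP error status still counts as reachable):

Bash
# Verify outbound HTTPS (port 443) reachability from the DRP server
for h in get.rebar.digital mirror.openshift.com rhcos.mirror.openshift.com \
         quay.io registry.redhat.io registry.connect.redhat.com cloud.openshift.com; do
  if curl -sS -o /dev/null --connect-timeout 5 "https://$h"; then
    echo "OK    $h"
  else
    echo "FAIL  $h"
  fi
done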


Part 1: Stage DRP Content

This section downloads and stages the OpenShift binaries, CoreOS ISO, and content pack onto your DRP server. If your DRP server can reach the internet this is largely automated. Airgap differences are called out inline.

Step 1: Obtain Your Pull Secret

OpenShift requires a pull secret to authenticate with Red Hat's container registries. You must obtain this before deploying.

  1. Log in to Red Hat OpenShift Cluster Manager
  2. Click Copy pull secret or Download pull secret
  3. Save the JSON to a file on your DRP server, e.g. ~/pull_secret

The pull secret is a JSON object containing credentials for the following registries (example below):

JSON
{
  "auths": {
    "cloud.openshift.com": { "auth": "...", "email": "you@example.com" },
    "quay.io": { "auth": "...", "email": "you@example.com" },
    "registry.connect.redhat.com": { "auth": "...", "email": "you@example.com" },
    "registry.redhat.io": { "auth": "...", "email": "you@example.com" }
  }
}
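
Before uploading it, you can sanity-check the saved file. A quick sketch using jq to confirm the file parses and lists the expected registries:

Bash
# Should print the four registry hostnames shown above
jq -r '.auths | keys[]' ~/pull_secret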

Step 2: Set the Pull Secret in the Global Profile

By storing the pull secret in DRP's global profile, it is automatically available to all cluster deployments without needing to paste it every time.

Bash
drpcli profiles set global param openshift/pull-secret to - < ~/pull_secret

Verify it was saved:

Bash
drpcli profiles get global param openshift/pull-secret | head -5

You should see the auths JSON structure returned.

Parameter: openshift/pull-secret

This parameter stores the authentication secret required to pull container images from Red Hat's container registries. It is mandatory for cluster deployment and is automatically used by the OpenShift installer.


Step 3: Install DRP Content Packs

Four base content packs must be loaded into DRP before staging OpenShift artifacts. These provide the core tasks, boot environments, and OpenShift automation templates.

Connected environment:

Bash
drpcli catalog item install universal
drpcli catalog item install drp-community-content
drpcli catalog item install coreos
drpcli catalog item install openshift

Airgap

In an airgapped environment, obtain the YAML files from your internal catalog or from a connected system and upload them directly:

Bash
drpcli contents upload universal.yaml
drpcli contents upload drp-community-content.yaml
drpcli contents upload coreos.yaml
drpcli contents upload openshift.yaml

Verify all four are loaded:

Bash
drpcli contents list | jq -r '.[].meta.Name' | grep -E 'universal|community|coreos|openshift'

Step 4: Stage the OpenShift Version Artifacts

The OpenShift content bundle provides DRP with the tasks, profiles, and templates needed to deploy a cluster. It also requires version-specific artifacts: the installer binary, CLI tools, and the CoreOS ISO.

Option A: Bootstrap via Self-Runner (Easiest)

If your DRP server can reach the internet, the easiest method is to use the DRP self-runner machine's bootstrap workflow. This downloads everything and installs the context container automatically.

Bash
SELF_RUNNER=drp-demo   # Replace with your self-runner machine name

drpcli machines update Name:$SELF_RUNNER '{ "Locked": false }'
drpcli machines addprofile Name:$SELF_RUNNER bootstrap-contexts
drpcli machines addprofile Name:$SELF_RUNNER bootstrap-openshift-client-runner
drpcli machines addprofile Name:$SELF_RUNNER bootstrap-openshift-contents
drpcli machines set Name:$SELF_RUNNER param openshift/bootstrap-versions to '["latest-4.21"]'

# Run the rebootstrap-drp blueprint
drpcli machines work_order add Name:$SELF_RUNNER rebootstrap-drp

Parameter: openshift/bootstrap-versions

Specifies which OpenShift versions to download and stage. Accepts a list of version strings such as latest-4.21 (resolves to the current stable 4.21.x release) or exact versions like 4.21.1. Multiple versions can be staged for environments that deploy different OpenShift releases.
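
For example, to stage the latest 4.21 release alongside a pinned older release (the exact 4.20.x version below is illustrative; substitute a release you actually deploy):

Bash
drpcli machines set Name:$SELF_RUNNER param openshift/bootstrap-versions \
  to '["latest-4.21", "4.20.8"]'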

Option B: Script-Based (Connected, More Control)

From the directory containing your OpenShift content repo, run the artifact generation script to download and stage all required files:

Bash
OS_VERSION=latest-4.21
./generate_openshift_artifacts.sh --version $OS_VERSION

This script:

  • Resolves latest-4.21 to the current stable release (e.g. 4.21.1)
  • Downloads the OpenShift installer, CLI tools, and mirror tool
  • Generates the version-specific content pack
  • Prints the path to the generated install.sh

Run the install.sh printed by the script to upload everything to DRP:

Bash
# The script output will show the actual version path, e.g.:
oc_content/4.21.1/install.sh

Retrieving the script from DRP

If you don't have the local repo, you can render the script directly from DRP:

Bash
drpcli machines create fake-machine
drpcli templates render generate-openshift-artifacts.sh.tmpl Name:fake-machine \
  > generate_openshift_artifacts.sh
drpcli machines destroy Name:fake-machine
chmod +x generate_openshift_artifacts.sh

Option C: Airgap (DRP cannot reach the internet)

Use the --download flag to create an upload bundle on a system that can reach the internet, then transfer it to your DRP server.

On the internet-connected system:

Bash
OS_VERSION=latest-4.21
./generate_openshift_artifacts.sh --version $OS_VERSION --download
# Creates oc_content/bundles/openshift-upload-bundle-<VERSION>.tgz

Copy the bundle to your DRP server, then upload:

Bash
OS_VERSION=4.21.1   # Use the resolved version printed above
drpcli files upload openshift-upload-bundle-${OS_VERSION}.tgz --explode
drpcli files download redhat/openshift/openshift-cluster-${OS_VERSION}.yaml \
  as openshift-cluster-${OS_VERSION}.yaml
drpcli contents upload openshift-cluster-${OS_VERSION}.yaml

Note

In the airgap path, openshift-cluster-<VERSION>.yaml is created and uploaded manually. The auto-bootstrap profile (used in Option A) is not available, but the content pack functions identically once uploaded.

The openshift-client-runner context container image must also be downloaded and imported separately — see Step 5, Option C.

Verify the Content is Loaded

Confirm the version-specific profile is present:

Bash
drpcli profiles list Name re 'openshift-cluster-4*' | jq -r '.[].Name'

You should see openshift-cluster-4.21.1 (or your resolved version) in the output.


Step 5: Install the openshift-client-runner Context

DRP uses a container context (openshift-client-runner) to run the OpenShift CLI tools (oc, kubectl) during cluster operations. This is a Fedora-based container image that is not included in the install.sh script — it must be installed separately.

Confirm whether it is already installed:

Bash
drpcli contexts list | jq -r '.[].Name' | grep openshift

Expected output: openshift-client-runner

If it is missing, choose one of the following methods:

Option A: Bootstrap (DRP server can reach internet)

Follow the bootstrap steps from Step 4, Option A. The bootstrap-contexts and bootstrap-openshift-client-runner profiles both handle context installation.

Option B: Manual install via CLI (connected)

Bash
IMAGE=$(drpcli contexts show openshift-client-runner 2>/dev/null | jq -re '.Image' \
  || echo 'openshift-client-runner_v1.2.22')

# Download from RackN, stage in DRP file store, and load into the docker-context plugin
drpcli files upload https://get.rebar.digital/containers/${IMAGE}.tar.gz \
  as contexts/docker-context/${IMAGE}
drpcli plugins runaction docker-context imageUpload \
  context/image-name $IMAGE \
  context/image-path files/contexts/docker-context/$IMAGE

Option C: Airgap — download on a connected system, import on DRP

On a system that has internet access (does not need drpcli):

Bash
# Determine the image name — check your openshift content pack version or use the
# default below (update the version number if deploying a different release)
IMAGE=$(drpcli contexts show openshift-client-runner 2>/dev/null | jq -re '.Image' \
  || echo 'openshift-client-runner_v1.2.22')

wget "https://get.rebar.digital/containers/${IMAGE}.tar.gz"

Transfer the .tar.gz to your DRP server, then upload and load it:

Bash
IMAGE=openshift-client-runner_v1.2.22   # set to the filename you downloaded

drpcli files upload ${IMAGE}.tar.gz as contexts/docker-context/${IMAGE}
drpcli plugins runaction docker-context imageUpload \
  context/image-name $IMAGE \
  context/image-path files/contexts/docker-context/$IMAGE

Verify it loaded:

Bash
drpcli contexts list | jq -r '.[].Name' | grep openshift

get.rebar.digital

Context images (~130 MB) are served from https://get.rebar.digital/containers/. The filename is always <image-name>.tar.gz where the image name includes the version (e.g. openshift-client-runner_v1.2.22). Check the installed context for the exact version: drpcli contexts show openshift-client-runner | jq -r '.Image'


Part 2: Deploy an OpenShift Cluster

With content staged, you are ready to deploy. This section walks through assigning machine roles, adding hardware profiles, creating a resource pool, and launching the cluster pipeline.

Step 6: Identify Your Machines

List the machines available for deployment:

Bash
drpcli machines list | jq -r '.[].Name'

Choose the correct number of machines for your cluster topology (see Prerequisites):

  • Single Node: 1 machine (acts as both control plane and worker)
  • Compact (3-node): 3 machines (control plane nodes also run workloads)
  • Full cluster: 3 control plane + 2 or more workers

Note their names — you will use them in the following steps.

Step 7: Apply Hardware Profiles

Before assigning OpenShift roles, apply any site-specific hardware profiles to your machines. These profiles configure BMC/BIOS settings, RAID, network bonding, and disk selection appropriate for your hardware vendor and environment.

Bash
# Example: apply your site hardware profile to each machine
drpcli machines addprofile Name:<machine-name> <your-hardware-profile>

Common hardware profile categories:

  • Disk selection: Use the openshift/install-rootDeviceHints parameter to specify which disk OpenShift installs to (e.g. by serial number, size, or model). This is especially important on servers with multiple disks (see the example below).
  • Network bonding/VLAN: Add profiles that configure interface bonding or VLAN tagging on the nodes.
  • BIOS/RAID: Vendor-specific profiles (e.g. for Dell, HPE, or Lenovo) that configure hardware before the OS boots.
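
As an illustration of the disk-selection case, the hint can be set per machine. This sketch assumes openshift/install-rootDeviceHints accepts the same keys as the OpenShift installer's rootDeviceHints block (deviceName, serialNumber, minSizeGigabytes, and so on); check the parameter documentation in the content pack for the exact schema:

Bash
# Pin the install disk by device path (serialNumber or minSizeGigabytes are
# other hint keys in the installer's rootDeviceHints schema)
drpcli machines set Name:<machine-name> param openshift/install-rootDeviceHints \
  to '{"deviceName": "/dev/sda"}'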

Note

If your machines have a single disk and no special hardware configuration, you can skip this step. The OpenShift installer will use the first available disk by default.

Step 8: Configure Network Data (Optional)

If your cluster nodes require custom network configuration — bonded interfaces, VLANs, static IP addresses, or jumbo frames — set the network-data parameter on each machine before deployment. This drives the NMState configuration on both install paths: the initial agent-based install (via the agent boot ISO) and the direct-boot ignition path used when additional machines are later allocated to the cluster from the pool. The same network-data schema applies on both paths.

Parameter: network-data

This parameter is optional. If omitted, CoreOS uses DHCP on the first available network interface. Set it when your nodes need:

  • Static IP addresses
  • Bonded interfaces or LACP aggregation
  • VLAN tagging
  • Custom MTU (e.g. jumbo frames)
  • Specific DNS servers or gateway

Custom networking is honored equally for initial-build machines (agent-based install) and for machines added later via the cluster pool.

The network-data parameter is a map with a prod key that describes the primary machine network interface used by OpenShift.

Example 1: DHCP (default — no configuration needed)

If your nodes get IP addresses via DHCP, skip this step entirely.

Example 2: Static IP, single NIC

Bash
drpcli machines set Name:<machine-name> param network-data to - <<EOF
prod:
  address: 192.168.1.101
  prefix: 24
  gateway: 192.168.1.1
  dns-servers:
    - 192.168.1.1
  interface: eth0
  dhcp: 'false'
EOF

Example 3: Bonded interfaces with VLAN and static IP

Bash
drpcli machines set Name:<machine-name> param network-data to - <<EOF
prod:
  address: 10.102.147.24
  prefix: 26
  gateway: 10.102.147.1
  dns-servers:
    - 10.80.1.222
    - 10.80.2.222
  interface: eno12399np0,ens1f0np0
  bond: bond0
  vlan: '206'
  dhcp: 'false'
  mtu: '9000'
  link-aggregation:
    mode: 802.3ad
    options:
      lacp_rate: '1'
      miimon: '100'
      xmit_hash_policy: layer2+3
EOF

Bonded VLAN interface naming

When both bond and vlan are set, the template creates a bond interface (e.g. bond0) with a VLAN sub-interface on top (e.g. bond0.206). The interface field takes a comma-separated list of physical NICs to include in the bond.

Repeat for each machine in the cluster, substituting the correct IP address per machine.

Two-phase ignition extras (direct-boot path)

For machines added later via the cluster pool (the direct-boot ignition path used by the openshift-direct-pool-operations task), NetworkManager keyfiles derived from network-data are applied in both halves of the CoreOS install:

  • Installer phase — before pivot, while the CoreOS live installer is running. The task stores the keyfiles on the machine's coreos/ignition-extra-files parameter, and the coreos content pack's basic-ign.tmpl appends them to the installer ignition storage.files list. This is what makes early network configuration (bonding, VLAN, static IP) available to the installer itself — required when the DRP endpoint or the agent image can only be reached over the configured network.

  • In-OS phase — after pivot, in the installed system. The task also stores the keyfiles on coreos/in-os-ignition-extra-files, and basic-in-os-ign.tmpl appends them to the in-OS ignition so the running OS picks up the same NetworkManager configuration.

The pool-operations task sets both parameters automatically — operators do not need to touch them when using network-data. The parameters are documented here for debugging and for cases where an advanced user wants to override the per-machine keyfile list directly.
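
For debugging, the two per-machine parameters can be inspected directly once a pool-added machine has been processed. A quick sketch, assuming the same param get syntax used elsewhere in this tutorial:

Bash
drpcli machines get Name:<machine-name> param coreos/ignition-extra-files --aggregate | jq .
drpcli machines get Name:<machine-name> param coreos/in-os-ignition-extra-files --aggregate | jq .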

Step 9: Assign Machine Roles

Assign each machine an OpenShift role by adding the appropriate profile. Control plane and worker roles are mutually exclusive.

Bash
# Assign control plane role to each control plane machine
drpcli machines addprofile Name:<cp-machine-1> openshift-controlplane
drpcli machines addprofile Name:<cp-machine-2> openshift-controlplane  # skip for SNO
drpcli machines addprofile Name:<cp-machine-3> openshift-controlplane  # skip for SNO

# Assign worker role — only needed for full clusters with separate worker nodes
drpcli machines addprofile Name:<worker-machine-1> openshift-worker
drpcli machines addprofile Name:<worker-machine-2> openshift-worker
drpcli machines addprofile Name:<worker-machine-3> openshift-worker

Workers are optional

For Single Node and Compact (3-node) deployments, skip the worker role assignment entirely. Control plane nodes in these topologies also accept workloads.

Profile: openshift-controlplane / openshift-worker

These profiles set the openshift/role parameter on the machine automatically. Valid values for openshift/role are controlplane and worker. You can also set the parameter directly:

Bash
drpcli machines set Name:<machine-name> param openshift/role to controlplane

Verify the roles were set (using aggregate=true to resolve values set via profiles):

Bash
drpcli machines list aggregate=true | \
  jq -r '.[] | select(.Params | has("openshift/role")) | "\(.Name): \(.Params["openshift/role"])"'

Step 10: Create a Pool and Add Machines

The cluster pipeline discovers machines through a DRP pool. Create a pool named after your cluster and add all cluster machines to it.

Bash
CLUSTER_NAME=tutorial

# Add control plane machines
drpcli pools manage add $CLUSTER_NAME "Name=<cp-machine-1>"
drpcli pools manage add $CLUSTER_NAME "Name=<cp-machine-2>"
drpcli pools manage add $CLUSTER_NAME "Name=<cp-machine-3>"

# Add worker machines
drpcli pools manage add $CLUSTER_NAME "Name=<worker-machine-1>"
drpcli pools manage add $CLUSTER_NAME "Name=<worker-machine-2>"
drpcli pools manage add $CLUSTER_NAME "Name=<worker-machine-3>"

Verify all machines are in the pool:

Bash
drpcli machines list aggregate=true | jq -r \
  '.[] | select(.Pool == "'$CLUSTER_NAME'") | "\(.Name): \(.Params["openshift/role"])"'

Step 11: Gather Network Information

Before creating the cluster, collect the following network details for your environment. The three networks must not overlap with one another.

| Parameter | Required | Description | Example |
| --- | --- | --- | --- |
| openshift/cluster-domain | Yes | Base DNS domain for the cluster | k8s.local |
| openshift/network/machineNetwork | Yes | CIDR for node IP addresses | 10.0.0.0/20 |
| openshift/network/serviceNetwork | Optional | CIDR for Kubernetes services (internal) | 172.30.0.0/16 |
| openshift/network/clusterNetwork | Optional | CIDR for pod networking (internal) | 10.128.0.0/14 |
| openshift/api-vip | Internal LB only | Virtual IP for the Kubernetes API (from Machine Network) | 10.0.1.55 |
| openshift/ingress-vip | Internal LB only | Virtual IP for application routes (from Machine Network) | 10.0.1.56 |

Default Networks

The service and cluster networks use defaults that work for most deployments (172.30.0.0/16 and 10.128.0.0/14). Only change these if they conflict with your existing network infrastructure.

DNS / Load Balancer Configuration

Two parameters control how DNS is resolved for the cluster:

| Parameter | Default | Description |
| --- | --- | --- |
| openshift/enable-dns-zone | false | When true, DRP creates and manages a DNS zone for the cluster domain |
| openshift/enable-internal-lb | false | When true, OpenShift deploys keepalived to manage the API and Ingress VIPs |

External DNS (default) — Leave both parameters at their defaults when you have existing DNS infrastructure (Active Directory, BIND, Route 53, etc.) and an external load balancer or upstream proxy handling the API endpoint. You are responsible for creating the required DNS records before the cluster boots:

  • api.<cluster-name>.<domain> → your API load balancer IP
  • api-int.<cluster-name>.<domain> → your API load balancer IP
  • *.apps.<cluster-name>.<domain> → your Ingress load balancer IP
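
Before booting the cluster, it is worth confirming these records resolve from the network the nodes will use. A minimal sketch using the tutorial / k8s.local names from this guide; the *.apps wildcard is checked by resolving an arbitrary name under it:

Bash
CLUSTER_NAME=tutorial
CLUSTER_DOMAIN=k8s.local

dig +short api.${CLUSTER_NAME}.${CLUSTER_DOMAIN}
dig +short api-int.${CLUSTER_NAME}.${CLUSTER_DOMAIN}
dig +short test.apps.${CLUSTER_NAME}.${CLUSTER_DOMAIN}   # any name under *.apps should resolve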

DRP-managed DNS — Set openshift/enable-dns-zone: true when no external DNS is available and DRP is the DNS resolver for the cluster network. DRP will create and maintain all required records automatically as nodes join or are replaced. When enabled, openshift/api-vip and openshift/ingress-vip must both be set.

OpenShift internal load balancer — Set openshift/enable-internal-lb: true for bare-metal or on-premises environments without an external load balancer. OpenShift will deploy keepalived across the control plane nodes to provide VIP failover. When enabled, openshift/api-vip and openshift/ingress-vip must both be set to free IPs within the Machine Network CIDR.

VIP Requirements (internal LB only)

When openshift/enable-internal-lb is true, the API VIP and Ingress VIP must be within the Machine Network CIDR and must not be assigned to any existing machine. VIPs are not required when using an external load balancer.

Common combinations

| Environment | enable-dns-zone | enable-internal-lb | VIPs needed? |
| --- | --- | --- | --- |
| Corporate DC with external DNS + LB | false | false | No (LB handles them) |
| On-prem, DRP as DNS, no external LB | true | true | Yes |
| On-prem, external DNS, no external LB | false | true | Yes |
| On-prem, DRP as DNS, external LB | true | false | No |

Step 11a: Configure NTP (Optional)

If your cluster nodes need a specific time source — corporate NTP infrastructure, an air-gapped time server, or multiple redundant servers — set the ntp-servers parameter. When this parameter has a non-empty value, the cluster pipeline:

  1. Renders a chrony configuration via the chrony.conf.tmpl template and injects it as a MachineConfig for both master and worker roles during the agent-based install. Every node gets the same chrony config at first boot.
  2. Populates additionalNTPSources in the agent installer's agent-config.yaml so nodes have working time sync during install as well as after.

Example — set on the cluster profile:

Bash
drpcli profiles set <cluster-name> param ntp-servers to '["10.0.0.1","10.0.0.2"]'

Alternatively, add it to the cluster-config YAML (Step 12) under Params.

Parameter: ntp-servers

List of NTP servers. Behavior is two-phase: install-time (additionalNTPSources in the agent config) and runtime (chrony MachineConfig for both roles). Omit this parameter entirely to fall back to the install defaults.

Customizing the chrony configuration

To supply a custom chrony config (for example, to set specific driftfile paths or restricted access rules), create your own DRP template producing valid chrony.conf syntax and point openshift/chrony-config-template at it:

Bash
drpcli profiles set <cluster-name> param openshift/chrony-config-template to my-chrony.conf.tmpl

The template can use full DRP template features (parameters, Go template logic) or be a static file; the only requirement is that it produces valid chrony config.
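
A minimal sketch of authoring and uploading such a template. The file name my-chrony.conf.tmpl and the server addresses are illustrative, and the content here is a plain static chrony.conf with no DRP template logic:

Bash
cat > my-chrony.conf.tmpl <<'EOF'
# Static chrony configuration; DRP template syntax could also be used here
server 10.0.0.1 iburst
server 10.0.0.2 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF

drpcli templates upload my-chrony.conf.tmpl as my-chrony.conf.tmpl

Then point openshift/chrony-config-template at it as shown above.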

Parameter: openshift/chrony-config-template

Name of the DRP template that renders chrony.conf. Defaults to chrony.conf.tmpl. Only override when the stock template does not meet your environment's needs.

Verifying NTP on a running cluster

After Step 16 (Verify Cluster Health):

Bash
oc debug node/<node-name> -- chroot /host chronyc sources

Look for your configured servers in the output with a ^* (selected) or ^+ (reachable) marker.


Step 11b: Inject Extra Manifests (Optional)

Two related parameters let you add custom OpenShift manifests that are rendered against the cluster machine and written to /root/cluster/openshift/ before openshift-install runs. Any file in that directory is picked up by the installer and applied to the cluster at install time.

openshift/extra-manifests — a list of DRP template names. Each listed template is rendered against the cluster machine and written into /root/cluster/openshift/<template-name>. Use this to add MachineConfigs, Namespaces, OperatorSubscriptions, or any other cluster-scoped resource you want installed at day zero.

YAML
openshift/extra-manifests:
  - my-custom-machineconfig.yaml.tmpl
  - priority-class.yaml.tmpl

openshift/disable-autodhcp — when true, openshift-cluster-prep automatically adds the stock template machineconfig-disable-auto-dhcp.yaml.tmpl to the extra-manifests list. This disables NetworkManager auto-DHCP on all interfaces, preventing secondary NICs from acquiring unexpected leases. Use this on hardware with multiple NICs where only the primary interface should do DHCP.

YAML
openshift/disable-autodhcp: true

Parameters

openshift/extra-manifests is additive: you can set it and openshift/disable-autodhcp: true together, and both contribute manifests. Your templates are rendered with access to the cluster machine's full parameter scope (including the cluster profile), so they can reference openshift/cluster-domain, openshift/network/machineNetwork, etc.
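
As a worked example of this pattern, the sketch below creates a trivial day-zero manifest template, uploads it, and references it from the cluster profile. The priority-class.yaml.tmpl name matches the illustrative list above, and the PriorityClass content is a placeholder; any valid OpenShift manifest works the same way:

Bash
cat > priority-class.yaml.tmpl <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: platform-critical
value: 100000
globalDefault: false
description: "Installed at day zero via openshift/extra-manifests"
EOF

# Upload the template to DRP, then list it in openshift/extra-manifests
drpcli templates upload priority-class.yaml.tmpl as priority-class.yaml.tmpl
drpcli profiles set <cluster-name> param openshift/extra-manifests \
  to '["priority-class.yaml.tmpl"]'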

Step 11c: Label Machines and Apply the Infrastructure-Node Pattern

OpenShift workloads are often scheduled onto dedicated infrastructure nodes (ingress, monitoring, registry) to keep them isolated from user workloads. DRP makes this pattern declarative via two mechanisms that cooperate:

  1. openshift/machine-labels — a compose-expanded map of Kubernetes labels (key: value) contributed by any profile applied to a machine. After the cluster finishes installing, the openshift-label-machines task runs on the cluster coordinator and applies each member's merged label map to its node via oc label node <name> <key>=<value> --overwrite.

  2. openshift-infrastructure profile — a ready-made profile that contributes:

    YAML
    openshift/machine-labels:
      node-role.kubernetes.io/infra: ""
    

    Apply it to any worker machine you want promoted to an infrastructure node:

    Bash
    drpcli machines addprofile <machine-uuid> openshift-infrastructure
    
  3. openshift-cluster-infrastructure profile — a cluster-level profile that adds the stock infrastructure-ingress-controller.yaml.tmpl template to openshift/extra-manifests. The rendered IngressController pins the default ingress pods to nodes labeled node-role.kubernetes.io/infra. Add the profile to the cluster at creation time so the behavior is in place at install time:

YAML
Profiles:
  - universal-application-openshift-cluster
  - openshift-cluster-${OS_VERSION}
  - openshift-cluster-infrastructure

Apply all three together to deploy a cluster that automatically promotes selected workers to infrastructure nodes and reroutes ingress pods onto them.

Custom labels

You can author your own label-contributing profiles — simply set openshift/machine-labels with whatever keys you need. Because the map is compose-expanded, multiple layered profiles merge their keys, so team-specific, environment-specific, and role-specific labels can be composed independently.
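
For instance, a team-specific label profile might look like the sketch below; the profile name and label keys are placeholders:

Bash
drpcli profiles create - <<'EOF'
---
Name: team-payments-labels
Params:
  openshift/machine-labels:
    team: payments
    environment: production
EOF

drpcli machines addprofile Name:<machine-name> team-payments-labels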

Node classification caveat

Applying the node-role.kubernetes.io/infra label alone is sufficient for scheduling (including the default IngressController wired up by this profile), but does not by itself create a separate infra MachineConfigPool or make the node subscription-exempt under Red Hat's Infrastructure Node licensing. Users needing either of those should additionally add an infra MachineConfig and a matching pool via openshift/extra-manifests.
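
If you do need a dedicated infra pool, a sketch of the kind of manifest to add through openshift/extra-manifests is below. It mirrors the MachineConfigPool shape from Red Hat's infrastructure-node documentation; the template file name is a placeholder:

Bash
cat > infra-machineconfigpool.yaml.tmpl <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, infra]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
EOF

drpcli templates upload infra-machineconfigpool.yaml.tmpl as infra-machineconfigpool.yaml.tmpl
# Then add "infra-machineconfigpool.yaml.tmpl" to the openshift/extra-manifests list (Step 11b)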

Step 12: Create the Cluster

Create a cluster configuration YAML file. Replace the placeholder values with your actual network information. A reference copy of this file is at openshift-deploy-tutorial-cluster-config.yaml.

Bash
CLUSTER_NAME=tutorial
CLUSTER_DOMAIN=k8s.local
MACHINE_CIDR=<your-machine-cidr>
OS_VERSION=${OS_VERSION:-4.21.1} # Use the resolved version from Step 4

cat > cluster-config.yaml <<EOF
---
Name: $CLUSTER_NAME
Profiles:
  - universal-application-openshift-cluster
  - openshift-cluster-${OS_VERSION}
Workflow: universal-start
Meta:
  BaseContext: openshift-client-runner
Params:
  broker/name: pool-broker
  broker-pool/pool: $CLUSTER_NAME
  openshift/cluster-domain: $CLUSTER_DOMAIN
  openshift/network/machineNetwork:
    - cidr: $MACHINE_CIDR
  openshift/network/serviceNetwork:
    - 172.30.0.0/16
  openshift/network/clusterNetwork:
    - hostPrefix: 23
      cidr: 10.128.0.0/14
  # DNS and load balancer options — see Step 11 for guidance
  openshift/enable-dns-zone: false    # set true to let DRP manage DNS records
  openshift/enable-internal-lb: false # set true to use OpenShift keepalived VIPs
EOF

drpcli clusters create - < cluster-config.yaml

Parameter: openshift/cluster-domain

The base domain used for all cluster DNS records. The installer creates api.<name>.<domain> and *.apps.<name>.<domain> from this value. Default: k8s.local.

Parameter: openshift/enable-dns-zone

When true, DRP automatically creates and maintains a DNS zone for the cluster. The zone contains A records for the API endpoint, internal API, wildcard ingress (*.apps), and every cluster node. Leave false when an external DNS provider already has the required records. Default: false.

Parameter: openshift/enable-internal-lb

When true, OpenShift deploys keepalived across the control plane nodes to provide high-availability VIP failover for the API and Ingress endpoints. You must also set openshift/api-vip and openshift/ingress-vip to unused IPs in the machine network. Leave false when an external load balancer is already directing traffic to the cluster. Default: false.

Enabling the internal load balancer — if you set openshift/enable-internal-lb: true, add the VIP addresses to the Params block:

YAML
  openshift/enable-internal-lb: true
  openshift/api-vip: <your-api-vip>         # unused IP within machine network
  openshift/ingress-vip: <your-ingress-vip>  # unused IP within machine network

Profile: openshift-cluster-<version>

Contains the version-specific installer binary URL, CoreOS ISO reference, and client tool URLs. Created by generate_openshift_artifacts.sh in Step 4. Use the exact resolved version printed by that script (e.g. 4.21.1) as $OS_VERSION above.

When drpcli clusters create runs, it auto-creates a cluster-specific profile named after your cluster (e.g. tutorial) and assigns the version profile to it:

cluster machine → tutorial profile → openshift-cluster-4.21.1 profile

Verify with:

Bash
drpcli profiles show $CLUSTER_NAME | jq '{Name: .Name, Profiles: .Profiles}'

GitOps Repository

To track cluster configuration in a Git repository using ArgoCD, add openshift/gitops-repo-url to the cluster's Params:

YAML
openshift/gitops-repo-url: https://git.example.com/my-org/my-cluster-config.git

If this parameter is not set, the GitOps setup step is skipped automatically.

External Registry (Airgap)

For disconnected registry deployments, add the openshift-config-container-registry profile to the Profiles list and configure openshift/external-registry.

Confirm the cluster was created:

Bash
drpcli clusters show Name:$CLUSTER_NAME | jq '{Name: .Name, Stage: .Stage, Workflow: .Workflow}'

Step 13: Monitor Deployment

The cluster deployment runs through three automated phases:

  1. Pre-provisioning — Generates install-config.yaml, agent configs, and NMState network configuration
  2. Resource provisioning — Machines boot the agent ISO and begin joining the cluster
  3. Post-provisioning — Waits for bootstrap, then installation, then installs the NMState operator

Monitor cluster pipeline progress:

Bash
drpcli clusters show Name:$CLUSTER_NAME | \
  jq '{Stage: .Stage, CurrentTask: .CurrentTask, JobState: .JobState}'

Watch member machine progress:

Bash
# $CLUSTER_NAME must be set in your shell before running this
watch -n 30 'drpcli machines list | jq -r ".[] | select(.Pool == \"'$CLUSTER_NAME'\") | \"\(.Name): \(.Stage)\""'

Deployment typically takes 30–60 minutes depending on hardware and network speed.

Troubleshooting

Check a machine's current task log:

Bash
drpcli machines show Name:<machine-name> | jq '{Stage: .Stage, CurrentTask: .CurrentTask}'

Once credentials are retrieved (Step 14), use these for deeper diagnosis:

Bash
oc get nodes
oc get clusteroperators
oc get clusterversion

Pull secret not set

If the cluster fails early with:

Text Only
Parameter openshift/pull-secret not in scope

The openshift/pull-secret parameter has not been set. Return to Step 2 and set it, then re-run the cluster workflow.


Part 3: Access the Cluster

Once the deployment completes, the cluster credentials are stored as parameters on the cluster object in DRP.

Step 14: Retrieve the kubeconfig

The openshift/kubeconfig parameter is set automatically when installation completes.

Bash
CLUSTER_NAME=tutorial

# Save the kubeconfig to a local file
mkdir -p ~/.kube
drpcli clusters get Name:$CLUSTER_NAME param openshift/kubeconfig --aggregate \
  > ~/.kube/${CLUSTER_NAME}-config

# Point kubectl/oc at the cluster
export KUBECONFIG=~/.kube/${CLUSTER_NAME}-config

Step 15: Retrieve the kubeadmin Password

Bash
drpcli clusters get Name:$CLUSTER_NAME param openshift/kubeadmin-password --aggregate

Security Note

The kubeadmin user is a temporary bootstrap admin. Red Hat recommends configuring an identity provider and removing the kubeadmin user once you have established another cluster administrator.
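
As a sketch of that hand-off, the commands below configure an HTPasswd identity provider and then remove the bootstrap secret. The username, password handling, and file names are placeholders; follow your organization's identity-provider choice (LDAP, OIDC, etc.) for production:

Bash
# Create an htpasswd file with one admin user (placeholder credentials)
htpasswd -c -B -b users.htpasswd clusteradmin '<strong-password>'

# Store it in openshift-config and register an HTPasswd identity provider
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: htpasswd_provider
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret
EOF

# Grant cluster-admin to the new user, then remove the bootstrap kubeadmin secret
oc adm policy add-cluster-role-to-user cluster-admin clusteradmin
oc delete secret kubeadmin -n kube-system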

Step 16: Verify Cluster Health

Confirm all nodes have joined and the cluster operators are healthy:

Bash
# All nodes should show Ready
oc get nodes

# All cluster operators should show Available=True, Progressing=False, Degraded=False
oc get clusteroperators

# Confirm the installed version
oc get clusterversion

Expected output for oc get nodes on a 3+3 cluster:

Text Only
NAME              STATUS   ROLES                  AGE   VERSION
cp-machine-1      Ready    control-plane,master   71m   v1.34.2
cp-machine-2      Ready    control-plane,master   71m   v1.34.2
cp-machine-3      Ready    control-plane,master   43m   v1.34.2
worker-machine-1  Ready    worker                 44m   v1.34.2
worker-machine-2  Ready    worker                 44m   v1.34.2
worker-machine-3  Ready    worker                 45m   v1.34.2

Step 17: Access the Web Console

Retrieve the console URL from DRP:

Bash
drpcli clusters get Name:$CLUSTER_NAME param openshift/console --aggregate

This returns the full URL, for example:

Text Only
https://console-openshift-console.apps.tutorial.k8s.local

Log in with username kubeadmin and the password retrieved in Step 15.

Parameter: openshift/console

The openshift/console parameter is set automatically on the cluster object when installation completes. You can also construct the URL manually: https://console-openshift-console.apps.<CLUSTER_NAME>.<CLUSTER_DOMAIN>

DNS for the Console

Your client must be able to resolve *.apps.<CLUSTER_NAME>.<CLUSTER_DOMAIN>.

External DNS (default) — if your DNS infrastructure manages the cluster domain, ensure your client's resolver can reach it. No additional DRP configuration is needed.

DRP-managed DNS — if you set openshift/enable-dns-zone: true, DRP holds the authoritative zone. Point your client at DRP for the cluster subdomain:

Windows:

PowerShell
Add-DnsClientNrptRule -Namespace ".k8s.local" -NameServers "<DRP-IP>"
Clear-DnsClientCache

Linux — add to your resolver config for the subdomain, or test directly:

Bash
dig @<DRP-IP> console-openshift-console.apps.tutorial.k8s.local


Part 4: Day-2 Cluster Operations

Once your cluster is running, DRP provides blueprints for ongoing lifecycle operations: adding and removing nodes, checking health, refreshing DNS, and eventually destroying the cluster. Each is invoked via drpcli clusters work_order add Name:<cluster> <blueprint>.

Step 18: Adding and Removing Nodes

The openshift-cluster-add-nodes blueprint reconciles pool membership with per-machine roles. For each machine in the cluster's pool:

  • Add: if the machine has openshift/role set (controlplane or worker) and is not yet a member of the cluster, the blueprint runs the join flow.
  • Remove: if the machine is a cluster member but is no longer eligible (its openshift/role has been cleared, or it has been removed from the pool), it becomes a candidate for removal — but by default removals are skipped as a safety measure.

To allow removals, set openshift/allow-node-removal: true on the cluster for the next blueprint run. Reset it to false afterward so subsequent runs cannot remove nodes unintentionally.

Adding a worker:

Bash
# 1) Identify the spare machine and put it in the cluster's pool with a role
drpcli machines addprofile Name:<new-worker> openshift-worker
drpcli pools manage add <cluster-name> "Name=<new-worker>"

# 2) Run the add-nodes blueprint
drpcli clusters work_order add Name:<cluster-name> openshift-cluster-add-nodes

Removing a worker (requires the safety flag):

Bash
# 1) Remove the role from the machine, or remove it from the pool
drpcli machines removeprofile Name:<old-worker> openshift-worker

# 2) Allow the removal for the next blueprint run
drpcli clusters set Name:<cluster-name> param openshift/allow-node-removal to true

# 3) Run the blueprint
drpcli clusters work_order add Name:<cluster-name> openshift-cluster-add-nodes

# 4) Reset the safety flag
drpcli clusters set Name:<cluster-name> param openshift/allow-node-removal to false

Always reset openshift/allow-node-removal

Leaving this parameter at true means every subsequent run of the add-nodes blueprint can remove nodes. Clearing it after each intentional removal is a defense against accidental cluster shrink during routine maintenance.

Step 19: Checking Cluster Status

The openshift-cluster-status blueprint runs oc commands against the cluster to report node and operator health. Use it to produce a point-in-time snapshot without shelling into a node.

Bash
drpcli clusters work_order add Name:<cluster-name> openshift-cluster-status

Results are written to the blueprint work order log — retrieve with:

Bash
drpcli jobs list State=finished Uuid=$(drpcli clusters show Name:<cluster-name> \
  | jq -re '.CurrentJob') | less

Step 20: Refreshing DNS

If you are using DRP-managed DNS (openshift/enable-dns-zone: true) and nodes have been renamed, moved to different IPs, or added/removed, the openshift-cluster-dns-refresh blueprint rebuilds the zone's A records:

Bash
drpcli clusters work_order add Name:<cluster-name> openshift-cluster-dns-refresh

Not needed when DNS is externally managed.

Step 21: Destroying the Cluster

To tear down the cluster and release its machines back to the pool:

Bash
drpcli clusters destroy Name:<cluster-name>

The destroy pipeline does three things:

  1. Runs openshift-install destroy cluster on the cluster's tooling machine (only relevant for IPI — in agent-based installs this is a no-op).
  2. Removes the cluster-specific profile (named after the cluster) from every member machine and resets each machine's openshift/role parameter. Machines are returned to the pool as eligible for future clusters.
  3. Destroys the cluster object in DRP. The per-cluster profile is retained but unassigned; delete it separately with drpcli profiles destroy <cluster-name> if you do not plan to reuse it.

Data retained vs. released

Cluster-scoped params (like openshift/kubeconfig and the kubeadmin password) live on the cluster object and are destroyed with it. Save these to a secure location before destroying if you need them for forensic work. Machine-level artifacts (the agent ISO, join files) are cleaned up during the workflow.
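
A quick sketch of saving those credentials locally before running the destroy; the output file names are arbitrary:

Bash
CLUSTER_NAME=tutorial

drpcli clusters get Name:$CLUSTER_NAME param openshift/kubeconfig --aggregate \
  > ${CLUSTER_NAME}-kubeconfig.bak
drpcli clusters get Name:$CLUSTER_NAME param openshift/kubeadmin-password --aggregate \
  > ${CLUSTER_NAME}-kubeadmin-password.bak
chmod 600 ${CLUSTER_NAME}-*.bak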


Quick Verification: Deploy a Test Application

Once your cluster is healthy, deploy a simple test application to confirm workloads run:

Bash
# Create a new project
oc new-project hello-openshift

# Deploy a test pod
kubectl create deployment hello-node \
  --image=registry.k8s.io/e2e-test-images/agnhost:2.43 \
  -- /agnhost serve-hostname

# Expose the service
oc expose deployment hello-node --port=9376
oc expose service hello-node

# Test it (substitute your cluster name and domain)
curl hello-node-hello-openshift.apps.tutorial.k8s.local

# Clean up
oc delete project hello-openshift

Reference: Key Parameters

All parameters are documented in the OpenShift content pack. The most important ones for initial deployment:

| Parameter | Required | Description |
| --- | --- | --- |
| openshift/pull-secret | Yes | Red Hat registry authentication (set in global profile) |
| openshift/cluster-domain | Yes | Base DNS domain for the cluster |
| broker/name | Yes | Resource broker (typically pool-broker) |
| openshift/role | Yes (per machine) | Node role: controlplane or worker |
| openshift/network/machineNetwork | Recommended | Node IP CIDR |
| openshift/network/serviceNetwork | Optional | Service IP CIDR (default: 172.30.0.0/16) |
| openshift/network/clusterNetwork | Optional | Pod network CIDR (default: 10.128.0.0/14) |
| openshift/enable-dns-zone | Optional | true to have DRP create and manage cluster DNS records (default: false) |
| openshift/enable-internal-lb | Optional | true to deploy OpenShift keepalived for VIP failover (default: false) |
| openshift/api-vip | Internal LB only | Virtual IP for API access (required when enable-internal-lb: true) |
| openshift/ingress-vip | Internal LB only | Virtual IP for application ingress (required when enable-internal-lb: true) |
| openshift/install-rootDeviceHints | Optional | Specify install target disk |
| network-data | Optional | Per-machine NMState network config (bonding, VLAN, static IP) |
| openshift/gitops-repo-url | Optional | GitOps repo URL for ArgoCD setup |
| ntp-servers | Optional | List of NTP servers — triggers chrony MachineConfig and additionalNTPSources injection |
| openshift/chrony-config-template | Optional | Custom chrony template name (default: chrony.conf.tmpl) |
| openshift/extra-manifests | Optional | List of DRP templates to render as extra OpenShift manifests |
| openshift/disable-autodhcp | Optional | Disable NetworkManager auto-DHCP on all interfaces (default: false) |
| openshift/machine-labels | Optional | Map of Kubernetes labels applied to this machine's OpenShift node after cluster install; compose-expanded across layered profiles; applied by the openshift-label-machines task |
| openshift/external-registry | Airgap only | Disconnected registry configuration |
| openshift/bootstrap-versions | Bootstrap only | Versions to download via bootstrap |
| openshift/allow-node-removal | Day 2 only | Set to true on the cluster to allow openshift-cluster-add-nodes to remove nodes (default: false) |

Reference: Deployment Tasks

The universal-application-openshift-cluster pipeline runs these tasks automatically:

| Task | Description |
| --- | --- |
| openshift-cluster-prep | Generates install-config.yaml and agent config |
| openshift-cluster-update-zone | Creates/updates DRP DNS zone records (no-op unless openshift/enable-dns-zone: true) |
| openshift-cluster-join | Builds agent boot ISO, boots nodes |
| openshift-cluster-wait-for-bootstrap-complete | Waits for control plane bootstrap |
| openshift-cluster-transition-to-installation | Signals nodes to proceed |
| openshift-cluster-wait-for-install-complete | Waits for full installation |
| openshift-cluster-nmstate-operator | Installs NMState operator |
| openshift-cluster-enable-gitops | Sets up ArgoCD (skipped if no repo URL set) |
| openshift-label-machines | Final post-install task; iterates cluster members and applies openshift/machine-labels to each corresponding node via oc label node |

Reference: Administrative Blueprints

After deployment, use these DRP blueprints for ongoing management:

| Blueprint | Description |
| --- | --- |
| openshift-cluster-status | Check cluster health and component status |
| openshift-cluster-add-nodes | Reconcile pool membership with openshift/role; removals require openshift/allow-node-removal: true. See Step 18. |
| openshift-cluster-dns-refresh | Refresh DNS configuration |