Deploying OpenShift with Digital Rebar Platform (DRP)¶
This tutorial walks you through deploying a production OpenShift cluster using Digital Rebar Platform (DRP). It is organized into three parts:
- Stage DRP Content — Get the right bits onto your DRP server
- Deploy an OpenShift Cluster — Create the cluster and let automation do the work
- Access the Cluster — Retrieve credentials and verify health
Along the way, you can optionally configure a GitOps tracking repo so cluster state is stored in Git for auditing.
Contents¶
- Prerequisites
- Firewall Requirements
- Part 1: Stage DRP Content
- Part 2: Deploy an OpenShift Cluster
- Part 3: Access the Cluster
- Reference: Key Parameters
- Reference: Deployment Tasks
- Reference: Administrative Blueprints
Prerequisites¶
Before starting, confirm you have the following:
- DRP installed with a static IP address reachable by all nodes (if needed, add `--static-ip <IP>` to `/etc/systemd/system/dr-provision.service`)
- `drpcli` installed and authenticated against your DRP endpoint
- A valid Red Hat account with an active OpenShift subscription
- Machines that meet minimum specs for your chosen cluster topology:
OpenShift supports three deployment topologies — choose the one that fits your environment:
| Topology | Control Plane Nodes | Worker Nodes | Typical Use |
|---|---|---|---|
| Single Node (SNO) | 1 | 0 | Testing, edge, resource-constrained |
| Compact (3-node) | 3 | 0 | HA control plane, workloads on control plane nodes |
| Full cluster | 3 | 2+ | Production, separate worker pool |
Minimum hardware specs per node:
| Role | vCPUs | RAM | Disk |
|---|---|---|---|
| Control Plane (full cluster) | 4 | 16 GB | 100 GB |
| Worker | 2 | 8 GB | 100 GB |
| Control Plane (SNO or compact — combined role) | 8 | 32 GB | 100 GB |
Hardware Uniformity
Control plane nodes must have identical hardware specifications. Worker nodes may vary.
Firewall Requirements¶
The following outbound HTTPS connections must be allowed from your DRP server and cluster nodes. All connections use port 443.
DRP Server¶
| Hostname | Used by | Purpose |
|---|---|---|
| `get.rebar.digital` | DRP server | Download DRP content packs and context container images |
| `mirror.openshift.com` | DRP server | Download OpenShift installer, `oc`, and `oc-mirror` binaries |
| `rhcos.mirror.openshift.com` | DRP server | Download RHCOS live ISO |
Cluster Nodes (during and after installation)¶
| Hostname | Purpose |
|---|---|
| `quay.io` | Pull OpenShift release images (`openshift-release-dev/*`) |
| `registry.redhat.io` | Pull Red Hat base images (ubi8, openshift4, operators) |
| `registry.connect.redhat.com` | Pull certified operator and marketplace images |
| `cloud.openshift.com` | Cluster telemetry and subscription management |
Conditional¶
| Hostname | Condition | Purpose |
|---|---|---|
| Your GitOps repo host | If `openshift/gitops-repo-url` is set | ArgoCD pulls cluster config |
| Your external registry | Airgap deployments only | Target for `openshift/external-registry` |
Human Browser Access Only
console.redhat.com is accessed by a human to download the pull secret.
No firewall rule is needed for DRP or cluster nodes to reach it.
Part 1: Stage DRP Content¶
This section downloads and stages the OpenShift binaries, CoreOS ISO, and content pack onto your DRP server. If your DRP server can reach the internet, this is largely automated. Airgap differences are called out inline.
Step 1: Obtain Your Pull Secret¶
OpenShift requires a pull secret to authenticate with Red Hat's container registries. You must obtain this before deploying.
- Log in to Red Hat OpenShift Cluster Manager
- Click Copy pull secret or Download pull secret
- Save the JSON to a file on your DRP server, e.g.
~/pull_secret
The pull secret is a JSON object containing credentials for the following registries (example below):
{
"auths": {
"cloud.openshift.com": { "auth": "...", "email": "you@example.com" },
"quay.io": { "auth": "...", "email": "you@example.com" },
"registry.connect.redhat.com": { "auth": "...", "email": "you@example.com" },
"registry.redhat.io": { "auth": "...", "email": "you@example.com" }
}
}
Step 2: Set the Pull Secret in the Global Profile¶
By storing the pull secret in DRP's global profile, it is automatically available to all
cluster deployments without needing to paste it every time.
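A minimal sketch of the command, using drpcli's `param ... to -` stdin form shown elsewhere in this tutorial and assuming the secret was saved to `~/pull_secret` in Step 1:

```shell
# Store the pull secret JSON as openshift/pull-secret on the global profile
drpcli profiles set global param openshift/pull-secret to - < ~/pull_secret
```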
Verify it was saved:
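For example:

```shell
# Print the stored pull secret back from the global profile
drpcli profiles get global param openshift/pull-secret
```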
You should see the auths JSON structure returned.
Parameter: openshift/pull-secret
This parameter stores the authentication secret required to pull container images from Red Hat's container registries. It is mandatory for cluster deployment and is automatically used by the OpenShift installer.
Step 3: Install DRP Content Packs¶
Four base content packs must be loaded into DRP before staging OpenShift artifacts. These provide the core tasks, boot environments, and OpenShift automation templates.
Connected environment:
drpcli catalog item install universal
drpcli catalog item install drp-community-content
drpcli catalog item install coreos
drpcli catalog item install openshift
Airgap
In an airgapped environment, obtain the YAML files from your internal catalog or from a connected system and upload them directly:
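A sketch of the uploads, assuming the four YAML files were copied into the current directory (filenames are illustrative):

```shell
# Upload each content pack YAML directly to DRP
for pack in universal drp-community-content coreos openshift; do
  drpcli contents upload ${pack}.yaml
done
```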
Verify all four are loaded:
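One way to check, assuming content summaries expose the pack name under `.meta.Name`:

```shell
# List installed content packs; look for universal, drp-community-content, coreos, openshift
drpcli contents list | jq -r '.[].meta.Name'
```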
Step 4: Stage the OpenShift Version Artifacts¶
The OpenShift content bundle provides DRP with the tasks, profiles, and templates needed to deploy a cluster. It also requires version-specific artifacts: the installer binary, CLI tools, and the CoreOS ISO.
Option A: Bootstrap via Self-Runner (Easiest)¶
If your DRP server can reach the internet, the easiest method is to use the DRP self-runner machine's bootstrap workflow. This downloads everything and installs the context container automatically.
SELF_RUNNER=drp-demo # Replace with your self-runner machine name
drpcli machines update Name:$SELF_RUNNER '{ "Locked": false }'
drpcli machines addprofile Name:$SELF_RUNNER bootstrap-contexts
drpcli machines addprofile Name:$SELF_RUNNER bootstrap-openshift-client-runner
drpcli machines addprofile Name:$SELF_RUNNER bootstrap-openshift-contents
drpcli machines set Name:$SELF_RUNNER param openshift/bootstrap-versions to '["latest-4.21"]'
# Run the rebootstrap-drp blueprint
drpcli machines work_order add Name:$SELF_RUNNER rebootstrap-drp
Parameter: openshift/bootstrap-versions
Specifies which OpenShift versions to download and stage. Accepts a list of version
strings such as latest-4.21 (resolves to the current stable 4.21.x release) or
exact versions like 4.21.1. Multiple versions can be staged for environments that
deploy different OpenShift releases.
Option B: Script-Based (Connected, More Control)¶
From the directory containing your OpenShift content repo, run the artifact generation script to download and stage all required files:
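A typical invocation might look like the following; the script name matches the airgap example in Option C, and running it without `--download` in connected mode is an assumption:

```shell
OS_VERSION=latest-4.21
# Resolve, download, and stage the version artifacts
./generate_openshift_artifacts.sh --version $OS_VERSION
```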
This script:
- Resolves latest-4.21 to the current stable release (e.g. 4.21.1)
- Downloads the OpenShift installer, CLI tools, and mirror tool
- Generates the version-specific content pack
- Prints the path to the generated install.sh
Run the install.sh printed by the script to upload everything to DRP:
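For example (the exact path is printed by the generator script, so adjust accordingly):

```shell
# Run the generated installer from the path printed by generate_openshift_artifacts.sh
bash ./install.sh
```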
Retrieving the script from DRP
If you don't have the local repo, you can render the script directly from DRP:
Option C: Airgap (DRP cannot reach the internet)¶
Use the --download flag to create an upload bundle on a system that can reach the
internet, then transfer it to your DRP server.
On the internet-connected system:
OS_VERSION=latest-4.21
./generate_openshift_artifacts.sh --version $OS_VERSION --download
# Creates oc_content/bundles/openshift-upload-bundle-<VERSION>.tgz
Copy the bundle to your DRP server, then upload:
OS_VERSION=4.21.1 # Use the resolved version printed above
drpcli files upload openshift-upload-bundle-${OS_VERSION}.tgz --explode
drpcli files download redhat/openshift/openshift-cluster-${OS_VERSION}.yaml \
as openshift-cluster-${OS_VERSION}.yaml
drpcli contents upload openshift-cluster-${OS_VERSION}.yaml
Note
In the airgap path, openshift-cluster-<VERSION>.yaml is created and uploaded
manually. The auto-bootstrap profile (used in Option A) is not available, but the
content pack functions identically once uploaded.
The openshift-client-runner context container image must also be downloaded
and imported separately — see Step 5, Option C.
Verify the Content is Loaded¶
Confirm the version-specific profile is present:
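For example:

```shell
# Look for the version-specific cluster profile
drpcli profiles list | jq -r '.[].Name' | grep openshift-cluster
```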
You should see openshift-cluster-4.21.1 (or your resolved version) in the output.
Step 5: Install the openshift-client-runner Context¶
DRP uses a container context (openshift-client-runner) to run the OpenShift CLI tools
(oc, kubectl) during cluster operations. This is a Fedora-based container image
that is not included in the install.sh script — it must be installed separately.
Confirm whether it is already installed:
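For example:

```shell
# Check whether the context already exists
drpcli contexts list | jq -r '.[].Name' | grep openshift-client-runner
```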
Expected output: openshift-client-runner
If it is missing, choose one of the following methods:
Option A: Bootstrap (DRP server can reach internet)¶
Follow the bootstrap steps from Step 4, Option A. The bootstrap-contexts and
bootstrap-openshift-client-runner profiles both handle context installation.
Option B: Manual install via CLI (connected)¶
IMAGE=$(drpcli contexts show openshift-client-runner 2>/dev/null | jq -re '.Image' \
|| echo 'openshift-client-runner_v1.2.22')
# Download from RackN, stage in DRP file store, and load into the docker-context plugin
drpcli files upload https://get.rebar.digital/containers/${IMAGE}.tar.gz \
as contexts/docker-context/${IMAGE}
drpcli plugins runaction docker-context imageUpload \
context/image-name $IMAGE \
context/image-path files/contexts/docker-context/$IMAGE
Option C: Airgap — download on a connected system, import on DRP¶
On a system that has internet access (drpcli is not required — the command below falls back to a default image name if drpcli is absent):
# Determine the image name — check your openshift content pack version or use the
# default below (update the version number if deploying a different release)
IMAGE=$(drpcli contexts show openshift-client-runner 2>/dev/null | jq -re '.Image' \
|| echo 'openshift-client-runner_v1.2.22')
wget "https://get.rebar.digital/containers/${IMAGE}.tar.gz"
Transfer the .tar.gz to your DRP server, then upload and load it:
IMAGE=openshift-client-runner_v1.2.22 # set to the filename you downloaded
drpcli files upload ${IMAGE}.tar.gz as contexts/docker-context/${IMAGE}
drpcli plugins runaction docker-context imageUpload \
context/image-name $IMAGE \
context/image-path files/contexts/docker-context/$IMAGE
Verify it loaded:
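For example:

```shell
# Show the context and its backing image
drpcli contexts show openshift-client-runner | jq '{Name, Image}'
```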
get.rebar.digital
Context images (~130 MB) are served from https://get.rebar.digital/containers/.
The filename is always <image-name>.tar.gz where the image name includes the
version (e.g. openshift-client-runner_v1.2.22). Check the installed context for
the exact version: drpcli contexts show openshift-client-runner | jq -r '.Image'
Part 2: Deploy an OpenShift Cluster¶
With content staged, you are ready to deploy. This section walks through assigning machine roles, adding hardware profiles, creating a resource pool, and launching the cluster pipeline.
Step 6: Identify Your Machines¶
List the machines available for deployment:
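For example, a compact listing (the jq projection is illustrative):

```shell
# Show candidate machines with their current pool and stage
drpcli machines list | jq -r '.[] | "\(.Name)\t\(.Pool)\t\(.Stage)"'
```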
Choose the correct number of machines for your cluster topology (see Prerequisites):
- Single Node: 1 machine (acts as both control plane and worker)
- Compact (3-node): 3 machines (control plane nodes also run workloads)
- Full cluster: 3 control plane + 2 or more workers
Note their names — you will use them in the following steps.
Step 7: Apply Hardware Profiles¶
Before assigning OpenShift roles, apply any site-specific hardware profiles to your machines. These profiles configure BMC/BIOS settings, RAID, network bonding, and disk selection appropriate for your hardware vendor and environment.
# Example: apply your site hardware profile to each machine
drpcli machines addprofile Name:<machine-name> <your-hardware-profile>
Common hardware profile categories:
- Disk selection: Use the openshift/install-rootDeviceHints parameter to specify
which disk OpenShift installs to (e.g. by serial number, size, or model). This is
especially important on servers with multiple disks.
- Network bonding/VLAN: Add profiles that configure interface bonding or VLAN
tagging on the nodes.
- BIOS/RAID: Vendor-specific profiles (e.g. for Dell, HPE, or Lenovo) that configure
hardware before the OS boots.
Note
If your machines have a single disk and no special hardware configuration, you can skip this step. The OpenShift installer will use the first available disk by default.
Step 8: Configure Network Data (Optional)¶
If your cluster nodes require custom network configuration — bonded interfaces, VLANs,
static IP addresses, or jumbo frames — set the network-data parameter on each machine
before deployment. This drives the NMState configuration injected into each node's agent
boot ISO.
Parameter: network-data
This parameter is optional. If omitted, CoreOS uses DHCP on the first available network interface. Set it when your nodes need:
- Static IP addresses
- Bonded interfaces or LACP aggregation
- VLAN tagging
- Custom MTU (e.g. jumbo frames)
- Specific DNS servers or gateway
The network-data parameter is a map with a prod key that describes the primary
machine network interface used by OpenShift.
Example 1: DHCP (default — no configuration needed)
If your nodes get IP addresses via DHCP, skip this step entirely.
Example 2: Static IP, single NIC
drpcli machines set Name:<machine-name> param network-data to - <<EOF
prod:
address: 192.168.1.101
prefix: 24
gateway: 192.168.1.1
dns-servers:
- 192.168.1.1
interface: eth0
dhcp: 'false'
EOF
Example 3: Bonded interfaces with VLAN and static IP
drpcli machines set Name:<machine-name> param network-data to - <<EOF
prod:
address: 10.102.147.24
prefix: 26
gateway: 10.102.147.1
dns-servers:
- 10.80.1.222
- 10.80.2.222
interface: eno12399np0,ens1f0np0
bond: bond0
vlan: '206'
dhcp: 'false'
mtu: '9000'
link-aggregation:
mode: 802.3ad
options:
lacp_rate: '1'
miimon: '100'
xmit_hash_policy: layer2+3
EOF
Bonded VLAN interface naming
When both bond and vlan are set, the template creates a bond interface (e.g.
bond0) with a VLAN sub-interface on top (e.g. bond0.206). The interface field
takes a comma-separated list of physical NICs to include in the bond.
Repeat for each machine in the cluster, substituting the correct IP address per machine.
Step 9: Assign Machine Roles¶
Assign each machine an OpenShift role by adding the appropriate profile. Control plane and worker roles are mutually exclusive.
# Assign control plane role to each control plane machine
drpcli machines addprofile Name:<cp-machine-1> openshift-controlplane
drpcli machines addprofile Name:<cp-machine-2> openshift-controlplane # skip for SNO
drpcli machines addprofile Name:<cp-machine-3> openshift-controlplane # skip for SNO
# Assign worker role — only needed for full clusters with separate worker nodes
drpcli machines addprofile Name:<worker-machine-1> openshift-worker
drpcli machines addprofile Name:<worker-machine-2> openshift-worker
drpcli machines addprofile Name:<worker-machine-3> openshift-worker
Workers are optional
For Single Node and Compact (3-node) deployments, skip the worker role assignment entirely. Control plane nodes in these topologies also accept workloads.
Profile: openshift-controlplane / openshift-worker
These profiles set the openshift/role parameter on the machine automatically.
Valid values for openshift/role are controlplane and worker. You can also
set the parameter directly:
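For example (equivalent to adding the `openshift-controlplane` profile):

```shell
# Set the role parameter directly on a machine
drpcli machines set Name:<machine-name> param openshift/role to controlplane
```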
Verify the roles were set (using aggregate=true to resolve values set via profiles):
drpcli machines list aggregate=true | \
jq -r '.[] | select(.Params | has("openshift/role")) | "\(.Name): \(.Params["openshift/role"])"'
Step 10: Create a Pool and Add Machines¶
The cluster pipeline discovers machines through a DRP pool. Create a pool named after your cluster and add all cluster machines to it.
CLUSTER_NAME=tutorial
# Add control plane machines
drpcli pools manage add $CLUSTER_NAME "Name=<cp-machine-1>"
drpcli pools manage add $CLUSTER_NAME "Name=<cp-machine-2>"
drpcli pools manage add $CLUSTER_NAME "Name=<cp-machine-3>"
# Add worker machines
drpcli pools manage add $CLUSTER_NAME "Name=<worker-machine-1>"
drpcli pools manage add $CLUSTER_NAME "Name=<worker-machine-2>"
drpcli pools manage add $CLUSTER_NAME "Name=<worker-machine-3>"
Verify all machines are in the pool:
drpcli machines list | jq -r \
'.[] | select(.Pool == "'$CLUSTER_NAME'") | "\(.Name): \(.Params["openshift/role"])"'
Step 11: Gather Network Information¶
Before creating the cluster, collect the following network details for your environment. The machine, service, and cluster networks must not overlap.
| Parameter | Required | Description | Example |
|---|---|---|---|
| `openshift/cluster-domain` | Yes | Base DNS domain for the cluster | `k8s.local` |
| `openshift/network/machineNetwork` | Yes | CIDR for node IP addresses | `10.0.0.0/20` |
| `openshift/network/serviceNetwork` | Optional | CIDR for Kubernetes services (internal) | `172.30.0.0/16` |
| `openshift/network/clusterNetwork` | Optional | CIDR for pod networking (internal) | `10.128.0.0/14` |
| `openshift/api-vip` | Internal LB only | Virtual IP for the Kubernetes API (from the Machine Network) | `10.0.1.55` |
| `openshift/ingress-vip` | Internal LB only | Virtual IP for application routes (from the Machine Network) | `10.0.1.56` |
Default Networks
The service and cluster networks use defaults that work for most deployments
(172.30.0.0/16 and 10.128.0.0/14). Only change these if they conflict with your
existing network infrastructure.
DNS / Load Balancer Configuration¶
Two parameters control how DNS is resolved for the cluster:
| Parameter | Default | Description |
|---|---|---|
| `openshift/enable-dns-zone` | `false` | When `true`, DRP creates and manages a DNS zone for the cluster domain |
| `openshift/enable-internal-lb` | `false` | When `true`, OpenShift deploys keepalived to manage the API and Ingress VIPs |
External DNS (default) — Leave both parameters at their defaults when you have existing DNS infrastructure (Active Directory, BIND, Route 53, etc.) and an external load balancer or upstream proxy handling the API endpoint. You are responsible for creating the required DNS records before the cluster boots:
- `api.<cluster-name>.<domain>` → your API load balancer IP
- `api-int.<cluster-name>.<domain>` → your API load balancer IP
- `*.apps.<cluster-name>.<domain>` → your Ingress load balancer IP
DRP-managed DNS — Set openshift/enable-dns-zone: true when no external DNS is
available and DRP is the DNS resolver for the cluster network. DRP will create and
maintain all required records automatically as nodes join or are replaced. When enabled,
openshift/api-vip and openshift/ingress-vip must both be set.
OpenShift internal load balancer — Set openshift/enable-internal-lb: true for
bare-metal or on-premises environments without an external load balancer. OpenShift will
deploy keepalived across the control plane nodes to provide VIP failover. When enabled,
openshift/api-vip and openshift/ingress-vip must both be set to free IPs within the
Machine Network CIDR.
VIP Requirements (internal LB only)
When openshift/enable-internal-lb is true, the API VIP and Ingress VIP must be
within the Machine Network CIDR and must not be assigned to any existing machine.
VIPs are not required when using an external load balancer.
Common combinations
| Environment | `enable-dns-zone` | `enable-internal-lb` | VIPs needed? |
|---|---|---|---|
| Corporate DC with external DNS + LB | `false` | `false` | No (LB handles them) |
| On-prem, DRP as DNS, no external LB | `true` | `true` | Yes |
| On-prem, external DNS, no external LB | `false` | `true` | Yes |
| On-prem, DRP as DNS, external LB | `true` | `false` | No |
Step 12: Create the Cluster¶
Create a cluster configuration YAML file. Replace the placeholder values with your actual network information. A reference copy of this file is at openshift-deploy-tutorial-cluster-config.yaml.
CLUSTER_NAME=tutorial
CLUSTER_DOMAIN=k8s.local
MACHINE_CIDR=<your-machine-cidr>
OS_VERSION=${OS_VERSION:-4.21.1} # Use the resolved version from Step 4
cat > cluster-config.yaml <<EOF
---
Name: $CLUSTER_NAME
Profiles:
- universal-application-openshift-cluster
- openshift-cluster-${OS_VERSION}
Workflow: universal-start
Meta:
BaseContext: openshift-client-runner
Params:
broker/name: pool-broker
broker-pool/pool: $CLUSTER_NAME
openshift/cluster-domain: $CLUSTER_DOMAIN
openshift/network/machineNetwork:
- cidr: $MACHINE_CIDR
openshift/network/serviceNetwork:
- 172.30.0.0/16
openshift/network/clusterNetwork:
- hostPrefix: 23
cidr: 10.128.0.0/14
# DNS and load balancer options — see Step 11 for guidance
openshift/enable-dns-zone: false # set true to let DRP manage DNS records
openshift/enable-internal-lb: false # set true to use OpenShift keepalived VIPs
EOF
drpcli clusters create - < cluster-config.yaml
Parameter: openshift/cluster-domain
The base domain used for all cluster DNS records. The installer creates
api.<name>.<domain> and *.apps.<name>.<domain> from this value.
Default: k8s.local.
Parameter: openshift/enable-dns-zone
When true, DRP automatically creates and maintains a DNS zone for the cluster.
The zone contains A records for the API endpoint, internal API, wildcard ingress
(*.apps), and every cluster node. Leave false when an external DNS provider
already has the required records. Default: false.
Parameter: openshift/enable-internal-lb
When true, OpenShift deploys keepalived across the control plane nodes to provide
high-availability VIP failover for the API and Ingress endpoints. You must also set
openshift/api-vip and openshift/ingress-vip to unused IPs in the machine network.
Leave false when an external load balancer is already directing traffic to the cluster.
Default: false.
Enabling the internal load balancer — if you set openshift/enable-internal-lb: true,
add the VIP addresses to the Params block:
openshift/enable-internal-lb: true
openshift/api-vip: <your-api-vip> # unused IP within machine network
openshift/ingress-vip: <your-ingress-vip> # unused IP within machine network
Profile: openshift-cluster-<version>
Contains the version-specific installer binary URL, CoreOS ISO reference, and client
tool URLs. Created by generate_openshift_artifacts.sh in Step 4. Use the exact
resolved version printed by that script (e.g. 4.21.1) as $OS_VERSION above.
When drpcli clusters create runs, it auto-creates a cluster-specific profile named
after your cluster (e.g. tutorial) and assigns the version profile to it:
cluster machine → tutorial profile → openshift-cluster-4.21.1 profile
Verify with:
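One check, using the cluster object's Profiles list:

```shell
# The list should include both the cluster profile and the version profile
drpcli clusters show Name:$CLUSTER_NAME | jq '.Profiles'
```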
GitOps Repository
To track cluster configuration in a Git repository using ArgoCD, add
openshift/gitops-repo-url to the cluster's Params:
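For example (the repo URL below is hypothetical; substitute your Git host):

```shell
# Hypothetical repo URL — replace with your own GitOps repository
drpcli clusters set Name:$CLUSTER_NAME param openshift/gitops-repo-url \
  to "https://git.example.com/platform/cluster-config.git"
```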
If this parameter is not set, the GitOps setup step is skipped automatically.
External Registry (Airgap)
For disconnected registry deployments, add the openshift-config-container-registry
profile to the Profiles list and configure openshift/external-registry.
Confirm the cluster was created:
drpcli clusters show Name:$CLUSTER_NAME | jq '{Name: .Name, Stage: .Stage, Workflow: .Workflow}'
Step 13: Monitor Deployment¶
The cluster deployment runs through three automated phases:
- Pre-provisioning — Generates `install-config.yaml`, agent configs, and NMState network configuration
- Resource provisioning — Machines boot the agent ISO and begin joining the cluster
- Post-provisioning — Waits for bootstrap, then installation, then installs the NMState operator
Monitor cluster pipeline progress:
drpcli clusters show Name:$CLUSTER_NAME | \
jq '{Stage: .Stage, CurrentTask: .CurrentTask, JobState: .JobState}'
Watch member machine progress:
# $CLUSTER_NAME must be set in your shell before running this
watch -n 30 'drpcli machines list | jq -r ".[] | select(.Pool == \"'$CLUSTER_NAME'\") | \"\(.Name): \(.Stage)\""'
Deployment typically takes 30–60 minutes depending on hardware and network speed.
Troubleshooting
Check a machine's current task log:
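One approach, assuming your drpcli version supports `jobs log`:

```shell
# Find the machine's current job, then fetch its log
JOB=$(drpcli machines show Name:<machine-name> | jq -r '.CurrentJob')
drpcli jobs log $JOB
```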
Once credentials are retrieved (Step 14), use these for deeper diagnosis:
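For example, with the kubeconfig saved in Step 14:

```shell
export KUBECONFIG=~/.kube/${CLUSTER_NAME}-config
# Degraded or unavailable operators usually point at the failing component
oc get clusteroperators
oc get nodes -o wide
```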
Pull secret not set
If the cluster fails early with a pull-secret error, the
openshift/pull-secret parameter has not been set. Return to
Step 2 and set it, then re-run
the cluster workflow.
Part 3: Access the Cluster¶
Once the deployment completes, the cluster credentials are stored as parameters on the cluster object in DRP.
Step 14: Retrieve the kubeconfig¶
The openshift/kubeconfig parameter is set automatically when installation completes.
CLUSTER_NAME=tutorial
# Save the kubeconfig to a local file
mkdir -p ~/.kube
drpcli clusters get Name:$CLUSTER_NAME param openshift/kubeconfig --aggregate \
> ~/.kube/${CLUSTER_NAME}-config
# Point kubectl/oc at the cluster
export KUBECONFIG=~/.kube/${CLUSTER_NAME}-config
Step 15: Retrieve the kubeadmin Password¶
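A retrieval sketch modeled on the kubeconfig step; the parameter name `openshift/kubeadmin-password` is an assumption, so check your content pack's parameter list:

```shell
CLUSTER_NAME=tutorial
# Parameter name is an assumption — verify it against your openshift content pack
drpcli clusters get Name:$CLUSTER_NAME param openshift/kubeadmin-password --aggregate
```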
Security Note
The kubeadmin user is a temporary bootstrap admin. Red Hat recommends configuring
an identity provider and removing the kubeadmin user once you have established
another cluster administrator.
Step 16: Verify Cluster Health¶
Confirm all nodes have joined and the cluster operators are healthy:
# All nodes should show Ready
oc get nodes
# All cluster operators should show Available=True, Progressing=False, Degraded=False
oc get clusteroperators
# Confirm the installed version
oc get clusterversion
Expected output for oc get nodes on a 3+3 cluster:
NAME STATUS ROLES AGE VERSION
cp-machine-1 Ready control-plane,master 71m v1.34.2
cp-machine-2 Ready control-plane,master 71m v1.34.2
cp-machine-3 Ready control-plane,master 43m v1.34.2
worker-machine-1 Ready worker 44m v1.34.2
worker-machine-2 Ready worker 44m v1.34.2
worker-machine-3 Ready worker 45m v1.34.2
Step 17: Access the Web Console¶
Retrieve the console URL from DRP:
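The URL is stored in the `openshift/console` cluster parameter:

```shell
# Read the console URL from the cluster object
drpcli clusters get Name:$CLUSTER_NAME param openshift/console --aggregate
```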
This returns the full URL, for example: `https://console-openshift-console.apps.tutorial.k8s.local`
Log in with username kubeadmin and the password retrieved in Step 15.
Parameter: openshift/console
The openshift/console parameter is set automatically on the cluster object when
installation completes. You can also construct the URL manually:
https://console-openshift-console.apps.<CLUSTER_NAME>.<CLUSTER_DOMAIN>
DNS for the Console
Your client must be able to resolve *.apps.<CLUSTER_NAME>.<CLUSTER_DOMAIN>.
External DNS (default) — if your DNS infrastructure manages the cluster domain, ensure your client's resolver can reach it. No additional DRP configuration is needed.
DRP-managed DNS — if you set openshift/enable-dns-zone: true, DRP holds the
authoritative zone. Point your client at DRP for the cluster subdomain:
Windows:
Add-DnsClientNrptRule -Namespace ".k8s.local" -NameServers "<DRP-IP>"
Clear-DnsClientCache
Linux — add to your resolver config for the subdomain, or test directly:
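For example, a direct query with `dig` (substitute your DRP server's IP):

```shell
# Query DRP directly for the console record
dig @<DRP-IP> console-openshift-console.apps.tutorial.k8s.local +short
```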
Quick Verification: Deploy a Test Application¶
Once your cluster is healthy, deploy a simple test application to confirm workloads run:
# Create a new project
oc new-project hello-openshift
# Deploy a test pod
kubectl create deployment hello-node \
--image=registry.k8s.io/e2e-test-images/agnhost:2.43 \
-- /agnhost serve-hostname
# Expose the service
oc expose deployment hello-node --port=9376
oc expose service hello-node
# Test it (substitute your cluster name and domain)
curl hello-node-hello-openshift.apps.tutorial.k8s.local
# Clean up
oc delete project hello-openshift
Reference: Key Parameters¶
All parameters are documented in the OpenShift content pack. The most important ones for initial deployment:
| Parameter | Required | Description |
|---|---|---|
| `openshift/pull-secret` | Yes | Red Hat registry authentication (set in global profile) |
| `openshift/cluster-domain` | Yes | Base DNS domain for the cluster |
| `broker/name` | Yes | Resource broker (typically `pool-broker`) |
| `openshift/role` | Yes (per machine) | Node role: `controlplane` or `worker` |
| `openshift/network/machineNetwork` | Recommended | Node IP CIDR |
| `openshift/network/serviceNetwork` | Optional | Service IP CIDR (default: `172.30.0.0/16`) |
| `openshift/network/clusterNetwork` | Optional | Pod network CIDR (default: `10.128.0.0/14`) |
| `openshift/enable-dns-zone` | Optional | `true` to have DRP create and manage cluster DNS records (default: `false`) |
| `openshift/enable-internal-lb` | Optional | `true` to deploy OpenShift keepalived for VIP failover (default: `false`) |
| `openshift/api-vip` | Internal LB only | Virtual IP for API access (required when `enable-internal-lb: true`) |
| `openshift/ingress-vip` | Internal LB only | Virtual IP for application ingress (required when `enable-internal-lb: true`) |
| `openshift/install-rootDeviceHints` | Optional | Specify install target disk |
| `network-data` | Optional | Per-machine NMState network config (bonding, VLAN, static IP) |
| `openshift/gitops-repo-url` | Optional | GitOps repo URL for ArgoCD setup |
| `openshift/external-registry` | Airgap only | Disconnected registry configuration |
| `openshift/bootstrap-versions` | Bootstrap only | Versions to download via bootstrap |
Reference: Deployment Tasks¶
The universal-application-openshift-cluster pipeline runs these tasks automatically:
| Task | Description |
|---|---|
| `openshift-cluster-prep` | Generates `install-config.yaml` and agent config |
| `openshift-cluster-update-zone` | Creates/updates DRP DNS zone records (no-op unless `openshift/enable-dns-zone: true`) |
| `openshift-cluster-join` | Builds agent boot ISO, boots nodes |
| `openshift-cluster-wait-for-bootstrap-complete` | Waits for control plane bootstrap |
| `openshift-cluster-transition-to-installation` | Signals nodes to proceed |
| `openshift-cluster-wait-for-install-complete` | Waits for full installation |
| `openshift-cluster-nmstate-operator` | Installs NMState operator |
| `openshift-cluster-enable-gitops` | Sets up ArgoCD (skipped if no repo URL set) |
Reference: Administrative Blueprints¶
After deployment, use these DRP blueprints for ongoing management:
| Blueprint | Description |
|---|---|
| `openshift-cluster-status` | Check cluster health and component status |
| `openshift-cluster-add-nodes` | Add or remove nodes from a running cluster |
| `openshift-cluster-dns-refresh` | Refresh DNS configuration |