Deploying Hub-Spoke OpenShift Clusters with DRP

This tutorial covers DRP's hub/spoke OpenShift topology: a single hub cluster running Red Hat Advanced Cluster Management (ACM) or Open Cluster Management (OCM), and one or more spoke clusters that register with the hub for centralized policy, observability, and lifecycle management.

DRP owns the infrastructure provisioning side:

  • Provisioning the hub cluster (including invoking your ACM/OCM init playbook)
  • Provisioning each spoke cluster
  • Carrying the hub's kubeconfig into each spoke so the spoke's registration task can apply the ACM auto-import manifests

What DRP does not own:

  • The ACM/OCM init playbook. The hub-init task runs a customer-specific ansible-playbook from a git repo you supply. The repo format is Red Hat specific and is typically derived from redhat-cop/gitops-standards-repo-template. You will typically work with Red Hat services to generate the playbook — this tutorial treats it as external.
  • The cluster provisioning flow — the basics (hardware, pull secret, network configuration, VIPs) are identical to the single-cluster tutorial. This tutorial only covers the hub-spoke deltas; read Deploying OpenShift with DRP first.

Prerequisites

Before starting:

  • Complete Steps 1–5 of Deploying OpenShift with DRP: DRP is installed, the pull secret is in the global profile, and the OpenShift content packs plus version artifacts are staged. The openshift-client-runner context is available.
  • Enough hardware for two clusters: one hub (minimum 3 control plane nodes) and one spoke (minimum 1 control plane node for SNO, 3 for compact, 3+2 for full). Hardware minimums per cluster match the base tutorial's prerequisites table.
  • A git repository (SSH URL form — e.g. git@github.com:your-org/hub-init.git) containing an ACM/OCM init playbook authored to bootstrap your hub. If you do not yet have one, see the Red Hat template linked above.
  • An SSH private key with read access to that repository. You will store this key in DRP as a parameter — treat it as a secret.
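
If you need to mint a dedicated deploy key for the init repo, a minimal sketch (the key path, comment, and repo URL are placeholders for your environment):

```bash
# Generate a dedicated ed25519 deploy key for the init repo
# (the path and comment are illustrative)
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N '' -C 'drp-hub-init' -f ~/.ssh/hub_init_key

# After registering ~/.ssh/hub_init_key.pub as a read-only deploy key
# on the repo, confirm the private key can actually reach it:
GIT_SSH_COMMAND='ssh -i ~/.ssh/hub_init_key -o IdentitiesOnly=yes' \
  git ls-remote git@github.com:your-org/hub-init.git HEAD \
  || echo 'access check failed: fix key or repo permissions before continuing'
```

A deploy key scoped to this one repository keeps the blast radius small if the parameter ever leaks.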

Firewall Addendum

In addition to the firewall requirements in the base tutorial, hub/spoke deployments require:

| Source | Destination | Port | Purpose |
| --- | --- | --- | --- |
| Hub cluster machines | Your git hosting provider (github.com, gitlab.com, self-hosted) | 22 | SSH clone of the ACM/OCM init repo |
| Spoke cluster machines | Hub cluster's API endpoint (api.<hub>.<domain>) | 6443 | oc apply of the ACM registration manifests |
| Spoke cluster machines | Hub cluster's ingress endpoint (*.apps.<hub>.<domain>) | 443 | ACM agent communication after registration |
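
You can spot-check these paths before deploying; a sketch, run from a machine on the relevant network (the hub API hostname is a placeholder for your environment):

```bash
# From a hub-network machine: SSH reachability to the git host
nc -vz -w 5 github.com 22

# From a spoke-network machine: hub API reachability
nc -vz -w 5 api.hub.k8s.local 6443

# From a spoke-network machine: hub ingress reachability (any *.apps name resolves
# through the wildcard; a connection on 443 is what matters here, not the HTTP code)
curl -sk --connect-timeout 5 https://test.apps.hub.k8s.local/ -o /dev/null
```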

Part 1: Deploy the Hub Cluster

The hub deploys exactly like a standard OpenShift cluster from the base tutorial, with two differences: the cluster-config uses the hub profile, and two additional parameters carry the init repo URL and SSH key.

Step 1: Store the init repo URL and SSH key on DRP

Storing these in DRP means any hub cluster you deploy will automatically pick them up without needing to paste the key every time.

```bash
# URL — SSH form
drpcli profiles set global param openshift/hub-cluster-init-repo to 'git@github.com:your-org/hub-init.git'

# SSH private key — pass stdin
drpcli profiles set global param openshift/hub-cluster-init-repo-ssh-key to - < ~/.ssh/hub_init_key
```

Parameter: openshift/hub-cluster-init-repo

SSH URL of the git repository containing the ACM/OCM init playbook. Must be SSH form — https:// URLs are not supported because the hub init task uses SSH key auth.

Parameter: openshift/hub-cluster-init-repo-ssh-key

SSH private key used to clone the init repo. The key is written to /root/.ssh/hub_init_key on the hub cluster's tooling machine for the duration of the init task. Treat this parameter as a secret.
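
Both values can be read back to confirm they landed; the key is sensitive, so only inspect its first line rather than echoing the whole thing:

```bash
drpcli profiles get global param openshift/hub-cluster-init-repo

# Expect the opening line of a private key block, not the full key on screen
drpcli profiles get global param openshift/hub-cluster-init-repo-ssh-key | head -n 1
```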

Step 2: Deploy the hub

Follow Steps 6–11 of the base tutorial to identify machines, apply hardware profiles, set network data, assign roles, create the pool, and gather network information. These steps are identical for the hub.

Then, create the cluster with the hub pipeline profile (differs from the base tutorial — use universal-application-openshift-hub-cluster instead of universal-application-openshift-cluster):

```bash
HUB_NAME=hub
HUB_DOMAIN=k8s.local
MACHINE_CIDR=<your-machine-cidr>
OS_VERSION=${OS_VERSION:-4.21.1}

cat > hub-config.yaml <<EOF
---
Name: $HUB_NAME
Profiles:
  - universal-application-openshift-hub-cluster
  - openshift-cluster-${OS_VERSION}
Workflow: universal-start
Meta:
  BaseContext: openshift-client-runner
Params:
  broker/name: pool-broker
  broker-pool/pool: $HUB_NAME
  openshift/cluster-domain: $HUB_DOMAIN
  openshift/network/machineNetwork:
    - cidr: $MACHINE_CIDR
  openshift/network/serviceNetwork:
    - 172.30.0.0/16
  openshift/network/clusterNetwork:
    - hostPrefix: 23
      cidr: 10.128.0.0/14
  openshift/enable-dns-zone: false
  openshift/enable-internal-lb: false
EOF

drpcli clusters create - < hub-config.yaml
```

Profile: universal-application-openshift-hub-cluster

Inherits the entire base cluster pipeline and adds the openshift-hub-cluster-init task to the pre-flexiflow list. Everything else — agent-based install, network config, role assignment — works identically to the base profile.

Step 3: Monitor the hub deployment

Watch the cluster pipeline:

```bash
drpcli clusters show Name:$HUB_NAME | \
  jq '{Stage: .Stage, CurrentTask: .CurrentTask, JobState: .JobState}'
```

The hub adds one extra task on top of the base flow:

  • openshift-hub-cluster-init — runs after cluster install completes. Writes the SSH key to /root/.ssh/hub_init_key, clones the init repo, and runs ansible-playbook from the checked-out tree.

Common failure modes for the init task:

| Failure | Check |
| --- | --- |
| SSH key rejected by git host | Key is correct and has read access to the repo (drpcli profiles get global param openshift/hub-cluster-init-repo-ssh-key returns the expected key) |
| Repo URL unreachable | Hub machine can resolve DNS and reach port 22 on the git host (see firewall addendum) |
| ansible-playbook fails | drpcli jobs list Machine=<hub-machine> Task=openshift-hub-cluster-init and inspect the log for playbook output |

The playbook itself is customer-specific. Its failures are outside DRP scope — consult the playbook's own documentation or your Red Hat services contact.
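
To pull the init task's job log, where the ansible-playbook output lands, a sketch using drpcli's jobs commands (<hub-machine> is a placeholder, and the Uuid field name follows DRP's job schema):

```bash
# Find the most recent hub-init job on the hub's tooling machine
JOB=$(drpcli jobs list Machine=<hub-machine> Task=openshift-hub-cluster-init \
  | jq -r '.[-1].Uuid')

# Dump its log, including the playbook output
drpcli jobs log "$JOB"
```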

Step 4: Verify ACM is installed

After the pipeline finishes, verify that the ACM operator is running on the hub:

```bash
export KUBECONFIG=~/.kube/${HUB_NAME}-config
oc get mch -A
oc get csv -n open-cluster-management | grep -iE 'acm|advanced-cluster-management'
```

The MultiClusterHub CR (mch) should exist, and the ACM ClusterServiceVersion should be Succeeded. If not, re-run the init task after fixing your playbook or inspect the hub-init job log.
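
If the operator is still settling, you can poll until the MultiClusterHub reports Running; a sketch, assuming ACM's default open-cluster-management namespace:

```bash
export KUBECONFIG=~/.kube/${HUB_NAME}-config
until [ "$(oc get mch -n open-cluster-management \
    -o jsonpath='{.items[0].status.phase}' 2>/dev/null)" = "Running" ]; do
  echo 'waiting for MultiClusterHub to reach Running...'
  sleep 30
done
```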


Part 2: Capture the Hub Kubeconfig

Each spoke needs the hub's kubeconfig to register itself. Retrieve it from DRP:

```bash
HUB_KUBECONFIG=$(drpcli clusters get Name:$HUB_NAME param openshift/kubeconfig --aggregate)
```

The kubeconfig is an embedded YAML/JSON value. In Part 3, you will inline it into the spoke's cluster-config. You do not need to save it to a file unless you want to use it locally with oc.

Parameter: openshift/hub-cluster-kubeconfig

This is the spoke-side parameter that holds the hub's kubeconfig. The spoke's join task writes it to /root/hub-cluster-kubeconfig only for the duration of the registration task, then removes it.
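
If you do want the hub kubeconfig locally, write it to a file and sanity-check it with oc; a sketch, assuming drpcli emits the parameter as plain multi-line text (the same assumption the Part 3 inlining relies on):

```bash
drpcli clusters get Name:$HUB_NAME param openshift/kubeconfig --aggregate \
  > ~/.kube/${HUB_NAME}-config

# Sanity check: the API should answer and list the hub's nodes
KUBECONFIG=~/.kube/${HUB_NAME}-config oc get nodes
```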


Part 3: Deploy a Spoke Cluster

Spokes deploy exactly like standard OpenShift clusters, with two differences: the cluster-config uses the spoke profile, and it carries the hub kubeconfig as an extra parameter.

Step 1: Standard pre-flight

Follow Steps 6–11 of the base tutorial for the spoke's machines — identify machines, hardware profiles, network data, assign roles, create a pool, gather network information. Identical to the base tutorial; nothing spoke-specific yet.

Step 2: Create the spoke cluster with the hub kubeconfig inlined

```bash
SPOKE_NAME=spoke-a
SPOKE_DOMAIN=k8s.local
MACHINE_CIDR=<spoke-machine-cidr>
OS_VERSION=${OS_VERSION:-4.21.1}

cat > spoke-config.yaml <<EOF
---
Name: $SPOKE_NAME
Profiles:
  - universal-application-openshift-spoke-cluster
  - openshift-cluster-${OS_VERSION}
Workflow: universal-start
Meta:
  BaseContext: openshift-client-runner
Params:
  broker/name: pool-broker
  broker-pool/pool: $SPOKE_NAME
  openshift/cluster-domain: $SPOKE_DOMAIN
  openshift/network/machineNetwork:
    - cidr: $MACHINE_CIDR
  openshift/enable-dns-zone: false
  openshift/enable-internal-lb: false
  openshift/hub-cluster-kubeconfig: |
$(drpcli clusters get Name:$HUB_NAME param openshift/kubeconfig --aggregate | sed 's/^/    /')
EOF

drpcli clusters create - < spoke-config.yaml
```

The heredoc uses sed 's/^/    /' (four spaces) to indent the hub kubeconfig so it becomes a valid YAML block-literal value under openshift/hub-cluster-kubeconfig. Verify with drpcli clusters get Name:$SPOKE_NAME param openshift/hub-cluster-kubeconfig before proceeding.
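
The indentation step in isolation, for clarity: every line of the kubeconfig gains four leading spaces, enough to nest under the block-literal key in the heredoc:

```bash
printf 'apiVersion: v1\nkind: Config\n' | sed 's/^/    /'
# prints:
#     apiVersion: v1
#     kind: Config
```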

Profile: universal-application-openshift-spoke-cluster

Inherits the entire base cluster pipeline and adds the openshift-spoke-cluster-join-hub task to the pre-flexiflow list.

Step 3: Monitor the spoke deployment

The spoke pipeline adds one extra task:

  • openshift-spoke-cluster-join-hub — after install completes, writes the hub kubeconfig to disk, renders five ACM manifests into register-cluster/, runs oc apply -k ./register-cluster/, then removes the hub kubeconfig.

The five manifests are:

| Manifest | Role |
| --- | --- |
| namespace.yaml | Creates a namespace on the hub for this spoke |
| managedcluster.yaml | The ManagedCluster CR |
| klusterletaddonconfig.yaml | ACM agent configuration |
| auto-import-secret.yaml | Secret containing the spoke's kubeconfig (base64-JSON) so the hub can auto-import |
| kustomization.yaml | Kustomize entry point |
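
For orientation, managedcluster.yaml follows the standard ACM shape; this is a sketch of that CR, not necessarily byte-for-byte what the task renders:

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: spoke-a            # the spoke's cluster name
  labels:
    cloud: auto-detect
    vendor: auto-detect
spec:
  hubAcceptsClient: true   # hub-side approval for the join
```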

Step 4: Verify registration from the hub

```bash
export KUBECONFIG=~/.kube/${HUB_NAME}-config

# The spoke should appear within a few minutes of the join task finishing
oc get managedclusters

# Check the detailed status
oc get managedcluster $SPOKE_NAME \
  -o jsonpath='{.status.conditions[?(@.type=="ManagedClusterConditionAvailable")].status}'
```

Expected: the Available condition reports True. If it reports False or the ManagedCluster does not appear at all, check:

  • Spoke pipeline actually ran the join task: drpcli jobs list Machine=<spoke-machine> Task=openshift-spoke-cluster-join-hub
  • Hub API reachable from the spoke: see the firewall addendum
  • Hub kubeconfig was written correctly: drpcli clusters get Name:$SPOKE_NAME param openshift/hub-cluster-kubeconfig returns the expected YAML
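
Once those check out, the import itself can take a few minutes; a polling sketch:

```bash
export KUBECONFIG=~/.kube/${HUB_NAME}-config
until [ "$(oc get managedcluster $SPOKE_NAME \
    -o jsonpath='{.status.conditions[?(@.type=="ManagedClusterConditionAvailable")].status}' \
    2>/dev/null)" = "True" ]; do
  echo "waiting for $SPOKE_NAME to become Available..."
  sleep 30
done
```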

Part 4: Adding More Spokes

Each additional spoke repeats Part 3 with:

  • A new SPOKE_NAME (for example spoke-b, spoke-c)
  • A new pool
  • Separate machines

The same hub kubeconfig is reused — no updates to the hub are needed between spoke deploys. You can deploy spokes in parallel: the hub's ACM auto-import accepts registrations independently.
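
Because registrations are independent, the Part 3 steps script naturally into a loop; a sketch, assuming each spoke already has a matching pool and a rendered per-spoke config file:

```bash
for SPOKE_NAME in spoke-a spoke-b spoke-c; do
  # Render ${SPOKE_NAME}-config.yaml exactly as in Part 3, Step 2,
  # substituting this spoke's name, domain, and machine CIDR, then:
  drpcli clusters create - < ${SPOKE_NAME}-config.yaml
done
```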

Removing a spoke

To detach a spoke from the hub: first destroy the spoke cluster with drpcli clusters destroy Name:<spoke-name>, then remove its ManagedCluster from the hub:

```bash
export KUBECONFIG=~/.kube/${HUB_NAME}-config
oc delete managedcluster <spoke-name>
```

DRP's destroy pipeline does not know about the hub — the ACM cleanup is a separate hub-side step.


Reference: Parameters

| Parameter | Scope | Description |
| --- | --- | --- |
| openshift/hub-cluster-init-repo | Hub cluster | SSH URL of the ACM/OCM init playbook git repo |
| openshift/hub-cluster-init-repo-ssh-key | Hub cluster | SSH private key for the init repo (sensitive) |
| openshift/hub-cluster-kubeconfig | Spoke cluster | Hub kubeconfig used for registration |

Reference: Pipeline Profiles

| Profile | Description |
| --- | --- |
| universal-application-openshift-hub-cluster | Base OpenShift cluster pipeline + hub init task |
| universal-application-openshift-spoke-cluster | Base OpenShift cluster pipeline + spoke join task |

Reference: Tasks

| Task | Runs on | Description |
| --- | --- | --- |
| openshift-hub-cluster-init | Hub | Clones the init repo and runs ansible-playbook to install ACM/OCM |
| openshift-spoke-cluster-join-hub | Spoke | Writes the hub kubeconfig, renders five ACM manifests, applies via kustomize, removes kubeconfig |