OpenShift Content Bundle

This content pack provides comprehensive tooling and automation for deploying and managing OpenShift clusters through Digital Rebar Platform (DRP). It handles the complete lifecycle of OpenShift clusters including installation, configuration, node management, and advanced features like OpenShift Virtualization (OCPV).

For Advanced Cluster Management (ACM), see the OpenShift Advanced Cluster Management content bundle.

Design Philosophy

The content bundle is designed around several key principles:

  1. Pipeline-Driven Deployment: The main cluster deployment is handled through a specialized profile (pipeline) that orchestrates the entire process. This ensures consistency and reduces human error.

  2. Task-Based Management: Individual administrative tasks are packaged as blueprints, allowing targeted operations to manage the cluster.

  3. Flexible Infrastructure: Support for both DRP-managed and external DNS, disconnected installations, and various infrastructure configurations.

  4. Automated Coordination: Tasks like node approval and cluster joining are automatically synchronized to ensure proper cluster formation.

Architecture

The content bundle uses agent-based installation, an installer-driven approach similar to Installer Provisioned Infrastructure (IPI). This method uses the OpenShift installer to generate agent boot images that configure nodes and join them to the cluster automatically.

Node Types

The content bundle supports two distinct node types:

  1. Control Plane Nodes
    • Manage the cluster's core services (API server, scheduler, etcd)
    • Minimum 4 vCPUs, 16GB RAM, 100GB disk per node
    • Exactly three nodes required for production
    • Must have identical hardware specifications

  2. Worker Nodes
    • Run application workloads and containers
    • Minimum 2 vCPUs, 8GB RAM, 100GB disk
    • Scalable based on workload demands
    • Can have varying hardware specifications

Network Architecture

The cluster uses three distinct network segments that MUST NOT overlap:

  1. Machine Network (Default: 172.21.0.0/20)
    • Used for node IP addresses
    • Must be routable within the infrastructure
    • Hosts API endpoints and load balancers
    • The API and Ingress Virtual IPs (VIPs) must come from this range

  2. Service Network (Default: 172.30.0.0/16)
    • Used for Kubernetes services
    • Internal cluster communications
    • Not routable outside the cluster

  3. Cluster Network (Default: 10.128.0.0/14)
    • Pod networking
    • Configurable host prefix (default: /23, 512 addresses per node)
    • Internal container communication

Prerequisites

Infrastructure Requirements

  • DNS configuration (two options):
    • DRP-managed DNS (default): DRP automatically manages the required DNS records
    • External DNS: DNS records must be configured manually, as detailed in the DNS Configuration section
  • Network connectivity between all nodes
  • Internet access or a configured disconnected registry
  • Valid Red Hat OpenShift subscription
  • Sufficient network capacity for cluster traffic
  • dr-provision running with a static IP that is reachable by all nodes (optionally configured by adding --static-ip 1.2.3.4 to /etc/systemd/system/dr-provision.service)
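One way to apply the --static-ip option without editing the packaged unit file directly is a systemd drop-in. This is only a sketch: the binary path /usr/local/bin/dr-provision and the example address are assumptions for illustration.

```bash
# Sketch only: the dr-provision binary path and IP address are assumptions.
write_static_ip_dropin() {
  local ip="$1"
  sudo mkdir -p /etc/systemd/system/dr-provision.service.d
  # Reset ExecStart, then restate it with the --static-ip flag appended.
  printf '[Service]\nExecStart=\nExecStart=/usr/local/bin/dr-provision --static-ip %s\n' "$ip" |
    sudo tee /etc/systemd/system/dr-provision.service.d/static-ip.conf >/dev/null
  sudo systemctl daemon-reload
  sudo systemctl restart dr-provision
}
# Usage: write_static_ip_dropin 1.2.3.4
```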

Contexts/Containers

Digital Rebar uses contexts (containers) to provide the necessary tools for OpenShift deployment and management. The following context is required and the cluster will not deploy without it:

  • openshift-client-runner - a Fedora-based container with the OpenShift CLI tools (oc, kubectl, etc.) for cluster operations.

There are two ways to download and install the required context:

  • In the RackN Portal, from the Contexts view, using the openshift-client-runner row's download button, or by following the above link and clicking the download button toward the bottom of the editor.
  • When running the rebootstrap-drp self-runner blueprint, by adding the bootstrap-contexts and bootstrap-openshift-client-runner profiles to the self-runner machine.

Required Parameters

The following parameters must be set for the cluster deployment:

  • openshift/pull-secret: Red Hat registry authentication (obtain from Red Hat OpenShift Cluster Manager)
  • openshift/cluster-domain: Base domain for cluster DNS
  • broker/name: Resource broker name (typically "pool-broker" for pool-based deployments)

The following parameter must be set on each machine:

  • openshift/role: Node role assignment (controlplane or worker)
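As a sketch, the cluster-wide parameters can be staged in a profile and the role set per machine with drpcli. The profile name demo-cluster, machine name cp1, domain, and pull-secret file below are illustrative assumptions:

```bash
# Sketch: stage cluster-wide parameters in a profile and set the per-machine
# role. Names (demo-cluster, cp1, os.example.com) are examples only.
stage_required_params() {
  drpcli profiles create '{"Name": "demo-cluster"}'
  drpcli profiles set demo-cluster param openshift/cluster-domain to os.example.com
  drpcli profiles set demo-cluster param broker/name to pool-broker
  # Pull secret obtained from the Red Hat OpenShift Cluster Manager
  drpcli profiles set demo-cluster param openshift/pull-secret to "$(cat pull-secret.json)"
  # Per-machine role assignment
  drpcli machines set Name:cp1 param openshift/role to controlplane
}
# Usage: stage_required_params   # requires DRP credentials in the environment
```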

Optional Parameters

The following parameters can be set to customize the cluster deployment:

  • openshift/api-vip: VIP for API access. Must come from Machine Network
  • openshift/ingress-vip: VIP for Ingress access. Must come from Machine Network
  • openshift/network/machineNetwork: Node IP address ranges
  • openshift/network/serviceNetwork: Service IP ranges
  • openshift/network/clusterNetwork: Pod network configuration
  • openshift/external-registry: Disconnected registry configuration

Required Files

The following files must be accessible to DRP. These are version-specific and can be automatically downloaded using the generate_openshift_artifacts.sh script (see Custom Bundles).

Note

The versions shown below are examples. Replace with the appropriate version for your deployment.

  • OpenShift Installer:
  • Source URL: https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/stable-<VERSION>/
  • File: openshift-install-linux.tar.gz or version-specific openshift-install-linux-<VERSION>.tar.gz
  • DRP Location: /files/redhat/openshift/openshift-install-linux-<VERSION>.tar.gz
  • Param: openshift/installer-url

  • OpenShift Client Tools:

  • Source URL: https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/stable-<VERSION>/
  • File: openshift-client-linux.tar.gz or version-specific openshift-client-linux-<VERSION>.tar.gz (includes oc and kubectl)
  • DRP Location: /files/redhat/openshift/oc-<VERSION>-linux.tar.gz
  • Param: openshift/oc-url

  • OpenShift Mirror Tool (for disconnected installations):

  • Source URL: https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/stable-<VERSION>/
  • File: oc-mirror.tar.gz
  • DRP Location: /files/redhat/openshift/oc-mirror.rhel9.tar.gz
  • Param: openshift/oc-mirror-url

ISOs

  • Red Hat CoreOS (RHCOS):
  • Source URL: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/
  • File: rhcos-<VERSION>-x86_64-live.x86_64.iso
  • DRP Location: ISOs
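These files are normally staged by the generate_openshift_artifacts.sh script (see Custom Bundles), but manual staging with drpcli looks roughly like the following sketch. Version 4.19.7 and the file names follow the examples above:

```bash
# Sketch: fetch the installer and client archives for one version and stage
# them at the DRP paths listed above. The version is an example only.
stage_openshift_artifacts() {
  local v="4.19.7"
  local mirror="https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/${v}"
  curl -LO "${mirror}/openshift-install-linux-${v}.tar.gz"
  curl -LO "${mirror}/openshift-client-linux-${v}.tar.gz"
  drpcli files upload "openshift-install-linux-${v}.tar.gz" as "redhat/openshift/openshift-install-linux-${v}.tar.gz"
  drpcli files upload "openshift-client-linux-${v}.tar.gz" as "redhat/openshift/oc-${v}-linux.tar.gz"
}
# Usage: stage_openshift_artifacts   # requires DRP credentials and network access
```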

Version Customization

The bootstrap-openshift-contents bootstrap profile can be applied to a self-runner machine, along with the bootstrap-versions param, to specify which versions of OpenShift to download artifacts for and install. This bootstrap operation automatically runs a script similar to the Custom Bundles script below.

Proxies can be used by setting the standard http_proxy/https_proxy/no_proxy shell variables.
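For example, to run the download through a proxy (the proxy address and no_proxy list are placeholders):

```bash
# Route the artifact downloads through a proxy; the address is a placeholder.
export http_proxy=http://proxy.example.com:3128
export https_proxy=$http_proxy
export no_proxy=localhost,127.0.0.1,.example.com
# Then run the bootstrap or custom-bundle script as usual, e.g.:
# ./generate_openshift_artifacts.sh --version latest-4.19
```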

Custom Bundles

The generate_openshift_artifacts.sh tool in the repo generates a content bundle containing a cluster profile for a desired version of OpenShift. The install.sh script installs the files to the DRP server.

Bash
OS_VERSION=latest-4.19
./generate_openshift_artifacts.sh --version $OS_VERSION
# This assumes appropriate DRP credentials are already in place.
# Use the actual version discovered; the command output includes the full path.
oc_content/4.19.7/install.sh

To uninstall, use the uninstall.sh script in the same directory.

The openshift-cluster-<version> profile can be added during cluster creation to specify which versions of OpenShift and CoreOS to install.

Airgap

All required files can be downloaded using the --download option on the generate_openshift_artifacts.sh tool.

This tool also provides the DRPCLI file and iso commands needed to upload the required files.

Bash
OS_VERSION=latest-4.19
./generate_openshift_artifacts.sh --version $OS_VERSION --download
# This assumes appropriate DRP credentials are already in place.
# Use the actual version discovered; the command output includes the full path.
oc_content/4.19.7/install.sh

Note

No bootstrap profile is created when the --download option is used.

Note

This assumes that the DRP endpoint can be reached from the download system.

Airgap When the Downloading System Cannot Access DRP

When the --download flag is added to generate_openshift_artifacts.sh, an upload bundle is also created. The bundle will be in oc_content/bundles by default. The bundle name will include the version of the openshift tools, e.g. openshift-upload-bundle-4.19.9.tgz.

Copy the bundle for each version you wish to deploy to a machine that can access the API port of the DRP endpoint. Then, with credentials for the DRP endpoint set up as appropriate for your environment, run:

Bash
OS_VERSION=4.19.9
drpcli files upload openshift-upload-bundle-${OS_VERSION}.tgz --explode
drpcli files download redhat/openshift/openshift-cluster-${OS_VERSION}.yaml as openshift-cluster-${OS_VERSION}.yaml
drpcli contents upload openshift-cluster-${OS_VERSION}.yaml

Note

Change the OS_VERSION as appropriate.

Note

Setup credentials through environment variables, .drpcli file, or command line flags.

Custom Bundles Script from DRP Server

The custom bundles script can be retrieved from the DRP server after the openshift content pack is loaded into the server.

Bash
drpcli machines create fake-machine
drpcli templates render generate-openshift-artifacts.sh.tmpl Name:fake-machine > generate_openshift_artifacts.sh
drpcli machines destroy Name:fake-machine
chmod +x generate_openshift_artifacts.sh

The temporary fake machine allows for rendering the file from the template. Follow the steps above for usage.

Deployment Process

The deployment is orchestrated by the universal-application-openshift-cluster pipeline, which is implemented as a specialized DRP profile. The process can be initiated through either the DRP web interface or CLI.

Prerequisites

  1. Create a pool for the cluster. The name can be anything, but matching the cluster name is useful.
  2. Move machines into the pool.
  3. For each machine:
    • Add the openshift/role parameter to indicate the role of this machine. Valid values are: controlplane, worker. OR assign the openshift-controlplane or openshift-worker profiles to the machine to set the parameter automatically.
    • Add the networking configuration needed for this node. Bonding, VLANs, static IPs, ...
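The pool and role steps above can be sketched with drpcli as follows. The pool name demo and the machine names are examples, and the pool-creation JSON shape is an assumption; networking configuration is environment-specific and omitted here.

```bash
# Sketch: create a pool, assign roles via profiles, and add machines to the
# pool. Pool name "demo" and machine names are examples only.
prepare_pool() {
  drpcli pools create '{"Id": "demo"}'
  local m
  for m in cp1 cp2 cp3; do
    drpcli machines addprofile "Name:${m}" openshift-controlplane
    drpcli pools manage add demo "Name=${m}"
  done
  for m in worker1 worker2; do
    drpcli machines addprofile "Name:${m}" openshift-worker
    drpcli pools manage add demo "Name=${m}"
  done
}
# Usage: prepare_pool   # requires DRP credentials in the environment
```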

Web Interface Deployment

  1. Navigate to the cluster wizard
  2. Click "Add +" to create a new cluster
  3. Select "openshift-cluster" as the Cluster Pipeline
  4. Add the "openshift-cluster-<Version>" profile
  5. Add the "openshift-config-container-registry" profile if there is an external registry
  6. Select "openshift-client-runner" as the context
  7. Select appropriate broker (typically "pool-broker")
  8. Paste your pull secret
  9. Fill out any other optional or required parameters
  10. Click "Save"

CLI Deployment

The following assumes your pull-secret is stored in the global profile.

Add the machines to the pool (named demo in this example) and set the openshift/role parameter on each machine.

Bash
# Using machine UUID:
drpcli machines set $id param openshift/role to controlplane
drpcli pools manage add demo "Uuid=$id"

# OR using machine Name:
drpcli machines set Name:$name param openshift/role to controlplane
drpcli pools manage add demo "Name=$name"

Then create the cluster configuration file:

Bash
# Create cluster configuration
cat > cluster-config.yaml <<EOF
---
Name: demo
Profiles:
  - universal-application-openshift-cluster
  - openshift-cluster-4.19.7
Workflow: universal-start
Meta:
  BaseContext: openshift-client-runner
Params:
  broker/name: pool-broker
  broker-pool/pool: demo
  openshift/api-vip: 10.111.1.55
  openshift/cluster-domain: os.eng.rackn.dev
  openshift/ingress-vip: 10.111.1.56
  openshift/network/clusterNetwork:
    - hostPrefix: 23
      cidr: 172.21.0.0/16
  openshift/network/serviceNetwork:
    - 172.22.0.0/16
  openshift/network/machineNetwork:
    - cidr: 10.111.0.0/20
EOF

# Create the cluster
drpcli clusters create - < cluster-config.yaml

Note

In this example, the cluster name demo is used for the pool containing the tagged machines.

For external registries, there are additional steps required.

Deployment Stages

The deployment process consists of three main phases:

  1. Pre-provisioning Tasks:

    YAML
    universal/cluster-provision-pre-flexiflow:
      - openshift-cluster-prep         # Generate cluster configuration
    

  2. Resource Provisioning:
    • Machines are gathered from the pool named after the cluster, or from a pool named by the broker-pool/pool parameter.
    • Machines start the hardware configuration process.
    • Nodes wait at the approval stage for orchestrated deployment.

  3. Post-provisioning Tasks:

    YAML
    universal/cluster-provision-post-flexiflow:
      - openshift-cluster-update-zone              # Update DNS zone
      - openshift-cluster-join                     # Build agent images and start nodes
      - openshift-cluster-wait-for-bootstrap-complete  # Wait for bootstrap
      - openshift-cluster-transition-to-installation   # Transition nodes
      - openshift-cluster-wait-for-install-complete    # Wait for install
      - openshift-cluster-nmstate-operator             # Install NMState operator
    

The pipeline ensures these phases execute in the correct order and handles all necessary synchronization between nodes.

Advanced Network Configuration

To inject machine-specific networking information, update the machines' parameters to include the additional configuration.

Testing OpenShift

Deploy Test Application

Bash
# Create a new project
oc new-project hello-openshift

# Create the deployment
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname

# Expose the service
oc expose deployment hello-node --port=9376
oc expose service hello-node

# Test the deployment
curl hello-node-hello-openshift.apps.demo.k8s.local

# Scale the deployment
oc scale deployment hello-node --replicas=3

# Cleanup (removes all resources in the project)
oc delete project hello-openshift

Advanced Features

Disconnected Installations

Support for air-gapped environments through:

  • External registry configuration
  • Image mirroring capabilities
  • Certificate management
  • Custom catalog sources

Load Balancer Configuration

With agent-based installation, OpenShift manages its own load balancing through the API and Ingress VIPs. For production deployments using external load balancers, the following ports must be configured:

  • API server (port 6443)
  • Machine config server (port 22623)
  • HTTP ingress (port 80)
  • HTTPS ingress (port 443)

The load balancer configuration works in conjunction with the DNS configuration to provide access to cluster services.
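A quick way to confirm an external load balancer is passing these ports is a simple TCP reachability check. The host name in the usage line is this document's demo API endpoint and is only an example:

```bash
# Report open/closed status for a list of TCP ports on a host.
check_lb_ports() {
  local host="$1"; shift
  local port
  for port in "$@"; do
    # /dev/tcp is a bash built-in path for TCP connections
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
      echo "${host}:${port} open"
    else
      echo "${host}:${port} closed"
    fi
  done
}
# Usage: check_lb_ports api.demo.k8s.local 6443 22623 80 443
```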

Administrative Tasks

The content bundle includes several blueprints for common administrative tasks:

  • openshift-cluster-status: Check cluster health and components
  • openshift-cluster-update-zone: Update DNS and load balancer configuration
  • openshift-cluster-add-nodes: Add or remove nodes from the cluster
    • Adding nodes: Set the openshift/role parameter on machines in the pool, then run the blueprint
    • Removing nodes: Remove the openshift/role parameter from machines, then run the blueprint to cordon, drain, and release them
    • Note: Removing control plane nodes is not officially supported and may cause cluster instability
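Blueprints are generally executed against a machine as work orders. A hedged sketch follows; the work-order field names and the target reference are assumptions, so check your DRP version's documentation for the exact invocation:

```bash
# Sketch only: the work-order JSON field names are assumptions for
# illustration and may differ across DRP versions.
run_status_blueprint() {
  drpcli workorders create '{"Machine": "Name:demo", "Blueprint": "openshift-cluster-status"}'
}
# Usage: run_status_blueprint   # requires DRP credentials in the environment
```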

Troubleshooting

Common Commands

Bash
# Check node status
oc get nodes

# View cluster operators
oc get clusteroperators

# Monitor pod status
oc get pods --all-namespaces

# Check events
oc get events --sort-by='.metadata.creationTimestamp'

# View cluster version
oc get clusterversion

# List available upgrade versions
oc adm upgrade

# Initiate upgrade
oc adm upgrade --to=<version-number>
# Example: oc adm upgrade --to=4.15.36

Resource Cleanup

Dedicated tasks for cleanup operations:

  • openshift-cluster-cleanup: General cluster cleanup

DNS Configuration

When using external DNS, the following records must be configured (example for cluster "demo.k8s.local"). All records should use TTL of 0.

Name          Type  Value
ns1           A     <IP address>
smtp          A     <IP address>
helper        A     <IP address>
helper.demo   A     <IP address>
api.demo      A     <API VIP>
api-int.demo  A     <API VIP>
*.apps.demo   A     <Ingress VIP>
cp1.demo      A     <node IP>
cp2.demo      A     <node IP>
cp3.demo      A     <node IP>
worker1.demo  A     <node IP>
worker2.demo  A     <node IP>
worker3.demo  A     <node IP>
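Once the records are in place, they can be spot-checked with dig. The record names and domain below follow the demo example:

```bash
# Print the first resolved address for a few key records in the cluster zone.
check_dns_records() {
  local domain="$1"
  local name
  for name in api.demo api-int.demo test.apps.demo cp1.demo; do
    printf '%s.%s -> %s\n' "${name}" "${domain}" "$(dig +short "${name}.${domain}" | head -n1)"
  done
}
# Usage: check_dns_records k8s.local
```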

Local DNS Configuration

When using DRP as the DNS host, configure the client to resolve the Kubernetes domain through DRP:

PowerShell
Add-DnsClientNrptRule -Namespace ".k8s.local" -NameServers "192.168.100.1"
Clear-DnsClientCache

To remove the rule:

PowerShell
# Get the rule ID first
$rules = Get-DnsClientNrptRule | Where-Object {$_.Namespace -eq ".k8s.local"}
# Remove the rule using its ID
Remove-DnsClientNrptRule -Name $rules[0].Name

Support

For issues or questions:

  • Check the Digital Rebar documentation
  • Review the OpenShift documentation
  • Review the Troubleshooting section
  • Contact RackN support

License

RackN License - See documentation for details.
