OpenShift Content Bundle¶
This content pack provides comprehensive tooling and automation for deploying and managing OpenShift clusters through Digital Rebar Platform (DRP). It handles the complete lifecycle of OpenShift clusters including installation, configuration, node management, and advanced features like OpenShift Virtualization (OCPV).
For Advanced Cluster Management (ACM), see the OpenShift Advanced Cluster Management content bundle.
Design Philosophy¶
The content bundle is designed around several key principles:
- Pipeline-Driven Deployment: The main cluster deployment is handled through a specialized profile (pipeline) that orchestrates the entire process. This ensures consistency and reduces human error.
- Task-Based Management: Individual administrative tasks are packaged as blueprints, allowing for targeted operations to manage the cluster.
- Flexible Infrastructure: Support for both DRP-managed and external DNS, disconnected installations, and various infrastructure configurations.
- Automated Coordination: Tasks like node approval and cluster joining are automatically synchronized to ensure proper cluster formation.
Architecture¶
The content bundle supports two modes of deployment:
- User Provisioned Infrastructure (UPI)
- Installer Provisioned Infrastructure (IPI)
Your needs and your interactions with Red Hat will dictate your choice.
Node Types¶
The content bundle supports four distinct node types:
- Bootstrap Node (UPI-Only)
  - Temporary node that initializes the cluster
  - Minimum 2 vCPUs, 8GB RAM, 100GB disk
  - Converts to worker node after cluster initialization
  - Provides initial control plane services
- Control Plane Nodes
  - Manage the cluster's core services (API server, scheduler, etcd)
  - Minimum 4 vCPUs, 16GB RAM, 100GB disk per node
  - Requires exactly three nodes for production
  - Must have identical hardware specifications
- Worker Nodes
  - Run application workloads and containers
  - Minimum 2 vCPUs, 8GB RAM, 100GB disk
  - Scalable based on workload demands
  - Can have varying hardware specifications
- Load Balancer Nodes (UPI-Only)
  - HAProxy-based traffic distribution
  - Minimum 2 vCPUs, 4GB RAM, 20GB disk
  - Multiple nodes recommended for HA
  - Handles API and application ingress
Network Architecture¶
The cluster uses three distinct network segments that MUST NOT overlap:
- Machine Network (Default: 172.21.0.0/20)
  - Used for node IP addresses
  - Must be routable within infrastructure
  - Hosts API endpoints and load balancers
  - For IPI, Virtual IPs (VIPs) must come from this network
- Service Network (Default: 172.30.0.0/16)
  - Used for Kubernetes services
  - Internal cluster communications
  - Not routable outside cluster
- Cluster Network (Default: 10.128.0.0/14)
  - Pod networking
  - Configurable host prefix (default: /23, which yields roughly 510 usable pod IPs per node)
  - Internal container communication
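These defaults can be overridden at cluster creation through the openshift/network/* parameters (the same parameters shown in the IPI CLI example later in this document); the values below simply restate the defaults as a sketch:
openshift/network/machineNetwork:
  - cidr: 172.21.0.0/20
openshift/network/serviceNetwork:
  - 172.30.0.0/16
openshift/network/clusterNetwork:
  - hostPrefix: 23
    cidr: 10.128.0.0/14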
Prerequisites¶
Infrastructure Requirements¶
- DNS configuration (two options; they are not mutually exclusive):
  - DRP-managed DNS (default): DRP automatically manages required DNS records
  - External DNS: Must manually configure DNS records as detailed in the DNS configuration section
- Network connectivity between all nodes
- Internet access or configured disconnected registry
- Valid Red Hat OpenShift subscription
- Sufficient network capacity for cluster traffic
Required Parameters¶
- openshift/pull-secret: Red Hat registry authentication (obtain from Red Hat OpenShift Cluster Manager)
- openshift/cluster-domain: Base domain for cluster DNS
UPI Required Parameters¶
- broker/name: Resource broker name (typically "pool-broker" for pool-based deployments)
IPI Required Parameters¶
- openshift/api-vip: VIP for API access. Must come from the Machine Network.
- openshift/ingress-vip: VIP for Ingress access. Must come from the Machine Network.
Optional Parameters¶
- openshift/external-registry: Disconnected registry configuration
UPI Optional Parameters¶
- openshift/workers/names: Worker node hostnames
- openshift/controlplanes/names: Control plane node hostnames
- openshift/bootstraps/names: Bootstrap node hostname
- openshift/load-balancers/names: Load balancer hostnames
IPI Optional Parameters¶
- openshift/workers/count: Number of workers to wait to be present before starting installation.
- openshift/controlplanes/count: Number of control planes to wait to be present before starting installation.
- openshift/resources/count: Number of resources to wait to be present before starting installation.
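These parameters are typically staged on the global profile (or on the cluster) before deployment; a minimal sketch with placeholder values:
# Placeholder values; the pull secret file comes from the Red Hat OpenShift Cluster Manager
drpcli profiles set global param openshift/cluster-domain to "os.example.com"
drpcli profiles set global param openshift/pull-secret to "$(cat pull-secret.json)"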
Required Files¶
The following files must be accessible to DRP:
- OpenShift Installer:
  - Download from: https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/stable-4.15/
  - File: openshift-install-linux.tar.gz or version-specific openshift-install-linux-4.15.46.tar.gz
  - Upload to DRP at: /files/redhat/openshift/openshift-install-linux-4.15.6.tar.gz
  - Param: openshift/installer-url
- OpenShift Client Tools:
  - Download from: https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/stable-4.15/
  - File: openshift-client-linux.tar.gz or version-specific openshift-client-linux-4.15.46.tar.gz (includes oc and kubectl)
  - Upload to DRP at: /files/redhat/openshift/oc-4.15.6-linux.tar.gz
  - Param: openshift/oc-url
- OpenShift Mirror Tool (for disconnected installations):
  - Download from: https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/stable-4.15/
  - File: oc-mirror.tar.gz
  - Upload to DRP at: /files/redhat/openshift/oc-mirror.rhel9.tar.gz
  - Param: openshift/oc-mirror-url
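For example, after downloading the installer tarball locally, it can be staged at the path above with:
drpcli files upload openshift-install-linux-4.15.6.tar.gz as redhat/openshift/openshift-install-linux-4.15.6.tar.gz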
ISOs¶
These files are required for the UPI-based install.
- Red Hat CoreOS (RHCOS)
  - Download from: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/
  - File: rhcos-4.15.23-x86_64-live.x86_64.iso
- Photon (UPI-Only)
  - Download from: https://github.com/vmware/photon/wiki/Downloading-Photon-OS
  - File: https://packages.vmware.com/photon/5.0/GA/iso/photon-5.0-dde71ec57.x86_64.iso
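Once downloaded, the ISOs can be uploaded to DRP, for example (assuming the RHCOS image is in the current directory):
drpcli isos upload rhcos-4.15.23-x86_64-live.x86_64.iso as rhcos-4.15.23-x86_64-live.x86_64.iso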
Containers¶
- Fedora container for oc-context
Version Customization¶
The bootstrap-openshift-contents bootstrap profile can be applied to a self-runner machine along with the bootstrap-versions param to specify which versions of OpenShift to download artifacts for and install. This bootstrap operation will automatically run a script similar to the custom bundles below.
To control whether to download IPI and/or UPI assets, the parameters set on the global profile or self-runner will drive what is provided.
- openshift/bootstrap-ipi - defaults to true
- openshift/bootstrap-upi - defaults to false
Proxies can be used by setting the standard http_proxy/https_proxy/no_proxy shell variables.
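As a sketch, applying the bootstrap profile to a self-runner (the machine name is a placeholder, and bootstrap-versions is assumed here to take a list of version strings):
# Placeholder machine name and version list
drpcli machines addprofile Name:self-runner bootstrap-openshift-contents
drpcli machines set Name:self-runner param bootstrap-versions to '["4.15.6"]'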
Custom Bundles¶
The generate_openshift_artifacts.sh tool in the repo generates a content bundle containing
bootstrap and cluster profiles for a desired version of OpenShift.
OS_VERSION=latest-4.19
./generate_openshift_artifacts.sh --version $OS_VERSION --ipi --upi
# this assumes you have appropriate credentials already in place
# You will need to use the actual version discovered. The output of the command has the full path.
oc_content/4.19.7/install.sh
Note
By default, execution will NOT include an installation profile. Remember to add --ipi, --upi, or both.
The bootstrap-openshift-<version> profile can be added to the self-runner to download the
needed artifacts when running the rebootstrap-drp blueprint or the
universal-bootstrap workflow.
The openshift-cluster-<version>-ipi or openshift-cluster-<version>-upi profile can be added
during cluster creation to specify which version of OpenShift and CoreOS to install.
Airgap¶
All required files can be downloaded using the --download option
on the generate_openshift_artifacts.sh tool.
This tool also provides the DRPCLI file and iso commands needed to upload the required files.
OS_VERSION=latest-4.19
./generate_openshift_artifacts.sh --version $OS_VERSION --ipi --upi --download -b
# this assumes you have appropriate credentials already in place
# You will need to use the actual version discovered. The output of the command has the full path.
oc_content/4.19.7/install.sh
Note
The install script is updated to install the ISOs for the UPI-based install. Excluding --upi
will not download the ISOs.
Note
There is no bootstrap profile when download is used.
Note
This assumes that the DRP endpoint can be reached from the download system.
Airgap When the Downloading System Cannot Access DRP¶
Note
This only works for IPI-based installation. ISOs will have to be transferred by hand.
When the -b flag is added to generate_openshift_artifacts.sh, an upload bundle is also created.
The bundle will be in oc_content/bundles by default. The bundle name will include the version
of the openshift tools, e.g. openshift-upload-bundle-4.19.9.tgz.
Copy the version or versions you wish to deploy to a machine that can access the API port of the DRP endpoint. After setting up credentials for the DRP endpoint as appropriate for your environment, run:
OS_VERSION=4.19.9
drpcli files upload openshift-upload-bundle-${OS_VERSION}.tgz --explode
drpcli files download redhat/openshift/openshift-cluster-${OS_VERSION}.yaml as openshift-cluster-${OS_VERSION}.yaml
drpcli contents upload openshift-cluster-${OS_VERSION}.yaml
Note
Change the OS_VERSION as appropriate.
Note
Setup credentials through environment variables, .drpcli file, or command line flags.
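For example, using environment variables (endpoint and credentials are placeholders):
export RS_ENDPOINT=https://drp.example.com:8092
export RS_KEY=username:password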
Custom Bundles Script from DRP Server¶
The custom bundles script can be retrieved from the DRP server after the openshift content pack is loaded into the server.
drpcli machines create fake-machine
drpcli templates render generate-openshift-artifacts.sh.tmpl Name:fake-machine > generate_openshift_artifacts.sh
drpcli machines destroy Name:fake-machine
chmod +x generate_openshift_artifacts.sh
The temporary fake machine allows for rendering the file from the template. Follow the steps above for usage.
Deployment Process¶
UPI Deployment Process¶
The deployment is orchestrated by the universal-application-openshift-cluster pipeline, which is implemented as a specialized DRP profile. The process can be initiated through either the DRP web interface or CLI.
Web Interface Deployment¶
- Navigate to the cluster wizard
- Click "Add +" to create a new cluster
- Select "openshift-cluster" as the Cluster Pipeline
- Add the "openshift-cluster-\<Version>-upi"
- Add the "openshift-config-container-registry" profile if there is an external registry. Fill in additional fields.
- Select "openshift-client-runner" as the context
- Select the appropriate broker (typically "pool-broker")
- Paste your pull secret
- Click "Save"
CLI Deployment¶
The following assumes your pull secret is stored in the global profile and applies to the UPI case.
# Create cluster configuration
cat > cluster-config.yaml <<EOF
---
Name: demo
Profiles:
- universal-application-openshift-cluster
- openshift-cluster-4.19.7-upi
Workflow: universal-start
Meta:
BaseContext: openshift-client-runner
Params:
broker/name: pool-broker
EOF
# Create the cluster
drpcli clusters create - < cluster-config.yaml
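Once created, deployment progress can be followed in the DRP UX or by inspecting the cluster object, for example:
# Show the cluster created above
drpcli clusters show demo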
For external registries, there are additional steps required.
Deployment Stages¶
The deployment process consists of three main phases:
- Pre-provisioning Tasks:
  universal/cluster-provision-pre-flexiflow:
  - openshift-cluster-tools                     # Install OpenShift CLI and required tools
  - openshift-cluster-external-registry-create  # Setup disconnected registry if configured
  - openshift-cluster-external-registry-update  # Mirror required images if using disconnected registry
  - openshift-cluster-prep                      # Generate cluster configuration and ignition files
- Resource Provisioning:
  - The resource broker (typically pool-broker) selects or creates the required machines
  - Machines are assigned appropriate roles (bootstrap, control plane, worker, load balancer)
  - Base operating system is installed and configured
  - Nodes wait at the approval stage for orchestrated deployment
- Post-provisioning Tasks:
The pipeline ensures these phases execute in the correct order and handles all necessary synchronization between nodes.
IPI Deployment Process¶
The deployment is orchestrated by the universal-application-openshift-cluster pipeline, which is implemented as a specialized DRP profile. The process can be initiated through either the DRP web interface or CLI.
Prerequisites¶
- Create a pool for the cluster. The name can be anything, but matching the cluster name is useful.
- Move machines into the pool.
- For each machine:
  - Add the openshift/role parameter to indicate the role of this machine. Valid values are: controlplane, worker, resource.
  - Add the networking configuration needed for this node: bonding, VLANs, static IPs, ...
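As a sketch, tagging a machine's role with drpcli (the machine name is a placeholder):
# Valid role values: controlplane, worker, resource
drpcli machines set Name:node01 param openshift/role to controlplane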
Web Interface Deployment¶
- Navigate to the cluster wizard
- Click "Add +" to create a new cluster
- Select "openshift-cluster" as the Cluster Pipeline
- Add the "openshift-cluster-\<Version>-ipi" profile
- Add the "openshift-config-container-registry" profile if there is an external registry
- Select "openshift-client-runner" as the context
- Select "pool-broker" as the broker; for the IPI-based install it is unused but must still be selected
- Paste your pull secret
- Click "Save"
CLI Deployment¶
The following assumes your pull-secret is stored in the global profile.
# Create cluster configuration
cat > cluster-config.yaml <<EOF
---
Name: demo
Profiles:
- universal-application-openshift-cluster
- openshift-cluster-4.19.7-ipi
Workflow: universal-start
Meta:
BaseContext: openshift-client-runner
Params:
broker/name: pool-broker
broker-pool/pool: demo
openshift/api-vip: 10.111.1.55
openshift/cluster-domain: os.eng.rackn.dev
openshift/ingress-vip: 10.111.1.56
openshift/network/clusterNetwork:
- hostPrefix: 23
cidr: 172.21.0.0/16
openshift/network/serviceNetwork:
- 172.22.0.0/16
openshift/network/machineNetwork:
- cidr: 10.111.0.0/20
EOF
# Create the cluster
drpcli clusters create - < cluster-config.yaml
Note
In this example, the cluster name demo is used for the pool containing the tagged machines.
For external registries, there are additional steps required.
Deployment Stages¶
The deployment process consists of three main phases:
- Pre-provisioning Tasks:
  universal/cluster-provision-pre-flexiflow:
  - openshift-cluster-tools                     # Install OpenShift CLI and required tools
  - openshift-cluster-external-registry-create  # Setup disconnected registry if configured
  - openshift-cluster-external-registry-update  # Mirror required images if using disconnected registry
  - openshift-cluster-prep                      # Generate cluster configuration and ignition files
- Resource Provisioning:
  - Using the pool named after the cluster, or a pool named by the broker-pool/pool parameter, machines are gathered.
  - Machines start the hardware configuration process.
  - Nodes wait at the approval stage for orchestrated deployment.
- Post-provisioning Tasks:
The pipeline ensures these phases execute in the correct order and handles all necessary synchronization between nodes.
Advanced Network Configuration¶
To inject machine-specific networking information (bonding, VLANs, static addressing), update each machine's parameters with the additional configuration, as sketched below.
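A minimal sketch, using a hypothetical parameter name for illustration (the actual networking parameters depend on your environment; consult the bundle's parameter list):
# Hypothetical param name; the JSON shape is illustrative only
drpcli machines set Name:node01 param openshift/node-networking to '{"interface": "bond0", "vlan": 111, "address": "10.111.1.21/20", "gateway": "10.111.0.1"}'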
Testing OpenShift¶
Deploy Test Application¶
# Create a new project
oc new-project hello-openshift
# Create the deployment
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname
# Expose the service
oc expose deployment hello-node --port=9376
oc expose service hello-node
# Test the deployment
curl hello-node-hello-openshift.apps.demo.k8s.local
# Scale the deployment
oc scale deployment hello-node --replicas=3
# Cleanup (removes all resources in the project)
oc delete project hello-openshift
Advanced Features¶
Disconnected Installations¶
Support for air-gapped environments through:
- External registry configuration
- Image mirroring capabilities
- Certificate management
- Custom catalog sources
Load Balancer Configuration¶
By default, the content bundle configures HAProxy for cluster load balancing. However, production deployments often use external load balancers. Regardless of the implementation, the following ports must be configured:
- API server (port 6443)
- Machine config server (port 22623)
- HTTP ingress (port 80)
- HTTPS ingress (port 443)
The load balancer configuration works in conjunction with the DNS configuration to provide access to cluster services.
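As an illustration only (not the bundle's generated configuration), a minimal HAProxy fragment for the API port might look like the following, using the control plane hostnames from the DNS example later in this document; repeat the pattern for ports 22623, 80, and 443:
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api

backend openshift-api
    mode tcp
    balance roundrobin
    server cp1 cp1.demo.k8s.local:6443 check
    server cp2 cp2.demo.k8s.local:6443 check
    server cp3 cp3.demo.k8s.local:6443 check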
Administrative Tasks¶
The content bundle includes several blueprints for common administrative tasks:
- openshift-cluster-status: Check cluster health and components
- openshift-cluster-dns-refresh: Update DNS and load balancer configuration
- openshift-cluster-remove-node: Safely remove nodes from the cluster
Troubleshooting¶
Common Commands¶
# Check node status
oc get nodes
# View cluster operators
oc get clusteroperators
# Monitor pod status
oc get pods --all-namespaces
# Check events
oc get events --sort-by='.metadata.creationTimestamp'
# View cluster version
oc get clusterversion
# List available upgrade versions
oc adm upgrade
# Initiate upgrade
oc adm upgrade --to=<version-number>
# Example: oc adm upgrade --to=4.15.36
Resource Cleanup¶
Dedicated tasks for cleanup operations:
- openshift-cluster-cleanup: General cluster cleanup
DNS Configuration¶
When using external DNS, the following records must be configured (example for cluster "demo.k8s.local"). All records should use a TTL of 0.
| Name | Type | Value |
|---|---|---|
| ns1 | A | \<IP address> |
| smtp | A | \<IP address> |
| helper | A | \<IP address> |
| helper.demo | A | \<IP address> |
| api.demo | A | \<IP address> |
| api-int.demo | A | \<IP address> |
| *.apps.demo | A | \<IP address> |
| cp1.demo | A | \<IP address> |
| cp2.demo | A | \<IP address> |
| cp3.demo | A | \<IP address> |
| worker1.demo | A | \<IP address> |
| worker2.demo | A | \<IP address> |
| worker3.demo | A | \<IP address> |
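Once the records are in place, they can be spot-checked from a machine on the Machine Network, for example:
# Verify the API and wildcard ingress records resolve
dig +short api.demo.k8s.local
dig +short test.apps.demo.k8s.local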
Local DNS Configuration¶
When using DRP as the DNS host, configure a client to use DRP as the DNS host for the Kubernetes domain. On a Windows client this can be done with an NRPT rule, as sketched below.
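A minimal sketch for adding the rule (the DRP server address is a placeholder):
# Point the .k8s.local namespace at DRP's DNS server (placeholder address)
Add-DnsClientNrptRule -Namespace ".k8s.local" -NameServers "10.111.0.10"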
To remove the rule:
# Get the rule ID first
$rules = Get-DnsClientNrptRule | Where-Object {$_.Namespace -eq ".k8s.local"}
# Remove the rule using its ID
Remove-DnsClientNrptRule -Name $rules[0].Name
Support¶
For issues or questions:
- Check the Digital Rebar documentation
- Review the OpenShift documentation
- Review the troubleshooting section
- Contact RackN support
License¶
RackN License - See documentation for details.