22.31. krib - Kubernetes (KRIB)¶
The following documentation is for Kubernetes (KRIB) (krib) content package at version v4.12.0-alpha00.78+gc037aaa40eb3ad853690ce178f9ab8a5bae4c436.
License: KRIB is APLv2
This document provides information on how to use the Digital Rebar KRIB content add-on. Use of this content enables the operator to install Kubernetes either in a Live Boot (immutable infrastructure pattern) mode or installed to the local hard disk.
KRIB uses the [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) cluster deployment methodology coupled with Digital Rebar enhancements to help proctor the cluster Master election and secrets management. With this content pack, you can install Kubernetes in a zero-touch manner.
KRIB also supports production, highly available (HA) deployments with multiple masters. To enable this configuration, we’ve chosen to manage the TLS certificates and etcd installation in the Workflow instead of using the kubeadm process.
This document assumes you have your Digital Rebar Provisioning endpoint fully configured, tested, and working. We assume that you are able to properly provision Machines in your environment as a base level requirement for use of the KRIB content add-on. See [Installing Krib] for step by step instructions.
- !!! note
documentation source: <https://gitlab.com/rackn/provision-content/blob/master/krib/._Documentation.meta>
## KRIB Video References
The following videos have been produced or presented by RackN related to the Digital Rebar KRIB solution.
[KRIB Zero Config Kubernetes Cluster channel](https://www.youtube-nocookie.com/embed/SYOHI8DfRMo&list=PLXPBeIrpXjfhKqmTvxI5-0CmgUh82dztr&index=1) on YouTube.
[KubeCon: Zero Configuration Pattern on Bare Metal](https://www.youtube-nocookie.com/embed/Psm9aOWzfWk) on YouTube - RackN presentation at 2017 KubeCon/Cloud NativeCon in Austin TX
## Online Requirements
KRIB uses community kubeadm for installation. That process relies on internet connectivity to download containers and other components.
## Immutable -vs- Local Install Mode
The two primary deployment patterns that the Digital Rebar KRIB content pack supports are:
- Live Boot (immutable infrastructure pattern)
  - [Making Server Deployment 10x Faster - the ROI on Immutable Infrastructure](https://www.rackn.com/2017/10/11/making-server-deployment-10x-faster-roi-immutable-infrastructure/)
  - [Go CI/CD and Immutable Infrastructure for Edge Computing Management](https://www.rackn.com/2017/09/15/go-cicd-immutable-infrastructure-edge-computing-management/)
- Local Install (standard install-to-disk pattern)
The Live Boot mode uses an in-memory Linux image based on the Digital Rebar Sledgehammer (CentOS based) image. After each reboot of the Machine, the node is reloaded with the in-memory live boot image. This enforces the concept of immutable infrastructure - every time a node is booted, deployed, or needs updating, simply reload the latest Live Boot image with appropriate fixes, patches, enhancements, etc.
The Local Install mode mimics the traditional “install-to-my-disk” method that most people are familiar with.
## KRIB Basics
KRIB is a Content Pack addition to Digital Rebar Provision. It uses the Digital Rebar Cluster pattern, which provides atomic guarantees. This allows Kubernetes master(s) to be dynamically elected, forcing all other nodes to wait until kubeadm on the elected master generates an installation token for the rest of the nodes. Once the Kubernetes master is bootstrapped, the Digital Rebar system facilitates the security token hand-off to the rest of the cluster so they can join without any operator intervention.
## Elected -vs- Specified Master
By default, the KRIB process will dynamically elect a Master for the Kubernetes cluster. This master simply wins the race-to-master election process and the rest of the cluster will coalesce around the elected master.
If you wish to designate specific machines as the masters, you can do so by setting a Param in the cluster Profile. Set the krib/cluster-masters Param to a JSON structure with the Name, UUID and IP of each machine that will become a master. You may add this Param to the Profile as follows:
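For example, a minimal sketch of this Param as it might appear in the cluster Profile (the key names and values below are illustrative; check the krib/cluster-masters Param definition in your installed content for the exact schema):

```yaml
Params:
  krib/cluster-masters:
    - Name: "machine-01.example.com"
      Uuid: "<UUID>"
      Address: "192.168.124.21"
```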
The Kubernetes Master will be built on this Machine specified by the <UUID> value.
- !!! note
This MUST be in the cluster profile because all machines in the cluster must be able to see this parameter.
## Install KRIB
KRIB is a Content Pack and is installed in the standard method as any other Contents. We need the krib.json content pack to fully support KRIB and install the helper utility contents for stage changes.
Please review [Installing Krib] for step by step instructions.
[Installing Krib]: ../../../operators/integrations/krib
### CLI Install
KRIB uses the Certs plugin to build TLS; you can install that plugin from the RackN library.
Using the Command Line (drpcli) utility configured to your endpoint, use this process:
```sh
# Get code
drpcli contents upload catalog:krib-tip
```
### UX Install
In the UX, follow this process:
1. Open your DRP Endpoint: (eg. <https://127.0.0.1:8092/>)
2. Authenticate to your Endpoint
3. Login with your RackN Portal Login account (upper right)
4. Go to the left panel “Content Packages” menu
5. Select Kubernetes (KRIB: Kubernetes Rebar Immutable Bootstrapping) from the right side panel (you may need to select Browser for more Content or use the Catalog button)
6. Select the Transfer button for both content packs to add the content to your local Digital Rebar endpoint
## Configuring KRIB
The basic outline for configuring KRIB follows the below steps:
1. Create a Profile to hold the Params for the KRIB configuration (you can also clone the krib-example profile)
2. Add a Param of name krib/cluster-profile to the Profile you created
3. Add a Param of name etcd/cluster-profile to the Profile you created
4. Apply the Profile to the Machines you are going to add to the KRIB cluster
5. Change the Workflow on the Machines to krib-live-cluster for memory booting or krib-install-cluster to install to CentOS. You may clone these reference workflows to build custom actions.
6. Installation will start as soon as the Workflow has been set.
There are many configuration options available, review the krib/* and etcd/* parameters to learn more.
### Configure with the CLI
The configuration of the Cluster includes several reference Workflows that can be used for installation. The Workflow you choose determines whether the cluster is built via install-to-local-disk or via an immutable pattern (live boot, in-memory boot process). Outside of the Workflow differences, all remaining configuration elements are the same.
You must create a Profile from YAML (or JSON if you prefer) with the required Params and Meta information. Modify the Name or other fields as appropriate - be sure you rename all subsequent fields appropriately.
```sh
echo '
---
Name: "my-k8s-cluster"
Description: "Kubernetes install-to-local-disk"
Params:
  krib/cluster-profile: "my-k8s-cluster"
  etcd/cluster-profile: "my-k8s-cluster"
Meta:
  color: "purple"
  icon: "ship"
  title: "My Installed Kubernetes Cluster"
  render: "krib"
  reset-keeps: "krib/cluster-profile,etcd/cluster-profile"
' > /tmp/krib-config.yaml
```
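Then create the Profile on your DRP endpoint from that file. A minimal sketch (assuming drpcli is configured against your endpoint; drpcli create commands accept `-` to read the object from stdin):

```sh
drpcli profiles create - < /tmp/krib-config.yaml
```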
- !!! note
The following commands should be applied to all of the Machines you wish to enroll in your KRIB cluster. Each Machine needs to be referenced by the Digital Rebar Machine UUID. This example shows how to collect the UUIDs, then you will need to assign them to the UUIDS variable. We re-use this variable throughout the below documentation within the shell function named my_machines. We also show the correct drpcli command that should be run for you by the helper function, for your reference.
Create our helper shell function my_machines
```sh
function my_machines() { for U in $UUIDS; do set -x; drpcli machines $1 $U $2; set +x; done; }
```
List your Machines to determine which to apply the Profile to
```sh
drpcli machines list | jq -r '.[] | "\(.Name) : \(.Uuid)"'
```
IF YOU WANT to make ALL Machines in your endpoint use KRIB, do:
```sh
export UUIDS=`drpcli machines list | jq -r '.[].Uuid'`
```
Otherwise - individually add them to the UUIDS variable, like:
`sh export UUIDS="UUID_1 UUID_2 ... UUID_n" `
Add the Profile to your machines that will be enrolled in the cluster
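For example, a sketch using the my_machines helper and the Profile name from the example above (substitute your own Profile name):

```sh
my_machines addprofile my-k8s-cluster

# runs example command:
# drpcli machines addprofile <UUID> my-k8s-cluster
```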
Change the Workflow on the Machines to initiate installation. YOU MUST select the correct Workflow for your install type (Immutable/Live Boot mode or install-to-local-disk mode): for Live Boot mode use the krib-live-cluster Workflow, and for install-to-local-disk mode use the krib-install-cluster Workflow.
```sh
# for Live Boot/Immutable Kubernetes mode
my_machines workflow krib-live-cluster

# for install-to-local-disk mode:
my_machines workflow krib-install-cluster

# runs example command:
# drpcli machines workflow <UUID> krib-live-cluster
# or
# drpcli machines workflow <UUID> krib-install-cluster
```
### Configure with the UX
The below example outlines the process for the UX.
RackN assumes the use of CentOS 7 BootEnv during this process. However, it should theoretically work on most of the BootEnvs. We have not tested it, and your mileage will absolutely vary…
1. Create a Profile for the Kubernetes Cluster (e.g. my-k8s-cluster) or clone the krib-example profile.
2. Add a Param to that Profile: krib/cluster-profile = my-k8s-cluster
3. Add a Param to that Profile: etcd/cluster-profile = my-k8s-cluster
4. Add the Profile (e.g. my-k8s-cluster) to all the machines you want in the cluster.
5. Change the Workflow on all the machines to krib-install-cluster for install-to-local-disk, or to krib-live-cluster for the Live Boot/Immutable Kubernetes mode.
Then wait for them to complete. You can watch the Stage transitions via the Bulk Actions panel (which requires RackN Portal authentication to view).
- !!! note
The reason the Immutable Kubernetes/Live Boot mode does not need a reboot is because they are already running Sledgehammer and will start installing upon the stage change.
## Operating KRIB
### Who is my Master?
If you have not specified who the Kubernetes Master should be, and the master was chosen by election, you will need to determine which Machine is the cluster Master.
```sh
# returns the Kubernetes cluster Machine UUID
drpcli profiles show my-k8s-cluster | jq -r '.Params."krib/cluster-masters"'
```
### Use kubectl - on Master
You can log in to the Master node as identified above, and execute kubectl commands as follows:
```sh
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```
### Use kubectl - from anywhere
Once the Kubernetes cluster build has been completed, you may use the kubectl command to both verify and manage the cluster. You will need to download the conf file with the appropriate tokens and information to connect to and authenticate your kubectl connections. Below is an example of doing this:
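A minimal sketch, assuming the admin.conf contents are stored in the cluster Profile's krib/cluster-admin-conf Param (see the params documentation below) and that your cluster Profile is named my-k8s-cluster:

```sh
# pull the admin.conf contents out of the cluster Profile
drpcli profiles get my-k8s-cluster param krib/cluster-admin-conf | jq -r . > admin.conf

# point kubectl at the downloaded config and verify the cluster
export KUBECONFIG=$(pwd)/admin.conf
kubectl get nodes
```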
### Advanced Stages - Helm and Sonobuoy
KRIB includes stages for advanced Kubernetes operating support.
The reference workflows already install Helm using the krib-helm stage. To leverage this utility simply define the required JSON syntax for your charts as shown in the krib-helm stage documentation.
Sonobuoy can be used to validate that the cluster conforms to community specification. Adding the krib-sonobuoy stage will start a test run. It can be rerun to collect the results or configured to wait for them. Storing test results in the files path requires setting the unsafe/password parameter and is undesirable for production clusters.
### Ingress/Egress Traffic, Dashboard Access, Istio
The Kubernetes dashboard is enabled within a default KRIB built cluster. However no Ingress traffic rules are set up. As such, you must access services from external connections by making changes to Kubernetes, or via the [Kubernetes Dashboard via Proxy](#kubernetes-dashboard-via-proxy).
These are all issues relating to managing, operating, and running a Kubernetes cluster, and not restrictions that are imposed by Digital Rebar Provision. Please see the appropriate Kubernetes documentation on questions regarding operating, running, and administering Kubernetes (<https://kubernetes.io/docs/home/>).
For Istio via Helm, please consult the krib-helm stage documentation for a reference install.
### Kubernetes Dashboard via Proxy
You can get the admin-user security token with the following command:
```sh
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
Now copy the token from the token part printed on-screen so you can paste it into the Enter token field of the dashboard log in screen.
Once you have obtained the admin.conf configuration file and security tokens, you may use kubectl in Proxy mode to the Master. Simply open a separate terminal/console session to dedicate to the Proxy connection, and do:
```sh
kubectl proxy
```
Now, in a local web browser (on the same machine you executed the Proxy command) open the following URL:
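The standard Kubernetes Dashboard proxy path is typically <http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/> (adjust the namespace or service name if your dashboard deployment differs).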
### MetalLB Load Balancer
If your cluster is running on bare metal you will most likely need a LoadBalancer provider. You can easily add this to your cluster by adding the krib-metallb stage after the krib-config stage in your workflow. Currently only L2 mode is supported. You will need to set the metallb/l2-ip-range param in your profile with the range of IPs you wish to use. This IP range must not be within the configured DHCP scope. See the MetalLB docs for more information (<https://metallb.universe.tf/tutorial/layer2/>).
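For example, a sketch of the Param in your cluster Profile, reusing the illustrative range from the metallb/l2-ip-range documentation below (pick addresses outside your DHCP scope):

```yaml
metallb/l2-ip-range: "192.168.1.240-192.168.1.250"
```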
### NGINX Ingress
You can add nginx-ingress to your cluster by adding the krib-ingress-nginx stage to your workflow. This stage requires helm and tiller to be installed, so it should come after the krib-helm stage in your workflow.
This stage also requires a cloud provider LoadBalancer service; on bare metal, you can add the krib-metallb stage before this stage in your workflow.
This stage includes support for cert-manager if your profile is properly configured. See example-cert-manager profile.
### Kubernetes Dashboard via NGINX Ingress
If your workflow includes the [NGINX Ingress](#nginx-ingress) stage the kubernetes dashboard will be accessible via https://k8s-db.LOADBALANCER_IP.xip.io. The access url and cert-manager tls can also be configured by setting the appropriate params in your profile. See example-k8s-db-ingress profile.
Please consult [Kubernetes Dashboard via Proxy](#kubernetes-dashboard-via-proxy) for information on getting the login token.
### Rook Ceph Manager Dashboard
If you install rook via the krib-helm chart template and have the krib-ingress-nginx stage in your workflow, an ingress will be created so you can access the Ceph Manager Dashboard at https://rook-db.LOADBALANCER_IP.xip.io. The access url and cert-manager tls can also be configured by setting the appropriate params in your profile. See example-rook-db-ingress profile.
The default username is admin and you can get the generated password with the following command:
```sh
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode
```
## Multiple Clusters
It is absolutely possible to build multiple Kubernetes KRIB clusters with this process. The only difference is each cluster should have a unique name and profile assigned to it. A given Machine may only participate in a single Kubernetes cluster type at any one time. You can install and operate both Live Boot/Immutable and install-to-disk cluster types in the same DRP Endpoint.
22.31.1. Object Specific Documentation¶
22.31.1.1. params¶
The content package provides the following params.
22.31.1.1.1. certmanager/acme-challenge-dns01-provider¶
cert-manager DNS01 Challenge Provider Name. Only route53, cloudflare, akamai, and rfc2136 are currently supported. See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#supported-dns01-providers>
22.31.1.1.2. certmanager/cloudflare-api-key¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#cloudflare>
22.31.1.1.3. certmanager/cloudflare-email¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#cloudflare>
22.31.1.1.4. certmanager/crds¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the cert-manager CRDs.
22.31.1.1.5. certmanager/default-issuer-name¶
The default issuer to use when creating ingresses
22.31.1.1.6. certmanager/dns-domain¶
cert-manager DNS domain - the suffix appended to hostnames on certificates signed with cert-manager. Used to auto-generate the ingress for rook ceph, for example
22.31.1.1.7. certmanager/email¶
cert-manager ClusterIssuer configuration See <https://cert-manager.readthedocs.io/en/latest/reference/issuers.html#issuers>
22.31.1.1.8. certmanager/fastdns-access-token¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns>
22.31.1.1.9. certmanager/fastdns-client-secret¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns>
22.31.1.1.10. certmanager/fastdns-client-token¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns>
22.31.1.1.11. certmanager/fastdns-service-consumer-domain¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns>
22.31.1.1.12. certmanager/manifests¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the cert-manager deployment. <https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager-no-webhook.yaml> is the non-validating one; <https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml> is the validating one.
22.31.1.1.13. certmanager/rfc2136-nameserver¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136>
22.31.1.1.14. certmanager/rfc2136-tsig-alg¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136>
22.31.1.1.15. certmanager/rfc2136-tsig-key¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136>
22.31.1.1.16. certmanager/rfc2136-tsig-key-name¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136>
22.31.1.1.17. certmanager/route53-access-key¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53>
22.31.1.1.18. certmanager/route53-access-key-id¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53>
22.31.1.1.19. certmanager/route53-hosted-zone-id¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53>
22.31.1.1.20. certmanager/route53-region¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53>
22.31.1.1.21. certmanager/route53-secret-access-key¶
DNS01 Challenge Provider Configuration data See <https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53>
22.31.1.1.22. consul/agent-count¶
Allows operators to set the number of machines required for the consul agents cluster. Machines will be automatically added until the number is met.
- !!! note
These machines will also be the vault members
22.31.1.1.23. consul/agents¶
Param is set (output) by the consul cluster building process
22.31.1.1.24. consul/agents-done¶
Param is set (output) by the consul cluster building process
22.31.1.1.25. consul/cluster-profile¶
Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the consul cluster. This parameter is REQUIRED for KRIB and consul cluster construction.
22.31.1.1.26. consul/controller-client-cert¶
For use in configurations where the consul cluster is backed up from a machine external to the KRIB cluster. Will be generated for the value in consul/controller-ip.
22.31.1.1.27. consul/controller-client-key¶
For use in configurations where the consul cluster is backed up from a machine external to the KRIB cluster. Will be generated for the value in consul/controller-client-cert.
22.31.1.1.28. consul/controller-ip¶
An optional IP outside the cluster designated for a “controller” (can be the DRP host). Can be used in combination with consul/controller-client-cert and consul/controller-client-key to remotely back up a consul cluster.
If unset, will default to the DRP ProvisionerAddress.
22.31.1.1.29. consul/encryption-key¶
Enables gossip encryption between Consul nodes
22.31.1.1.30. consul/name¶
Allows operators to set a name for the consul cluster
22.31.1.1.31. consul/server-ca-cert¶
Stores consul CA cert for use in non-DRP managed hosts (like a backup host) Requires Cert Plugin
22.31.1.1.32. consul/server-ca-name¶
Allows operators to set the CA name for the server certificate Requires Cert Plugin
22.31.1.1.33. consul/server-ca-pw¶
Allows operators to set the CA password for the consul server certificate Requires Cert Plugin
22.31.1.1.34. consul/server-count¶
Allows operators to set the number of machines required for the consul cluster. Machines will be automatically added until the number is met.
- !!! note
should be an odd number
22.31.1.1.35. consul/servers¶
Param is set (output) by the consul cluster building process
22.31.1.1.36. consul/servers-done¶
Param is set (output) by the consul cluster building process
22.31.1.1.37. consul/version¶
Allows operators to determine the version of consul to install
22.31.1.1.38. containerd/loglevel¶
Allows operators to determine the log level used for the containerd runtime (defaults to “info”)
22.31.1.1.39. containerd/version¶
Allows operators to determine the version of containerd to install
String should NOT include v as a prefix. Used to download from the <https://storage.googleapis.com/cri-containerd-release/cri-containerd-${VERSION}.linux-amd64.tar.gz> path
22.31.1.1.40. docker/apply-http-proxy¶
Apply HTTP/HTTPS proxy settings to the Docker daemon
22.31.1.1.41. docker/daemon¶
Provide a custom /etc/docker/daemon.json See <https://docs.docker.com/engine/reference/commandline/dockerd/> For example:
`json {"insecure-registries":["ci-repo.englab.juniper.net:5010"]} `
22.31.1.1.42. docker/version¶
Docker Version to use for Kubernetes
22.31.1.1.43. docker/working-dir¶
Allows operators to change the Docker working directory
22.31.1.1.44. etcd/client-port¶
Allows operators to set the port used by etcd clients
22.31.1.1.45. etcd/cluster-client-vip-port¶
The VIP client port to use for multi-master etcd clusters. Each etcd instance will bind to etcd/client-port (2379 by default), but HA services will be serviced by this port number.
Defaults to 8379.
22.31.1.1.46. etcd/cluster-profile¶
Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the etcd cluster. This parameter is REQUIRED for KRIB and etcd cluster construction.
22.31.1.1.47. etcd/controller-client-cert¶
For configurations where the etcd cluster should communicate over a network other than the one that the machine was booted from. Will be generated for the value in etcd/controller-ip.
22.31.1.1.48. etcd/controller-client-key¶
For configurations where the etcd cluster should communicate over a network other than the one that the machine was booted from. Will be generated for the value in etcd/controller-client-cert.
22.31.1.1.49. etcd/controller-ip¶
An optional IP outside the cluster designated for a “controller” (can be the DRP host). Can be used in combination with etcd/controller-client-cert and etcd/controller-client-key to remotely back up an etcd cluster.
If unset, will default to the DRP ProvisionerAddress.
22.31.1.1.50. etcd/ip¶
For configurations where the etcd cluster should communicate over a network other than the one that the machine was booted from.
If unset, will default to the Machine Address.
22.31.1.1.51. etcd/name¶
Allows operators to set a name for the etcd cluster
22.31.1.1.52. etcd/peer-ca-name¶
Allows operators to set the CA name for the peer certificate Requires Cert Plugin
22.31.1.1.53. etcd/peer-ca-pw¶
Allows operators to set the CA password for the peer certificate If missing, will be generated Requires Cert Plugin
22.31.1.1.54. etcd/peer-port¶
Allows operators to set the port for the cluster peers
22.31.1.1.55. etcd/server-ca-cert¶
Stores etcd CA cert for use in non-DRP managed hosts (like a backup host) Requires Cert Plugin
22.31.1.1.56. etcd/server-ca-name¶
Allows operators to set the CA name for the server certificate Requires Cert Plugin
22.31.1.1.57. etcd/server-ca-pw¶
Allows operators to set the CA password for the server certificate Requires Cert Plugin
22.31.1.1.58. etcd/server-count¶
Allows operators to set the number of machines required for the etcd cluster.
Machines will be automatically added until the number is met.
- !!! note
should be an odd number
22.31.1.1.59. etcd/servers¶
Param is set (output) by the etcd cluster building process
22.31.1.1.60. etcd/servers-done¶
Param is set (output) by the etcd cluster building process
22.31.1.1.61. etcd/version¶
Allows operators to determine the version of etcd to install. Note: changes should be coordinated with the KRIB Kubernetes version.
22.31.1.1.62. ingress/ip-address¶
IP Address assigned to ingress service via LoadBalancer
22.31.1.1.63. ingress/k8s-dashboard-hostname¶
Hostname to use for the kubernetes dashboard. You will need to manually configure your DNS to point to the ingress ingress/ip-address.
If no hostname is provided a k8s-db.$INGRESSIP.xip.io hostname will be assigned
22.31.1.1.64. ingress/longhorn-dashboard-hostname¶
Hostname to use for the Rancher Longhorn Dashboard. You will need to manually configure your DNS to point to the ingress ingress/ip-address.
If no hostname is provided a longhorn-db.$INGRESSIP.xip.io hostname will be assigned
22.31.1.1.65. ingress/rook-dashboard-hostname¶
Hostname to use for the Rook Ceph Manager Dashboard. You will need to manually configure your DNS to point to the ingress ingress/ip-address.
If no hostname is provided a rook-db.$INGRESSIP.xip.io hostname will be assigned
22.31.1.1.66. krib/apiserver-extra-SANs¶
List of additional SANs (IP addresses or FQDNs) used in the certificate used for the API Server
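For example, a sketch adding an FQDN and an IP to the API server certificate (values are illustrative):

```yaml
krib/apiserver-extra-SANs:
  - "k8s-api.example.com"
  - "192.168.124.100"
```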
22.31.1.1.67. krib/apiserver-extra-args¶
Array of apiServerExtraArgs that you want added to the kubeadm configuration.
22.31.1.1.68. krib/calico-container-image-cni¶
Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.69. krib/calico-container-image-kube-controllers¶
Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.70. krib/calico-container-image-node¶
Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.71. krib/calico-container-image-pod2daemon-flexvol¶
Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.72. krib/cert-manager-container-image-cainjector¶
Allows operators to optionally override the container image used in the cert-manager deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.73. krib/cert-manager-container-image-controller¶
Allows operators to optionally override the container image used in the cert-manager deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.74. krib/cert-manager-container-image-webhook¶
Allows operators to optionally override the container image used in the cert-manager deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.75. krib/cluster-admin-conf¶
Param is set (output) by the cluster building process
22.31.1.1.76. krib/cluster-api-port¶
The API bindPort number for the cluster masters. Defaults to 6443.
22.31.1.1.77. krib/cluster-api-vip-port¶
The VIP API port to use for multi-master clusters. Each master will bind to krib/cluster-api-port (6443 by default), but HA services for the API will be serviced by this port number.
Defaults to 8443.
22.31.1.1.78. krib/cluster-bootstrap-token¶
Defines the bootstrap token to use. Default is fedcba.fedcba9876543210.
22.31.1.1.79. krib/cluster-bootstrap-ttl¶
How long BootStrap tokens for the cluster should live. Default is 24h0m0s.
Must use a format similar to the default.
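For example, a sketch that doubles the default lifetime (illustrative value, same duration format as the default):

```yaml
krib/cluster-bootstrap-ttl: "48h0m0s"
```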
22.31.1.1.80. krib/cluster-cni-version¶
Allows operators to specify the version of the Kubernetes CNI utilities to install.
22.31.1.1.81. krib/cluster-cri-socket¶
This Param defines which Socket to use for the Container Runtime Interface. By default KRIB content uses Docker as the CRI, however our goal is to support multiple container CRI formats. A viable alternative is /run/containerd/containerd.sock, assuming krib/container-runtime is set to “containerd”
22.31.1.1.82. krib/cluster-crictl-version¶
Allows operators to specify the version of the Kubernetes CRICTL utility to install.
22.31.1.1.83. krib/cluster-dns¶
Allows operators to specify the DNS address for the cluster name resolution services.
Set by default to 10.96.0.10.
- !!! warning
This IP Address must be in the same range as the krib/cluster-service-cidr specified addresses.
22.31.1.1.84. krib/cluster-domain¶
Defines the cluster domain for kubelets to operate in by default. Default is cluster.local.
22.31.1.1.85. krib/cluster-image-repository¶
Allows operators to specify the location to pull images from for Kubernetes. Defaults to k8s.gcr.io.
22.31.1.1.86. krib/cluster-is-production¶
By default the KRIB cluster mode will be set to dev/test/lab (whatever you wanna call it). If you set this Param to true then the cluster will be tagged as in Production use.
If the cluster is in Production mode, then the state of the various Params for new clusters will be preserved, preventing the cluster from being overwritten.
If NOT in Production mode, the following Params will be wiped clean before building the cluster. This is essentially a destructive pattern.
krib/cluster-admin-conf - the admin.conf file Param will be wiped
krib/cluster-join - the Join token will be destroyed
This allows for “fast reuse” patterns with building KRIB clusters, while also allowing a cluster to be marked Production and require manual intervention to wipe the Params to rebuild the cluster.
22.31.1.1.87. krib/cluster-join-command¶
Param is set (output) by the cluster building process
22.31.1.1.88. krib/cluster-kubeadm-cfg¶
Once the cluster initial master has completed startup, then the KRIB config task will record the bootstrap configuration used by kubeadm init. This provides a reference going forward on the cluster configurations when it was created.
22.31.1.1.89. krib/cluster-kubernetes-version¶
Allows operators to specify the version of Kubernetes containers to pull from the krib/cluster-image-repository.
22.31.1.1.90. krib/cluster-master-certs¶
Requires Cert Plugin
22.31.1.1.91. krib/cluster-master-count¶
Allows operators to set the number of machines required for the Kubernetes cluster. Machines will be automatically added until the number is met.
- !!! note
should be an odd number
22.31.1.1.92. krib/cluster-master-vip¶
For High Availability (HA) configurations, a floating IP is required by the load balancer. This should be an available IP in the same subnet as the master nodes and not in the dhcp range. If using MetalLB the ip should not be in the configured metallb/l2-ip-range.
22.31.1.1.93. krib/cluster-masters¶
List of the machine(s) assigned as cluster master(s). If not set, the automation will elect leaders and populate the list automatically.
22.31.1.1.94. krib/cluster-masters-on-etcds¶
For development clusters, allows running etcd on the same machines as the Kubernetes masters RECOMMENDED: set to false for production clusters
22.31.1.1.95. krib/cluster-masters-untainted¶
For development clusters, allows nodes to run on the same machines as the Kubernetes masters.
- !!! note
If you have only master nodes, the helm/tiller install will fail if this is set to false.
RECOMMENDED: set to false for production clusters and have non-master nodes in the cluster
22.31.1.1.96. krib/cluster-name¶
Allows operators to set the Kubernetes cluster name
22.31.1.1.97. krib/cluster-pod-subnet¶
Allows operators to specify the podSubnet that will be used by CoreDNS during the ‘kubeadm init’ process of the cluster creation.
22.31.1.1.98. krib/cluster-profile¶
Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the Kubernetes cluster
This parameter is REQUIRED for KRIB and etcd cluster construction
22.31.1.1.99. krib/cluster-service-dns-domain¶
Allows operators to specify the Service DNS Domain that will be used by CoreDNS during the ‘kubeadm init’ process of the cluster creation.
By default we do not override the setting from kubeadm default behavior.
22.31.1.1.100. krib/cluster-service-subnet¶
Allows operators to specify the service subnet CIDR that will be used during the kubeadm init process of the cluster creation.
Defaults to 10.96.0.0/12.
22.31.1.1.101. krib/container-runtime¶
The container runtime to be used for the KRIB cluster. This can be either docker (the default) or containerd.
22.31.1.1.102. krib/dashboard-config¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Kubernetes Dashboard.
22.31.1.1.103. krib/dashboard-enabled¶
Boolean value that enables Kubernetes dashboard install
22.31.1.1.104. krib/externaldns-container-image¶
Allows operators to optionally override the container image used in the ExternalDNS deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.105. krib/fluent-bit-container-image¶
Allows operators to optionally override the container image used in the fluent-bit / logging daemonset. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.106. krib/helm-charts¶
Array of charts to install via Helm. The list will be followed in order. Work is idempotent: No action is taken if charts are already installed.
Fields: chart and name are required.
Options exist to inject additional control flags into helm install instructions:
- name - name of the chart (required)
- chart - reference of the chart (required) - may rely on repo, path or other helm install [chart] standard
- namespace - kubernetes namespace to use for chart (defaults to none)
- params - map of parameters to include in the helm install (optional). Keys and values are converted to --[key] [value] in the install instruction.
- sleep - time to wait after install (defaults to 10)
- wait - wait for name (and namespace if provided) to be running before next action
- prekubectl - (optional) array of kubectl [request] commands to run before the helm install
- postkubectl - (optional) array of kubectl [request] commands to run after the helm install
- targz - (optional) provides a location for a tar.gz file containing charts to install. Path is relative.
- templates - (optional) map of DRP templates keyed to the desired names (must be uploaded!) to render before doing other work.
- repos - (optional) adds the requested repos to helm using helm repo add before installing helm. syntax is [repo name]: [repo path].
- templatesbefore - (optional) expands the provided template files inline before the helm install happens.
- templatesafter - (optional) expands the provided template files inline after the helm install happens.
example:

```json
[
  {
    "chart": "stable/mysql",
    "name": "mysql"
  },
  {
    "chart": "istio-1.0.1/install/kubernetes/helm/istio",
    "name": "istio",
    "targz": "https://github.com/istio/istio/releases/download/1.0.1/istio-1.0.1-linux.tar.gz",
    "namespace": "istio-system",
    "params": {
      "set": "sidecarInjectorWebhook.enabled=true"
    },
    "sleep": 10,
    "wait": true,
    "kubectlbefore": ["get nodes"],
    "kubectlafter": ["get nodes"]
  },
  {
    "chart": "rook-stable/rook-ceph",
    "name": "rook-ceph",
    "namespace": "rook-ceph-system",
    "kubectlafter": [
      "apply -f cluster.yaml"
    ],
    "repos": {
      "rook-stable": "https://charts.rook.io/stable"
    },
    "templatesbefore": [{
      "name": "helm-rook.before.sh.tmpl",
      "nodes": "all",
      "runIfInstalled": true
    }],
    "templatesafter": [{
      "name": "helm-rook.after.sh.tmpl",
      "nodes": "leader"
    }],
    "templates": {
      "cluster": "helm-rook.cfg.tmpl"
    },
    "wait": true
  }
]
```
22.31.1.1.107. krib/helm-version¶
Allows operators to determine the version of Helm to install
- !!! note
Changes should be coordinated with the KRIB Kubernetes version
22.31.1.1.108. krib/i-am-master¶
When this param is set to true AND the krib/selective-mastership param is set to true, then this node will participate in mastership election. On the other hand, if this param is set to false (the default) but krib/selective-mastership is set to true, then this node will never become a master. This option is useful to prevent dedicated workers from assuming mastership.
Defaults to false.
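For example, a sketch of restricting mastership to designated machines (set krib/selective-mastership in the cluster Profile, and krib/i-am-master only on the machines allowed to become masters):

```yaml
# in the cluster Profile
krib/selective-mastership: true

# on eligible master machines only
krib/i-am-master: true
```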
22.31.1.1.109. krib/ignore-preflight-errors¶
Helpful for debug and test clusters. This flag allows operators to select none, all or some preflight error checks to ignore during the kubeadm init.
Use all to ignore all errors. Use none to include all errors [default].
More Info, see <https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/>
22.31.1.1.110. krib/ingress-external-enabled¶
When enabled, deploys a second ingress controller (the first is default). Services are exposed to the second ingress using the class “nginx-external” instead of the “ingress” class. This can be helpful in environments where you want to expose some services only to the cluster, as an ingress, as opposed to the world.
22.31.1.1.111. krib/ingress-nginx-config¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for nginx service install.
22.31.1.1.112. krib/ingress-nginx-external-loadbalancer-ip¶
If set, this param will set the LoadBalancer IP for the nginx external ingress service. Used in situations where you want to specifically choose the IP assigned to the ingress, rather than having it be applied by the cloud provider (or metallb, in the bare-metal case)
22.31.1.1.113. krib/ingress-nginx-loadbalancer-ip¶
If set, this param will set the LoadBalancer IP for the nginx ingress service. Used in situations where you want to specifically choose the IP assigned to the ingress, rather than having it be applied by the cloud provider (or metallb, in the bare-metal case)
22.31.1.1.114. krib/ingress-nginx-mandatory¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for nginx install.
22.31.1.1.115. krib/ingress-nginx-publish-ip¶
If running an nginx ingress behind a NATing firewall, it may be required to explicitly specify the public IP assigned to ingresses, for example to make something like external-dns work. If this value is set, then either the nginx ingress, or (if enabled) the external nginx ingress, will have the --publish-status-address argument set to this value.
22.31.1.1.116. krib/ip¶
For configurations where kubelet does not correctly detect the IP over which nodes should communicate.
If unset, will default to the Machine Address.
22.31.1.1.117. krib/k3s¶
Informs tasks to use k3s instead of k8s. No need to include etcd stages when k3s is true.
22.31.1.1.118. krib/kubeadm-cfg¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the kubeadm.cfg used during the kubeadm init process.
The default behavior is to use the Parameterized kubeadm.cfg from the template named krib-kubeadm.cfg.tmpl. This config file is used in the krib-config.sh.tmpl which is the main template script that drives the kubeadm cluster init and configuration.
22.31.1.1.119. krib/kubelet-rubber-stamp-container-image¶
Allows operators to optionally override the container image used in the kubelet rubber stamp deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.120. krib/label-env¶
Used for node specification, labels should be set.
22.31.1.1.121. krib/labels¶
Used for ad hoc node specification, labels should be set.
- !!! note
Use krib/label-env to set the env label
Use inventory/data to set physical characteristics
22.31.1.1.122. krib/log-target-gelf¶
An IP outside the cluster configured to receive GELF (Graylog) logs on UDP 2201
22.31.1.1.123. krib/log-target-syslog¶
An IP outside the cluster configured to receive remote syslog logs on UDP 514
22.31.1.1.124. krib/longhorn-config¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Rancher Longhorn install.
22.31.1.1.125. krib/metallb-container-image-controller¶
Allows operators to optionally override the container image used in the MetalLB controller deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.126. krib/metallb-container-image-speaker¶
Allows operators to optionally override the container image used in the MetalLB speaker daemonset. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.127. krib/metallb-version¶
Set this string to the version of MetalLB to install. Defaults to “master”, but cautious users may want to set this to an established MetalLB release version
22.31.1.1.128. krib/networking-provider¶
This Param can be used to specify either flannel, calico, or weave network providers for the Kubernetes cluster. This is completed using the provider specific YAML definition file.
The only supported providers are:
flannel (default)
calico
weave
22.31.1.1.129. krib/nginx-external-udp-services¶
Array of optional UDP services you want to expose using Nginx Ingress Controller Example might be:
```yaml
9000: "default/example-go:8080"
```
The services defined here will be inserted in a configmap named “udp-services” in the “ingress-nginx-external” namespace. The ConfigMap can be updated later if you want to change/update services
See <https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/> for details
22.31.1.1.130. krib/nginx-ingress-controller-container-image¶
Allows operators to optionally override the container image used in the nginx-ingress-controller deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.131. krib/nginx-ingress-version¶
Set this string to the version of the NGINX Ingress Controller to install. Defaults to “0.25.1”. This should align with the param krib/ingress-nginx-mandatory, and is used to ensure that the container images used for the nginx deployment match the desired version.
22.31.1.1.132. krib/nginx-tcp-services¶
Array of optional TCP services you want to expose using Nginx Ingress Controller Example might be:
9000: "default/example-go:8080"
The services defined here will be inserted in a configmap named “tcp-services” in the “ingress-nginx” namespace. The ConfigMap can be updated later if you want to change/update services
See https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ for details
22.31.1.1.133. krib/nginx-udp-services¶
Array of optional UDP services you want to expose using Nginx Ingress Controller Example might be:
```yaml
9000: "default/example-go:8080"
```
The services defined here will be inserted in a configmap named “udp-services” in the “ingress-nginx” namespace. The ConfigMap can be updated later if you want to change/update services
See <https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/> for details
22.31.1.1.134. krib/operate-action¶
This Param can be used to perform one of the following actions on a node in a KRIB built Kubernetes cluster:
- drain (default)
- delete
- cordon
- uncordon
If this parameter is not defined on the Machine, the default action will be to drain the node.
Each action can be passed custom arguments via use of the krib/operate-options Param.
22.31.1.1.135. krib/operate-on-node¶
This Param specifies a Node in a Kubernetes cluster that should be operated on. Currently supported operations are drain and uncordon.
The drain operation will by default maintain the contracts specified by PodDisruptionBudgets.
Options can be specified to override the default actions by use of the krib/operate-options Param. This Param will be passed directly to the kubectl command that has been specified by the krib/operate-action Param setting (defaults to drain operation if nothing specified).
The Node name must be a valid cluster member name, which by default in a KRIB built cluster is the fully qualified value of the Machine object's Name.
22.31.1.1.136. krib/operate-options¶
This Param can be used to pass additional flag options to the kubectl operation that is specified by the krib/operate-action Param. By default, the drain operation will be called if no action is defined on the Machine.
This Param provides some customization to how the operate operation functions.
For kubectl drain documentation, see the following URL: <https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain>
For kubectl uncordon documentation, see the URL: <https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#uncordon>
- !!! note
the following flags are set as default options in the Template for drain operations:
--ignore-daemonsets --delete-local-data
For drain operations, if you override these defaults, you MOST LIKELY need to specify them for the drain operation to be successful. You have been warned.
No defaults provided for uncordon operations (you shouldn’t need any).
22.31.1.1.137. krib/packages-to-prep¶
List of packages to install for preparation of KRIB install process. Designed to be used in some processes where pre-prep of the packages will accelerate the KRIB install process. More specifically, in Sledgehammer Discover for Live Boot cluster install, the ‘docker’ install requires several minutes to run through selinux context changes.
Simple space separated String list.
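For example, a sketch that pre-preps the docker package called out above (the value is a single space-separated string):

```yaml
krib/packages-to-prep: "docker"
```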
22.31.1.1.138. krib/repo¶
Allows operators to pre-prepare a URL (i.e., a local repository) of the installation packages necessary for KRIB. If this value is set, then tasks like containerd-install and etcd-config will source their installation files from this repository, rather than attempting to download them from the internet (which may take longer, given the number of machines to be installed plus the capacity of the internet service)
22.31.1.1.139. krib/rook-ceph-container-image¶
Allows operators to optionally override the container image used in the rook-ceph deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.140. krib/rook-ceph-container-image-ceph¶
Allows operators to optionally override the container image used in the rook-ceph deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.141. krib/rook-ceph-container-image-daemon-base¶
Allows operators to optionally override the container image used in the rook-ceph deployment. Possible use case would be pre-prepared images pushed to a local trusted registry
22.31.1.1.142. krib/selective-mastership¶
When this param is set to true, then in order for a machine to self-elect as a master, the krib/i-am-master param must be set for that machine. This option is useful to prevent dedicated workers from assuming mastership.
Defaults to false.
22.31.1.1.143. krib/sign-kubelet-server-certs¶
When this param is set to true, then the kubelets will be configured to request their certs from the cluster CA, using CSRs. The CSR approver won’t natively sign server certs, so a custom operator, <https://github.com/kontena/kubelet-rubber-stamp>, will be deployed to sign these.
Defaults to ‘false’.
22.31.1.1.144. kubectl/working-dir¶
Allows operators to change the kubectl working directory
22.31.1.1.145. metallb/l2-ip-range¶
This should be set to match the IP range you have allocated to L2 MetalLB
Example: 192.168.1.240-192.168.1.250
22.31.1.1.146. metallb/l3-ip-range¶
This should be set to match the CIDR route you have allocated to L3 MetalLB
Example: 192.168.1.0/24 (currently only a single route supported)
22.31.1.1.147. metallb/l3-peer-address¶
This should be set to match the IP of the BGP-enabled router you want MetalLB to peer with
Example: 192.168.1.1 (currently only a single peer supported)
22.31.1.1.148. metallb/limits-cpu¶
This should be set to match the cpu resource limits for MetalLB
Default: 100m
22.31.1.1.149. metallb/limits-memory¶
This should be set to match the memory resource limits for MetalLB
Default: 100Mi
22.31.1.1.150. metallb/monitoring-port¶
This should be set to match the port you want to use for MetalLB Prometheus monitoring
Default: 7472
22.31.1.1.151. provider/calico-config¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Calico network provider. If Calico is not installed, this Param will have no effect on the cluster.
22.31.1.1.152. provider/flannel-config¶
Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Flannel network provider. If Flannel is not installed, this Param will have no effect on the cluster.
22.31.1.1.153. rook/ceph-cluster-network¶
This should be set to match a physical network on rook nodes to be used exclusively for cluster traffic
22.31.1.1.154. rook/ceph-public-network¶
This should be set to match the kubernetes “control plane” network on the nodes
22.31.1.1.155. rook/ceph-target-disk¶
If using physical disks for Ceph OSDs, this value will be used as an explicit target for OSD installation. It will also be WIPED during dev-reset, so use with care. The value of the param should be only the block device; don’t prefix with /dev/. An example value might be “sda”, indicating that /dev/sda is to be used for rook ceph, and WIPED DURING RESET.
- !!! warning
IF YOU SET THIS VALUE, /dev/<this value> WILL BE WIPED DURING cluster dev-reset!
22.31.1.1.156. rook/ceph-version¶
The version of rook-ceph to deploy
22.31.1.1.157. rook/data-dir-host-path¶
This should be set to match the desired location for Ceph storage.
Default: /mnt/hdd/rook
In future versions, this should be calculated or inferred based on the system inventory
22.31.1.1.158. sonobuoy/binary¶
Downloads tgz with compiled sonobuoy executable. The full path is included so that operators can choose the correct version and architecture
22.31.1.1.159. sonobuoy/wait-mins¶
Default is -1 so that stages do not wait for completion.
Typical runs may take 60 minutes.
If < 0, the code does not wait and assumes you will run it again to retrieve the results. The task is idempotent so you can re-start a run after you have started to check on results.
22.31.1.1.160. vault/awskms-access-key¶
Allows operators to specify an AWS access key to be used in the Vault “awskms” seal
22.31.1.1.161. vault/awskms-kms-key-id¶
Allows operators to specify an AWS KMS key ID to be used in the Vault “awskms” seal
22.31.1.1.162. vault/awskms-region¶
Allows operators to specify an AWS region to be used in the Vault “awskms” seal
22.31.1.1.163. vault/awskms-secret-key¶
Allows operators to specify an AWS secret key to be used in the Vault “awskms” seal
22.31.1.1.164. vault/cluster-profile¶
Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the vault cluster
This parameter is REQUIRED for KRIB and vault cluster construction
22.31.1.1.165. vault/kms-plugin-token¶
Authorizes the vault-kms-plugin to communicate with vault on behalf of Kubernetes API
22.31.1.1.166. vault/name¶
Allows operators to set a name for the vault cluster
22.31.1.1.167. vault/root-token¶
The root token generated by initializing vault. Store this somewhere secure, and delete from DRP, for confidence
22.31.1.1.168. vault/seal¶
Vault can optionally be configured to automatically unseal, using a cloud-based KMS. Currently the only configured option is “awskms”, which necessitates you setting the following additional parameters
vault/awskms-region
vault/awskms-access-key
vault/awskms-secret-key
vault/awskms-kms-key-id
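For example, a sketch of a Profile fragment enabling the awskms seal (region and key values are placeholders):

```yaml
vault/seal: "awskms"
vault/awskms-region: "us-east-1"
vault/awskms-access-key: "<AWS_ACCESS_KEY_ID>"
vault/awskms-secret-key: "<AWS_SECRET_ACCESS_KEY>"
vault/awskms-kms-key-id: "<KMS_KEY_ID>"
```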
22.31.1.1.169. vault/server-count¶
Allows operators to set the number of machines required for the vault cluster. Machines will be automatically added until the number is met.
- !!! note
should be an odd number
22.31.1.1.170. vault/servers¶
Param is set (output) by the vault cluster building process
22.31.1.1.171. vault/servers-done¶
Param is set (output) by the vault cluster building process
22.31.1.1.172. vault/unseal-key¶
The key generated by initializing vault in KMS mode. Use this to unseal vault if KMS becomes unavailable. Store this somewhere secure, and delete from DRP, for confidence
22.31.1.1.173. vault/version¶
Allows operators to determine the version of vault to install
22.31.1.2. profiles¶
The content package provides the following profiles.
22.31.1.2.1. example-k8s-db-ingress¶
Example Profile for custom K8S ingress.
22.31.1.2.2. example-krib¶
Example of the absolute minimum required Params for a non-HA KRIB Kubernetes cluster.
22.31.1.2.3. example-krib-ha¶
Minimum required Params to set on a KRIB Kubernetes cluster to define a Highly Available setup.
Clone this profile as krib-ha and change the VIP to your needs.
22.31.1.2.4. example-rook-db-ingress¶
Example Profile for custom Rook Ceph Manager Dashboard ingress.
22.31.1.2.5. helm-reference¶
- !!! note
DO NOT USE THIS PROFILE!
Copy the contents of the helm/charts param into the Cluster!
22.31.1.2.6. krib-operate-cordon¶
This profile contains the default krib-operate task parameters to drive a cordon operation. This profile can be added to a node or stage to allow the krib-operate task to do a cordon operation.
This profile is used by the krib-cordon stage to allow a machine to be cordoned without changing the parameters on the machine.
22.31.1.2.7. krib-operate-delete¶
This profile contains the default krib-operate task parameters to drive a delete operation. This profile can be added to a node or stage to allow the krib-operate task to do a delete operation.
This profile is used by the krib-delete stage to allow a machine to be deleted without changing the parameters on the machine.
WARNING: This pattern destroys a kubernetes node.
22.31.1.2.8. krib-operate-drain¶
This profile contains the default krib-operate task parameters to drive a drain operation. This profile can be added to a node or stage to allow the krib-operate task to do a drain operation.
This profile is used by the krib-drain stage to allow a machine to be drained without altering the parameters on the machine.
22.31.1.2.9. krib-operate-uncordon¶
This profile contains the default krib-operate task parameters to drive an uncordon operation. This profile can be added to a node or stage to allow the krib-operate task to do an uncordon operation.
This profile is used by the krib-uncordon stage to allow a machine to be uncordoned without altering the parameters on the machine.
22.31.1.3. stages¶
The content package provides the following stages.
22.31.1.3.1. k3s-config¶
Designed to substitute K3s for Kubernetes. Installs k3s using the KRIB process and params, with the goal of being able to use the same downstream stages.
22.31.1.3.2. krib-contrail¶
Installs and runs Contrail kubectl install
- !!! note
CURRENTLY CENTOS ONLY
22.31.1.3.3. krib-external-dns¶
Installs and runs ExternalDNS
22.31.1.3.4. krib-helm¶
Installs and runs Helm Charts after a cluster has been constructed. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. The charts to run are determined by the helm/charts parameter.
Due to helm downloads, this stage requires internet access.
This stage also creates a tiller service account. For advanced security, this configuration may not be desirable.
22.31.1.3.5. krib-helm-charts¶
Installs and runs Helm Charts after a cluster has been constructed. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. The charts to run are determined by the helm/charts parameter.
Due to helm downloads, this stage requires internet access.
22.31.1.3.6. krib-helm-init¶
This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. Due to helm downloads, this stage requires internet access.
This stage also creates a tiller service account. For advanced security, this configuration may not be desirable.
22.31.1.3.7. krib-ingress-nginx¶
Installs and configures ingress-nginx and, optionally, cert-manager. Requires a cloud LoadBalancer or MetalLB to provide the Service ingress IP. Must run after the krib-helm stage.
22.31.1.3.8. krib-ingress-nginx-tillerless¶
Installs and configures ingress-nginx and, optionally, cert-manager. Requires a cloud LoadBalancer or MetalLB to provide the Service ingress IP.
22.31.1.3.9. krib-kubevirt¶
Installs KubeVirt.io using the chosen release from the cluster leader. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage.
Due to yaml and container downloads, this stage requires internet access.
22.31.1.3.10. krib-logging¶
Installs and runs fluent-bit to aggregate container logs to a graylog server via GELF UDP input
22.31.1.3.11. krib-longhorn¶
Installs and runs Rancher Longhorn kubectl install
22.31.1.3.12. krib-metallb¶
Installs and runs MetalLB kubectl install
22.31.1.3.13. krib-operate¶
This stage runs an Operation (drain or uncordon) on a given KRIB-built Kubernetes node. You must specify the action you want taken via the krib/operate-action Param. If nothing is specified, the default action is to drain the node.
In addition, you may set the following Params to alter the behavior of this stage (an example follows the note below):
krib/operate-action - action to take (cordon or uncordon)
krib/operate-on-node - a Kubernetes node name to operate on
krib/operate-options - command line arguments to pass to the kubectl command for the action
- !!! note
DRAIN NOTES: this Stage does a few things that MAY BE VERY BAD !!
1. service pods are ignored for the drain operation
2. --delete-local-data is used to evict pods using local storage
Default options passed to the drain operation are --ignore-daemonsets --delete-local-data. If you override these values (by setting krib/operate-options), you MAY NEED to re-specify them; otherwise, the Node will NOT be drained properly.
These options may result in your data being nuked.
UNCORDON NOTES: typically does not require additional options.
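As a sketch of the Params above, a drain might be configured on a Machine (or in a Profile attached to it) roughly like this. The node name is hypothetical, and the options simply restate the documented drain defaults plus one extra kubectl flag:

```yaml
# Illustrative Param values for a drain via krib-operate; set these on the
# Machine or in a Profile attached to it.
krib/operate-action: drain
krib/operate-on-node: worker-03.example.local   # hypothetical node name
# Restates the documented drain defaults plus an extra kubectl drain flag
krib/operate-options: "--ignore-daemonsets --delete-local-data --grace-period=60"
```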
22.31.1.3.14. krib-operate-cordon¶
This stage runs a Cordon operation on a given KRIB-built Kubernetes node. It uses the krib-operate-cordon Profile.
In addition, you may set the following Params on a Machine object to override the default behaviors of this stage:
krib/operate-action - action to take (cordon or uncordon)
krib/operate-on-node - a Kubernetes node name to operate on
krib/operate-options - command line arguments to pass to the kubectl command for the action
If the krib/operate-on-node Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows cordoning a node remotely.
22.31.1.3.15. krib-operate-delete¶
This stage runs a Delete node operation on a given KRIB-built Kubernetes node. It uses the krib-operate-delete Profile.
In addition, you may set the following Params on a Machine object to override the default behaviors of this stage:
krib/operate-action - action to take (cordon or uncordon)
krib/operate-on-node - a Kubernetes node name to operate on
krib/operate-options - command line arguments to pass to the kubectl command for the action
If the krib/operate-on-node Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows deleting a node remotely.
WARNING: THIS OPERATE DESTROYS A KUBERNETES NODE!
Presumably, you want to krib-operate-drain the node first to remove it from the cluster and drain its workload to other cluster workers prior to deleting the node.
22.31.1.3.16. krib-operate-drain¶
This stage runs a Drain operation on a given KRIB-built Kubernetes node. It uses the krib-operate-drain Profile.
In addition, you may set the following Params on a Machine object to override the default behaviors of this stage:
krib/operate-action - action to take (cordon or uncordon)
krib/operate-on-node - a Kubernetes node name to operate on
krib/operate-options - command line arguments to pass to the kubectl command for the action
If the krib/operate-on-node Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows draining a node remotely.
- !!! warning
This Stage does a few things that MAY BE VERY BAD !!
1. service pods are ignored for the drain operation
2. --delete-local-data is used to evict pods using local storage
Default options passed to the drain operation are --ignore-daemonsets --delete-local-data. If you override these values (by setting krib/operate-options), you MAY NEED to re-specify them; otherwise, the Node will NOT be drained properly.
These options may result in your data being nuked.
22.31.1.3.17. krib-operate-uncordon¶
This stage runs an Uncordon operation on a given KRIB-built Kubernetes node. This returns a Node back to service in a Kubernetes cluster after it has previously been drained. It uses the krib-operate-uncordon Profile.
In addition, you may set the following Params on a Machine object to override the default behaviors of this stage:
krib/operate-action - action to take (cordon or uncordon)
krib/operate-on-node - a Kubernetes node name to operate on
krib/operate-options - command line arguments to pass to the kubectl command for the action
If the krib/operate-on-node Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows uncordoning a node remotely.
The default options passed to the uncordon operation are "" (empty).
22.31.1.3.18. krib-pkg-prep¶
Simple helper stage to install prerequisite packages prior to doing the Kubernetes package installation. This just helps get a Live Boot set of hosts (e.g. Sledgehammer Discovered) prepped a little faster with packages in some use cases.
22.31.1.3.19. krib-rook-ceph¶
Installs and runs Rook Ceph install
22.31.1.3.20. krib-runtime-install¶
This stage allows for the installation of multiple container runtimes. The single task (krib-runtime-install) which it executes, will launch further tasks based on the value of krib/container-runtime. Currently docker and containerd are supported, although the design is extensible.
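For example, selecting containerd could look like the following Param setting (a sketch; docker and containerd are the values this stage documents as supported):

```yaml
# Choose the runtime that krib-runtime-install will lay down
krib/container-runtime: containerd
```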
22.31.1.3.21. krib-set-time¶
Helper stage to set time on the machine - DEV
22.31.1.3.22. krib-sonobuoy¶
Installs and runs Sonobuoy after a cluster has been constructed. This stage is idempotent and can be run multiple times. The purpose is to ensure that the KRIB cluster conforms with standards.
Credentials are required so that the results of the run can be pushed back to DRP files.
Roadmap items:
eliminate need for DRPCLI credentials
make “am I running” detection smarter
22.31.1.4. tasks¶
The content package provides the following tasks.
22.31.1.4.1. consul-agent-config¶
Configures consul agents, to be used by Vault against a consul server cluster
22.31.1.4.2. consul-agent-install¶
Installs (but does not configure) consul in agent mode, to be used as an HA backend to Vault
22.31.1.4.3. consul-server-config¶
Configures a consul server cluster, to be used as an HA backend to Vault
22.31.1.4.4. consul-server-install¶
Installs (but does not configure) consul in server mode, to be used as an HA backend to Vault
22.31.1.4.5. containerd-install¶
Installs containerd using O/S packages
22.31.1.4.6. docker-install¶
Installs Docker using O/S packages
22.31.1.4.7. etcd-config¶
Sets Param: etcd/servers
If installing Kubernetes via Kubeadm, make sure you install a supported version!
This uses the Digital Rebar Cluster pattern so etcd/cluster-profile must be set
22.31.1.4.8. k3s-config¶
Sets Param: krib/cluster-join, krib/cluster-admin-conf
Configure K3s using built-in commands
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set
The server is set up to also be an agent; all machines run workloads.
- !!! warning
Must NOT set etcd/cluster-profile when installing k3s; see the profile sketch below.
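A minimal sketch of a K3s cluster profile consistent with the warning above might look like this (the profile name is hypothetical); note that only krib/cluster-profile is set:

```yaml
---
Name: my-k3s-cluster            # hypothetical profile name
Description: K3s cluster profile (sketch)
Params:
  krib/cluster-profile: my-k3s-cluster
  # Deliberately no etcd/cluster-profile - it must NOT be set for k3s
```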
22.31.1.4.9. krib-config¶
Sets Param: krib/cluster-join, krib/cluster-admin-conf
Configure Kubernetes using Kubeadm
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set
22.31.1.4.10. krib-contrail¶
Installs Contrail via kubectl from the contrail.cfg template.
Runs on the master only.
The template relies on the cluster VIP as the master IP address.
22.31.1.4.11. krib-dev-hard-reset¶
Clears Created Params: krib/, etcd/
22.31.1.4.12. krib-dev-reset¶
Clears Created Params: krib/, etcd/
22.31.1.4.13. krib-helm¶
Installs Helm and runs helm init (which installs Tiller) on the leader. Installs Charts defined in helm/charts.
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.
The install checks to see if tiller is running and may skip initialization.
22.31.1.4.14. krib-helm-charts¶
Installs Charts defined in helm/charts.
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.
The install checks to see if tiller is running and may skip initialization.
22.31.1.4.15. krib-helm-init¶
Installs Helm and runs helm init (which installs Tiller) on the leader.
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.
The install checks to see if tiller is running and may skip initialization.
The tasks only run on the leader so it must be included in the workflow. All other machines will be skipped so it is acceptable to run the task on all machines in the cluster.
22.31.1.4.16. krib-ingress-nginx¶
Sets Param: ingress/ip-address
Install/config ingress-nginx and optional cert-manager
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set
22.31.1.4.17. krib-ingress-nginx-tillerless¶
Sets Param: ingress/ip-address
Install/config ingress-nginx and optional cert-manager
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set
22.31.1.4.18. krib-kubevirt¶
Installs KubeVirt on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.
The install checks to see if KubeVirt is running and may skip initialization.
Recommendation: you may want to add intel_iommu=on to the kernel-console param, as sketched below.
The Config is provided from the kubevirt-configmap.cfg.tmpl template instead of being downloaded from github. Version updates should be reflected in the template. This approach allows for parameterization of the configuration map.
The kubectl tasks only run on the leader so it must be included in the workflow. All other machines will run virt-host-validate so it is important to run the task on all machines in the cluster.
At this time, virtctl is NOT installed on the cluster.
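As a hedged sketch of the intel_iommu=on recommendation above, the kernel-console Param might carry something like the following; the existing console arguments are placeholders, and only the appended intel_iommu=on comes from the recommendation:

```yaml
# Placeholder console arguments with intel_iommu=on appended for KubeVirt hosts
kernel-console: "console=tty0 console=ttyS0,115200n8 intel_iommu=on"
```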
22.31.1.4.19. krib-logging¶
Installs fluent-bit for aggregation of cluster logging to a graylog server
This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.
The install checks to see if tiller is running and may skip initialization.
22.31.1.4.20. krib-pkg-prep¶
Installs prerequisite OS packages prior to starting KRIB install process. In some use cases this may be a faster pattern than performing the steps in the standard templates.
For example, for Sledgehammer Discovered nodes, add the krib-pkg-prep stage. As the machine finishes prep, you can move on to setting up other things before kicking off the KRIB workflow.
Uses packages listed in the 'default' Schema section of the Param krib/packages-to-prep. You can override this list by setting the Param in a Profile or directly on the Machines you want it applied to (a sketch follows below).
Packages MUST exist in the repositories on the Machines already.
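A sketch of such an override follows. The package names are illustrative assumptions, and as noted they must already be available in the repositories configured on the Machines:

```yaml
# Illustrative override of the prep package list (package names are assumptions)
krib/packages-to-prep:
  - curl
  - socat
  - conntrack-tools
```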
22.31.1.4.21. krib-runtime-install¶
Installs a container runtime
22.31.1.4.22. krib-sonobuoy¶
Installs Sonobuoy and runs it against the cluster on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.
- !!! note
Sonobuoy may take over an HOUR to complete. The task will be in process during this time.
The tasks only run on the leader so it must be included in the workflow. All other machines will be skipped so it is acceptable to run the task on all machines in the cluster.
22.31.1.4.23. kubernetes-install¶
Downloads Kubernetes installation components from repos.
This task relies on the O/S packages being updated and accessible.
- !!! note
Access to update repos is required!
22.31.1.4.24. vault-config¶
Configures a vault backend (using consul for storage) for secret encryption
22.31.1.4.25. vault-install¶
Installs (but does not configure) Vault, which uses consul as its HA storage backend
22.31.1.4.26. vault-kms-plugin¶
Configures a vault plugin for secret encryption