23.1.5. 1020 Build a Multi-Cloud Cluster using Pre-Defined Terraform
Time: 10 Minutes
Tags: terraform, cloud, self-service
Concepts: clusters, resource brokers
Use Cluster Pipelines to build a dynamic set of machines via Terraform plans without writing the plans yourself
Business ROI: Multi-cloud self-service process allows teams to share and reuse managed Terraform plans
Additional Checklist Items:
Credentials for at least one Cloud (varies depending on the cloud)
23.1.5.1. Add a Resource Broker
Navigate to the Resource Brokers table and click the add icon
Select the appropriate *-cloud Resource Profile and name your broker (this lab uses lab1020-broker)
Provide the information for the Required Params
These vary depending on the cloud you are using
Design Note: By setting credentials here, the cloud credential profile will handle the security/authentication parameters for all clusters using the broker.
Review the information for the Optional Params (clicking reveals the default)
For aws, the “rsa/key-user” should be set to “ec2-user”
Save the Broker
Wait for the Broker to change into Work Order mode
Troubleshooting note: the broker parameters are not exercised in this phase; consequently, configuration mistakes will not be revealed until the next step.
If you’d like to review the generated Terraform Plans (they are not normally stored), add the Param “terraform/debug-plan” to the broker and set it to true. Plans are stored with the Terraform State on the invoking cluster.
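Taken together, the two broker settings called out in this lab could be expressed as a Params fragment like the following (the values are the examples from this lab; adapt the key-user to your cloud):

```json
{
  "rsa/key-user": "ec2-user",
  "terraform/debug-plan": true
}
```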
23.1.5.2. Create the lab1020 Cluster
Navigate to the Clusters table and click the add icon
Name your Cluster
Enter lab1020-broker for the “broker/name” Parameter
Update the cluster/count to match your planned cluster size.
Save the Cluster and allow it to progress to Work Order mode.
Observe the Cluster’s Activity as it builds the Cluster via the “cluster-provision” task
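The cluster configuration above amounts to two Params. A sketch of what they might look like (the broker name matches the broker created earlier; the count of 3 is only an example):

```json
{
  "broker/name": "lab1020-broker",
  "cluster/count": 3
}
```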
23.1.5.3. Review the Cluster
From Resource Brokers, observe the Activity to see Terraform being run during the terraform-apply task
From the Work Orders activity, wait for the terraform-apply Work Order to complete
At this point, you should be able to see the instances created in the target cloud.
From Machines, watch for instances to register. The terraform-apply task will add machines into Digital Rebar before they are created by the Cloud Provider so that they can join up and have pre-defined operations as soon as they are provisioned.
Observe the newly created Machines that are part of the new cluster.
Wait for the machine to be created and automatically connect and start processing its own pipeline. This may take several minutes depending on the cloud.
Troubleshooting note: if the machine does not join automatically, you can run the `broker-start-agents-via-ansible-joinup Blueprint <https://portal.rackn.io/#/e/0.0.0.0/blueprints/broker-start-agents-via-ansible-joinup/resources>`__ from the resource broker to try to start the join-up via Ansible.
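The “wait for machines to join” step above can also be scripted against the Digital Rebar API. A minimal sketch, assuming a server at a hypothetical endpoint, a bearer token, and the standard /api/v3/machines listing (a machine’s Runnable field turns true once its agent has joined); the endpoint, token, and expected count are all placeholders to adapt to your install:

```python
# Sketch: poll Digital Rebar until the expected number of machines have
# joined and report Runnable. Endpoint, token, and count are assumptions.
import json
import time
import urllib.request

def joined(machines, expected_count):
    """True once at least expected_count machine records report Runnable."""
    return sum(1 for m in machines if m.get("Runnable")) >= expected_count

def fetch_machines(endpoint, token):
    """List all machines from the /api/v3/machines endpoint."""
    req = urllib.request.Request(
        f"{endpoint}/api/v3/machines",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for_join(endpoint, token, expected_count, interval=30):
    """Block until expected_count machines have joined; clouds can take minutes."""
    while not joined(fetch_machines(endpoint, token), expected_count):
        time.sleep(interval)

# Example call (hypothetical endpoint/token; cluster/count of 3 assumed):
# wait_for_join("https://drp.example:8092", "TOKEN", 3)
```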
23.1.5.4. Resize the Cluster
From your Cluster after the Pipeline completes
Note that the Cluster is now in “Work Order” mode.
Look for the “terraform/tfinfo” Param to review the Terraform state file that will be used when updating or removing the cluster.
Note that the cost calculation in the “inventory/cost” Param has been updated as a total of all the “inventory/cost” values on the cluster’s machines.
Update the “cluster/count” Parameter to the new size. If zero, then all machines will be removed but the cluster will remain available for further action.
Run the universal-application-base-cluster Blueprint. This will automatically re-apply Terraform and update your cluster.
From your Cloud Service Provider’s control panel, look to see the created instances.
Note the updated cost calculation in the “inventory/cost” Param. This value is automatically updated during cluster provisioning.
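The cost roll-up described above can be sketched in a few lines. This is only an illustration of the stated behavior (the cluster total is the sum of the per-machine “inventory/cost” values); the machine names and cost figures below are made up:

```python
# Sketch: how a cluster's "inventory/cost" total is derived, assuming it is
# the sum of "inventory/cost" across the cluster's machine records.
def cluster_cost(machines):
    """Sum the inventory/cost Param over a list of machine records."""
    return sum(m.get("Params", {}).get("inventory/cost", 0.0) for m in machines)

# Hypothetical per-hour costs for a three-machine cluster:
machines = [
    {"Name": "lab1020-0", "Params": {"inventory/cost": 0.0416}},
    {"Name": "lab1020-1", "Params": {"inventory/cost": 0.0416}},
    {"Name": "lab1020-2", "Params": {}},  # cost not yet reported
]
print(round(cluster_cost(machines), 4))  # → 0.0832
```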
23.1.5.5. Cleanup your cluster
This cluster is used in Lab 1030, so skip this step if you are continuing.
Use the Actions list to return your Cluster to Workflow mode
Use the Actions list to Cleanup your cluster (the related machines will also be removed automatically)
Cleanup is a special version of Destroy that will run and complete the on-delete-workflow before invoking Destroy.
From your Cloud Service Provider’s control panel, look to see that all instances were terminated.