23.1.5. 1020 Build a Multi-Cloud Cluster using Pre-Defined Terraform

23.1.5.1. Overview

  • Id: 1020

  • Time: 10 Minutes

  • Enabled: Yes

  • Difficulty: introductory

  • Tags: terraform, cloud, self-service

  • Concepts: clusters, resource brokers

Video Link

23.1.5.2. Objective

Use Cluster Pipelines to build a dynamic set of machines from pre-defined Terraform plans without writing any plans yourself

Business ROI: a multi-cloud self-service process lets teams share and reuse managed Terraform plans

23.1.5.3. Prerequisites

Required Labs:

  • 1010

Additional Checklist Items:

  • Credentials for at least one Cloud (varies depending on the cloud)

23.1.5.3.1. Summary

23.1.5.3.1.1. Add a Resource Broker

  1. Navigate to the Resource Brokers table and click the Add button

  2. Select the appropriate *-cloud Resource Profile and name your broker lab1020-broker

  3. Provide the information for the Required Params

    These vary depending on the cloud you are using

    Design Note: By setting credentials here, the broker’s cloud credential profile handles the security/authentication parameters for all clusters that use the broker.

  4. Review the information for the Optional Params (clicking reveals the default)

    For AWS, the “rsa/key-user” Param should be set to “ec2-user”

  5. Save the Broker

  6. Wait for the Broker to change into Work Order mode

    Troubleshooting note: the broker parameters are not exercised in this phase; consequently, configuration mistakes will not be revealed until the next step.

  7. Optional Setup

    If you’d like to review the generated Terraform Plans (they are not normally stored), add the Param “terraform/debug-plan” to the broker and set it to true. Plans are stored with the Terraform State on the invoking cluster.
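
The plans stored by “terraform/debug-plan” are ordinary Terraform HCL. The actual generated plan varies by cloud and by the broker’s Params; the fragment below is purely illustrative (the provider settings, AMI id, and resource names are made up, not what Digital Rebar emits) to show the kind of content you can expect to find:

```terraform
# Illustrative only: the generated plan will differ in naming and inputs.
provider "aws" {
  region = "us-west-2" # assumption: region is supplied by a broker Param
}

resource "aws_instance" "lab1020" {
  count         = 3                       # driven by cluster/count
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.micro"

  tags = {
    Name = "lab1020-${count.index}"
  }
}
```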

23.1.5.3.1.2. Create the lab1020 Cluster

  1. Navigate to the Clusters table and click the Add button

    Name your Cluster lab1020

    Choose the lab1020-broker for the “broker/name” Parameter.

    Update the cluster/count to match your planned cluster size.

    Save the Cluster and allow it to progress to Work Order mode.

  2. Observe the Cluster’s Activity as the pipeline builds the Cluster through the “cluster-provision” task
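
The same settings the Add Cluster form collects could also be assembled programmatically. This is a hypothetical sketch of that payload; the field names “broker/name” and “cluster/count” come from the steps above, while the surrounding JSON shape is an assumption, not the documented Digital Rebar API schema:

```python
# Hypothetical sketch: assembles the values entered in the Add Cluster form.
# The outer JSON shape is an assumption, not the official API schema.
def build_cluster_payload(name, broker, count):
    """Collect the cluster name and the Params set in the lab steps."""
    return {
        "Name": name,
        "Params": {
            "broker/name": broker,    # which resource broker to use
            "cluster/count": count,   # planned cluster size
        },
    }

payload = build_cluster_payload("lab1020", "lab1020-broker", 3)
print(payload["Params"]["broker/name"])  # prints "lab1020-broker"
```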

23.1.5.3.1.3. Review the Cluster

  1. From Resource Brokers

    Observe the Activity to see Terraform being run during the terraform-apply task.

  2. From Work Orders activity wait for the terraform-apply to complete.

    At this point, you should be able to see the instances created in the target cloud.

  3. From Machines watch for instances to register

    The terraform-apply task adds machines into Digital Rebar before they are created by the Cloud Provider so that they can join up and have pre-defined operations ready as soon as they are provisioned.

  4. Observe the newly created Machines that are part of the new cluster.

  5. Wait for the machines to be created and automatically connect and start processing their own pipelines. This may take several minutes depending on the cloud.

    Troubleshooting note: if the machine does not join automatically, you can run the `broker-start-agents-via-ansible-joinup Blueprint <https://portal.rackn.io/#/e/0.0.0.0/blueprints/broker-start-agents-via-ansible-joinup/resources>`__ from the resource broker to try to start the join-up via Ansible.

23.1.5.3.1.4. Resize the Cluster

  1. From your Cluster after the Pipeline completes

  2. Note that the Cluster is now in “Work Order” mode.

  3. Look at the “terraform/tfinfo” Param to review the Terraform state file that will be used when updating or removing the cluster.

  4. Note the cost calculation in inventory/cost

    The inventory/cost param has been updated as a total of all the inventory/cost values on the cluster’s machines.

  5. Change the cluster/count value.

    If zero, then all machines will be removed but the cluster will remain available for further action.

  6. Apply the universal-application-base-cluster Blueprint. This will automatically re-apply Terraform and update your cluster.

  7. From your Cloud Service Provider’s control panel, look to see the created instances.

  8. Note the updated cost calculation in inventory/cost

    This value is automatically updated during cluster provisioning.
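
The inventory/cost roll-up described above is just a total of the per-machine values. A minimal sketch of that aggregation (the machine records below are made up for illustration; the zero-default for machines that have not yet reported a cost is an assumption):

```python
def cluster_cost(machines):
    """Total the inventory/cost Param across a cluster's machines.

    Machines without a recorded cost contribute zero (an assumption
    about how missing Params are treated).
    """
    return sum(m.get("Params", {}).get("inventory/cost", 0.0) for m in machines)

# Made-up machine records for illustration
machines = [
    {"Name": "lab1020-0", "Params": {"inventory/cost": 0.0104}},
    {"Name": "lab1020-1", "Params": {"inventory/cost": 0.0104}},
    {"Name": "lab1020-2", "Params": {}},  # cost not yet reported
]
print(cluster_cost(machines))  # ~0.0208
```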

23.1.5.3.1.5. Cleanup your cluster

This cluster is used in Lab 1030, so skip this step if you are continuing.

  1. From the Clusters table, select lab1020

  2. Use the Actions list to return your Cluster to Workflow mode

  3. Use the Actions list to Cleanup your cluster (the related machines will be automatically removed also)

    Cleanup is a special version of Destroy that will run and complete the on-delete-workflow before invoking Destroy.

  4. From your Cloud Service Provider’s control panel, look to see that all instances were terminated.