Provisioning GCP Resources with Terraform – Part 1

Infrastructure-as-a-Service (IaaS) cloud computing environments allow users to utilize computing infrastructure—servers, disks, networks, and so on—without significant long-term or ahead-of-time investment. This enables flexibility, both financial and operational, which is a huge benefit for many modern organizations. Adding resources, or removing unused resources to reduce costs, is always just a few clicks away. 

But IaaS has another benefit, one that is sometimes only apparent at second glance: automation. The ability to programmatically control infrastructure opens up a world of possibilities. Namely, it has enabled us engineers to describe our infrastructure as code. This code can be executed to provision (or destroy) infrastructure and ensure it is set up exactly as we need it. It can also be committed to git or other SCM tools and managed along with our application code. 

In this two-part series, I’ll cover exactly how to do this with two specific tools that work very well together: Google Cloud Platform, as our Infrastructure as a Service (IaaS) provider, and Terraform, as our Infrastructure as Code (IaC) tool. For Part 1, I’ll be sticking to the basics. 

What Is Terraform?

Terraform by HashiCorp is a software tool that manages infrastructure by describing it as code. It is open-source and enjoys a thriving ecosystem, which enables it to support the automation of hundreds of services, tools, and infrastructure providers. 

With Terraform, you describe what your infrastructure should be like using a declarative language called HCL. Terraform then takes care of comparing the current state of your infrastructure with the desired state and uses the Cloud vendor’s APIs to ensure the two match. This is done in a reproducible, traceable, and clear way. Given the same input (Terraform files and input variables), you will always get the same output (infrastructure). Terraform will also ensure resources are not duplicated due to repeating runs (this is referred to as idempotency) and will manage dependencies between resources; for example, it will not try to attach a disk to a machine that doesn’t exist yet.

It’s worth mentioning that Terraform supports many types of resources—not just machine instances, images, load balancers, or storage buckets, but also less obvious resources, such as an alerting rule in a monitoring system or a routing table entry.

Getting Up & Running

To follow the examples in this post, you’ll need to set up your local environment to connect to Google Cloud and have Terraform installed. You’ll also need a Google Cloud project set up.

NOTE: You may be charged for Google Cloud resources created during this tutorial and should make sure to destroy your GCP project once it is done. You may also want to take advantage of Google Cloud’s free tier and credit offering for new accounts.
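One concrete way to clean up: once the gcloud CLI (installed below) is configured, the entire project can be deleted in a single command. A sketch, assuming the tfm-tutorial project ID used throughout this tutorial:

```shell
# Delete the tutorial project and every resource in it (irreversible).
# --quiet skips the interactive confirmation prompt.
gcloud projects delete tfm-tutorial --quiet
```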

Installing Terraform

Installing Terraform should be easy in most environments; simply follow the official Terraform tutorial. To ensure Terraform is installed and working, run the terraform version command:

$ terraform version

Terraform v0.14.5

Setting Up the Google Cloud Environment

Google Cloud Platform organizes resources in projects. Almost every Google Cloud resource is created within the scope of a project, which can also have its own specific billing and authorization setup. This comes in handy, as you can experiment without affecting any other environment your Google user has access to and simply delete the project when you’re done.

If you’ve never used Google Cloud Platform before, try logging in to the Google Cloud Console. You may have to create a new Google account or complete some required setup before you can begin using GCP.

Install gcloud

While it is not required in order to use Terraform, you should install the Google Cloud SDK on your local machine so you can create and manage your GCP project and control resources directly from the command line. To verify you have the Google Cloud SDK installed, run:

$ gcloud version

Google Cloud SDK 325.0.0

You may see a note about available updates. It’s a good idea to follow the instructions at this stage to ensure you have the latest version of all Google Cloud SDK components installed.
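If you do see such a note, the SDK ships with its own updater:

```shell
# Update all installed Google Cloud SDK components in place.
gcloud components update
```

Note that if you installed the SDK through a system package manager such as apt, this command is disabled and you should update through the package manager instead.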

Authenticate gcloud with Google

To check if you’re interacting with Google Cloud as the right user:

      • Run gcloud auth list to list logged-in accounts and, if needed, gcloud config set account to switch to the correct account:

$ gcloud auth list

   Credentialed Accounts



$ gcloud config set account [ACCOUNT]

Updated property [core/account].

      • If the right account is not listed, run gcloud auth login. This will open a browser window and allow you to log in to your account of choice.

You will need a new Google Cloud project to use during this tutorial, so run:

$ gcloud projects create tfm-tutorial \

    --name "Playing with Terraform"


If you’ve already created a dedicated project via the Google Cloud console and want to switch the SDK to use it, run:

$ gcloud projects list


PROJECT_ID           NAME                    PROJECT_NUMBER

ageless-fire-295512  My First Project        9876543212345

tfm-tutorial         Playing with Terraform  1234567898765

$ gcloud config set project tfm-tutorial

Updated property [core/project].

Note the project ID of the project you’re using (in this case, tfm-tutorial). You will need it later.

After you’ve created your project, an important step is to enable the Compute Engine API for it (later on, you may also need to do this for additional APIs). Go to the Compute Engine API page in the Google Cloud console, select your newly created project from the top navigation bar, and click the big blue “Enable” button.
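If you prefer to stay in the terminal, the same API can be enabled with gcloud (assuming the tfm-tutorial project ID from earlier):

```shell
# Enable the Compute Engine API for the tutorial project.
gcloud services enable compute.googleapis.com --project=tfm-tutorial
```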

Your First Terraform Managed GCE VM

Ok, let’s get to it. You’ll be creating a basic project by launching a single Google Compute Engine VM managed by Terraform. 

Typically, Terraform files related to a specific code or project are kept in a place that makes sense, either in a directory under your project root or, in some cases, in a separate source tree or repository. In my opinion, it’s best to keep your infrastructure close to the code. So go ahead and create a directory for your mock project:

$ mkdir tfm-tutorial && cd tfm-tutorial

$ echo "# My mock project" > README.md

This is where your project will live. In a real-world scenario, this is where you’d place your application source code, and probably where you’d run git init to create a repository. 

Place your Terraform files in a dedicated directory under your project root:

$ mkdir terraform && cd terraform

The files you create in this directory should be committed to your source control repository along with the rest of the project. 

For your first Terraform file, create a file named main.tf in the current directory with the following content:

provider "google" {
  project = "tfm-tutorial"
  region  = "us-central1"
}

resource "google_compute_instance" "my_vm" {
  name         = "tf-tutorial-vm"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }
}

This file sets the basic configuration for Terraform’s google provider and also defines your first resource—a GCE instance you’ve identified as “my_vm”. But before you run this, let me break it down a bit:

      • The provider block defines your GCP project to use and a default GCP region.
      • The google_compute_instance resource block defines a GCE machine instance of type f1-micro (you’ll use this type, as it may be eligible for free use) with a Debian image. 
      • The instance will be attached to the default GCP network.
      • You can configure a lot more, but for now, you’ll just use the default settings for everything else to keep things simple.
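Before moving on, Terraform itself can sanity-check this file. A quick sketch (note that validate requires the provider plugins to have been installed with terraform init first):

```shell
# Normalize the formatting of all .tf files in the current directory.
terraform fmt
# Check the configuration for syntax and consistency errors,
# e.g. a misspelled argument or an unclosed block.
terraform validate
```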

Providing Terraform with GCP Credentials

Are we ready to run this? Well… almost. First, you need to set up a GCP Service Account—this is an API-only account that will be used to provision your infrastructure for this project.

To create a service account, run:

$ gcloud iam service-accounts create tfm-tutorial \

    --display-name "Tutorial Account" 

Created service account [tfm-tutorial].

$ gcloud iam service-accounts list

DISPLAY NAME      EMAIL                                              DISABLED

Tutorial Account  tfm-tutorial@tfm-tutorial.iam.gserviceaccount.com  False

Note the email address for your service account user. This is a fully qualified identifier for your account, which you’ll need soon. 

Next, you have to provide this service account with permissions to manage pretty much every resource in your project. For the sake of simplicity, you’ll just grant the account the owner role for your project (yes, there might be more granular ways to do this, but for now, this will work):

$ gcloud projects add-iam-policy-binding tfm-tutorial --role=roles/owner \

    --member="serviceAccount:tfm-tutorial@tfm-tutorial.iam.gserviceaccount.com"

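To double-check that the binding took effect, you can print the project’s IAM policy; the service account should appear as a member under roles/owner:

```shell
# Show all role bindings on the tutorial project.
gcloud projects get-iam-policy tfm-tutorial
```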
Now, create a secret key file for this account, named account-key.json, in the current directory. This key will be used by Terraform to authenticate as your service account:

$ gcloud iam service-accounts keys create account-key.json \

    --iam-account=tfm-tutorial@tfm-tutorial.iam.gserviceaccount.com

NOTE: Keep this file safe, and never commit it to SCM. If leaked, it can provide access to your entire GCP project. 

To use this key for all upcoming Terraform operations, run:

$ export GOOGLE_APPLICATION_CREDENTIALS=account-key.json

You’ll need to re-run this command whenever you start a new terminal/shell. 
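To avoid retyping it, you can persist the variable in your shell profile. A sketch assuming bash and that account-key.json sits in the current directory (adjust the path and profile file to your setup):

```shell
# Persist the credentials variable for future shells; an absolute path
# makes it work regardless of the directory you start in.
echo "export GOOGLE_APPLICATION_CREDENTIALS=$PWD/account-key.json" >> ~/.bashrc
```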

Let Terraform Get to Work

Now, you’re ready to find out what Terraform will do with your configuration. First, run terraform init once in this directory; it downloads the google provider plugin that Terraform needs. Then, run the plan command to see how Terraform plans to provision the resources you’ve described:

$ terraform plan

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

  + create

Terraform will perform the following actions:

  # google_compute_instance.my_vm will be created

  + resource "google_compute_instance" "my_vm" {

  ... ...


Plan: 1 to add, 0 to change, 0 to destroy.

The key thing here is that Terraform is telling you that it wants to create one new resource. This of course makes sense—you defined one resource (your VM instance), and it has not been created yet. Take a moment to review the entire output of this command; this will help you get a clear understanding of what Terraform is about to do.
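If you want a guarantee that exactly the plan you reviewed is what gets executed, plan can save its result to a file that apply then consumes:

```shell
# Write the computed plan to a local file instead of just printing it.
terraform plan -out=tfplan
# Apply exactly that saved plan; no re-planning, no confirmation prompt.
terraform apply tfplan
```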

Now, it’s time to run the apply command to provision your infrastructure. Terraform will show you its plan again and request that you type yes to approve its actions.

Review, approve, sit back, and enjoy the show: 

$ terraform apply

  … …

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The last line provides a summary of all the actions taken during the execution of the plan. 
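Terraform records everything it created in its state file, which you can inspect at any time:

```shell
# List every resource Terraform currently manages in this configuration.
terraform state list
# Print the full attributes recorded in state, including the
# instance's assigned IP addresses.
terraform show
```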

To see that Terraform actually created a VM instance for you, go ahead and run:

$ gcloud compute instances list

NAME            ZONE           MACHINE_TYPE  STATUS

tf-tutorial-vm  us-central1-a  f1-micro      RUNNING

As you can see, your tf-tutorial-vm machine is up and running and has been assigned a public IP address. While this machine doesn’t do much (you’ll fix that in Part 2), it’s easy to see how powerful this type of automated provisioning can be.
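You can even log in to the new machine straight from gcloud, which takes care of generating and propagating SSH keys for you:

```shell
# Open an SSH session to the tutorial VM in its zone.
gcloud compute ssh tf-tutorial-vm --zone=us-central1-a
```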

Cleaning Up

This seems like a good place to end the first part of this tutorial. But first, let’s leave your campsite clean. Run the following command to destroy all the resources you have just created: 

$ terraform destroy

Just like with apply, you will be shown a plan and asked to confirm it. If you’ve followed this tutorial properly, expect to see Terraform planning to destroy the one resource you created: your VM instance.

Running gcloud compute instances list again should confirm that there are no instances up and running.

What’s Next?

In Part 1 of this series, I covered the essentials of Terraform and Google Cloud and helped you create the most basic resource.

In the next part, I’ll dive deeper into both GCP and Terraform. You’ll create more advanced GCP resources and learn to manage relationships between them. I’ll do that while demonstrating advanced Terraform features such as variables, modules, and loops.
