Terraform modules simplified.

Terraform has arguably become the de-facto standard for cloud deployment. I use it on a daily basis to deploy and destroy test and demo setups in my Oracle Cloud tenancy. Sometimes the deployment environment for a demo has too many files, or some of the files are really big and hard to read because of the number of different resources and parameters they contain. How can we make our configuration more usable? Let’s try Terraform modules and see how they work.
For our tests we are going to use Terraform v1.0.3 and Oracle Cloud Infrastructure (OCI). You will need a working OCI tenancy and a machine with Terraform installed and the required environment variables defined. The full list of required environment variables is provided in the README file in the GitHub repository.
Let’s say we have a simple demo or test configuration with a dedicated network, an internet gateway, and a VM, and we want to assign multiple security rules using security lists and maybe one or two security groups. We could include all those rules in the network configuration file, but maybe there is a better way. What if we want to reuse a similar set of security rules and security groups not only in this deployment but also share it with other stacks? That is where Terraform modules come in.

I published the sample configuration I am using here on GitHub, and you can clone it from https://github.com/gotochkin/terraform-cloud.git . We are going to use the demo01 configuration for this post, located in the “./modules-demo/demo01” folder.
Let’s clone the repository and change the directory to ./terraform-cloud/modules-demo/demo01.

[opc@gleb-bastion-ca demo01]$ tree
.
├── compute.tf
├── main.tf
├── output.tf
├── variables.tf
└── vcn.tf

0 directories, 5 files

Before running Terraform, you need to define some key environment variables which will be passed to the configuration, and install the Terraform command line from the Terraform site. The configuration was tested with version 1.0.3.

Here is the list of the environment variables I am passing to the deployment; you can also find it in the main README for the repository. You will need to provide your own values.

export TF_VAR_tenancy_ocid=ocid1.tenancy.oc1..aaaaaaaaq....
export TF_VAR_user_ocid=ocid1.user.oc1..aaaaaaaaa...
export TF_VAR_compartment_ocid=ocid1.compartment.oc1..aaaaaaaa...
export TF_VAR_fingerprint=$(cat ~/.oci/oci_api_key_fingerprint) #I put it in the file - you can export it explicitly
export TF_VAR_private_key_path=~/.oci/oci_api_key.pem
export TF_VAR_ssh_public_key=$(cat ~/.ssh/id_rsa.pub) #I've used the cat command to put it to the variable
export TF_VAR_ssh_private_key=$(cat ~/.ssh/id_rsa)  #Only if you use it (optional)
export TF_VAR_region=ca-toronto-1

In the folder we have a set of *.tf configuration files. The variables.tf file provides definitions and default values for variables, main.tf contains the provider configuration and some additional resources, compute.tf holds our VM configuration, and vcn.tf defines the main network components. The network configuration has only the VCN itself, a public subnet, an internet gateway, and a default route table with a route through the internet gateway. We didn’t specify any security lists or rules there.

# Define VCN
resource "oci_core_vcn" "moduletest_vcn" {
  cidr_block     = var.vcn_cidr_block
  dns_label      = "moduletestvcn1"
  compartment_id = var.compartment_ocid
  display_name   = "moduletest_vcn"
}
 
# A regional subnet will not specify an Availability Domain
#Public subnet
resource "oci_core_subnet" "moduletest_vcn_subnet_pub" {
  cidr_block        = var.pub_sub_cidr_block
  display_name      = "moduletest_vcn_subnet_pub"
  dns_label         = "moduletestvcn1"
  compartment_id    = var.compartment_ocid
  vcn_id            = oci_core_vcn.moduletest_vcn.id
}
 
#Internet gateway
resource "oci_core_internet_gateway" "moduletest_internet_gateway" {
  compartment_id = var.compartment_ocid
  display_name   = "moduletestInternetGateway"
  vcn_id         = oci_core_vcn.moduletest_vcn.id
}
 
#Default route table
resource "oci_core_default_route_table" "default_route_table" {
  manage_default_resource_id = oci_core_vcn.moduletest_vcn.default_route_table_id
  display_name               = "defaultRouteTable"
 
  route_rules {
    destination       = "0.0.0.0/0"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_internet_gateway.moduletest_internet_gateway.id
  }
}

And the main.tf file has the module declaration commented out for now.

# Provider and authentication details
provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}
 
data "oci_identity_compartment" "database_test_compartment" {
    #Required
    id = var.compartment_ocid
}
 
data "oci_identity_availability_domain" "ad" {
  compartment_id = var.tenancy_ocid
  ad_number      = 1
}
 
# Modules declaration 
/*
module "security_lists" {
  source = "../modules/security_lists"
  vcn_id = oci_core_vcn.moduletest_vcn.id
  compartment_ocid = var.compartment_ocid
  security_list_id = oci_core_vcn.moduletest_vcn.default_security_list_id
}
*/

Let’s run the configuration and create the resources in OCI.

[opc@gleb-bastion-ca demo01]$ terraform init
 
Initializing the backend...
...
redacted
...
[opc@gleb-bastion-ca demo01]$ terraform plan
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
...
redacted
...
[opc@gleb-bastion-ca demo01]$ terraform apply
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
...
redacted
...
oci_core_instance.moduletest_instance: Creation complete after 36s [id=ocid1.instance.oc1.ca-toronto-1.an2g6ljrvrxjjqycvwigqczdndrhz2776wxm64kbqsri5fgpbsewzegrpota]
 
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
 
Outputs:
 
primary_ip_addresses = [
  "132.145.103.8",
]
[opc@gleb-bastion-ca demo01]$

If we have a look at the network configuration, we will see the default security list with the default ingress and egress rules and no security groups.

Now let’s have a look at the module configuration in the ../modules/security_lists directory:

[opc@gleb-bastion-ca demo01]$ tree ../modules/security_lists/
../modules/security_lists/
├── security_groups.tf
├── security_lists.tf
└── variables.tf
 
0 directories, 3 files

In the security_lists.tf file we have some additional rules for the default security list:

#  Default security list
resource "oci_core_default_security_list" "default_security_list" {
  manage_default_resource_id = var.security_list_id
  display_name               = "defaultSecurityList"
 
...
redacted
...
    # allow internal network traffic
  ingress_security_rules {
    protocol  = 6         # tcp
    source    = "10.11.0.0/22"
    stateless = false
  }
    # allow all inbound icmp traffic for internal network
  ingress_security_rules {
    protocol  = 1         #icmp
    source    = "10.11.8.0/22"
    stateless = false
  }
}
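The redacted portion holds the port-specific rules. As an illustration only (this is my sketch, not necessarily what the repository contains), an ingress rule opening SSH on port 22 would look like this; in a security list, unlike an NSG rule, the port range goes directly under tcp_options:

```hcl
  # Hypothetical example: allow inbound SSH (port 22) from anywhere.
  # This block goes inside the oci_core_default_security_list resource.
  ingress_security_rules {
    protocol  = 6         # tcp
    source    = "0.0.0.0/0"
    stateless = false
    tcp_options {
      min = 22
      max = 22
    }
  }
```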

And for the security group, we add a couple of hosts allowed to access the Oracle SQL*Net port 1521:

#TNS*Net Network Security Group
 
resource "oci_core_network_security_group" "tnsnet_network_security_group" {
    #Required
    compartment_id = var.compartment_ocid
    vcn_id = var.vcn_id
    display_name = "TNSNetNSG"
}
 
# DataStream Network Security Group Rules
resource "oci_core_network_security_group_security_rule" "tnsnet_nsg_rule_01" {
    #Required
    network_security_group_id = oci_core_network_security_group.tnsnet_network_security_group.id
    direction = "INGRESS"
    protocol = 6
    #Optional
    description = "TNS*Net whitelisted IP 01"
    source = "34.67.6.157/32"
    source_type = "CIDR_BLOCK"
    tcp_options {
        #Optional
        destination_port_range {
            #Required
            max = 1521
            min = 1521
        }
    }
}
...
redacted
...

The module’s variables.tf file has the required input variables for the security lists and groups, which we need to set in the module declaration in our main.tf in the “./modules-demo/demo01” folder.

# Terraform module variables for security lists and groups
variable "vcn_id" {
  description = "vcn id"
  type        = string
}
 
variable "compartment_ocid" {
  description = "compartment id"
  type        = string
}
 
variable "security_list_id" {
  description = "ocid for the security list"
  type        = string
}

Now the question is: how can we include it in the existing configuration?
The first step is to uncomment the module block in our main.tf file.

module "security_lists" {
  source = "../modules/security_lists"
  vcn_id = oci_core_vcn.moduletest_vcn.id
  compartment_ocid = var.compartment_ocid
  security_list_id = oci_core_vcn.moduletest_vcn.default_security_list_id
}

Then we load the module:

[opc@gleb-bastion-ca demo01]$ terraform get
- security_lists in ../modules/security_lists
[opc@gleb-bastion-ca demo01]$

Now we can apply the configuration to our system.

[opc@gleb-bastion-ca demo01]$ terraform apply
oci_core_vcn.moduletest_vcn: Refreshing state... [id=ocid1.vcn.oc1.ca-toronto-1.amaaaaaavrxjjqya3vdrtyi2gs3x6moohxk6irmszaami7r3f66qh62gixuq]
oci_core_subnet.moduletest_vcn_subnet_pub: Refreshing state... [id=ocid1.subnet.oc1.ca-toronto-1.aaaaaaaaycbfledvw5t5q4irqx3queszappx4khshm6tqq3hhudmgphmxurq]
oci_core_internet_gateway.moduletest_internet_gateway: Refreshing state... [id=ocid1.internetgateway.oc1.ca-toronto-1.aaaaaaaam66plumoftjur4mljlqdomspv2ved6l56kpmdd2rd7sf74pwlt2a]
oci_core_instance.moduletest_instance: Refreshing state... [id=ocid1.instance.oc1.ca-toronto-1.an2g6ljrvrxjjqycvwigqczdndrhz2776wxm64kbqsri5fgpbsewzegrpota]
oci_core_default_route_table.default_route_table: Refreshing state... [id=ocid1.routetable.oc1.ca-toronto-1.aaaaaaaair2lbzh7jtm3fl6mpalcimn4cry62z5kohcg2wunm7ekdagiq2va]
 
...
redacted
...
 
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
 
Outputs:
 
primary_ip_addresses = [
  "132.145.103.8",
]
[opc@gleb-bastion-ca demo01]$

And if we look at the security list configuration, we can see the changes.

We can also see the security group TNSNetNSG, which we can later attach to the VM configuration.

That’s great, but what have we really achieved so far? We’ve made our code more readable and more manageable, and you can even keep the modules in separate version control repositories. But it is not only about that: our modules can be reused in different Terraform stacks. As you’ve probably noticed, we pass three variables to the module, so if we want to reuse the same security lists and groups for another network or deployment, we can simply declare the module in the new deployment and pass in the compartment, VCN, and security list IDs. Let’s try it and see how it works. We move to the “demo02” directory and build another stack using the same modules.
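For example, assuming the demo02 network resources follow the naming used for its subnet later in this post (the VCN resource name oci_core_vcn.moduletest02_vcn is my guess), the module declaration in demo02 could look like this:

```hcl
# Reuse the same security_lists module, pointing it at the demo02 network
module "security_lists" {
  source           = "../modules/security_lists"
  vcn_id           = oci_core_vcn.moduletest02_vcn.id
  compartment_ocid = var.compartment_ocid
  security_list_id = oci_core_vcn.moduletest02_vcn.default_security_list_id
}
```

Only the resource references change; the module source and logic stay exactly the same.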

We need to initialize the provider and the module again since this is a new configuration.

[opc@gleb-bastion-ca demo01]$ cd ../demo02/
[opc@gleb-bastion-ca demo02]$ terraform init
Initializing modules...
- security_lists in ../modules/security_lists
 
Initializing the backend...
 
Initializing provider plugins...
- Finding latest version of hashicorp/oci...
- Installing hashicorp/oci v4.37.0...
- Installed hashicorp/oci v4.37.0 (signed by HashiCorp)

Then we build the stack, which uses the same module but passes in parameters reflecting the new network.

[opc@gleb-bastion-ca demo02]$ terraform apply
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # oci_core_default_route_table.default_route_table will be created
  + resource "oci_core_default_route_table" "default_route_table" {
...
redacted
...
Apply complete! Resources: 9 added, 0 changed, 0 destroyed.
 
Outputs:
 
primary_ip_addresses = [
  "132.145.96.54",
]
[opc@gleb-bastion-ca demo02]$

And if we have a look at the new VCN, we can see the same security list rules and network security groups. As an addition to the configuration, I’ve attached the network security group from the module to the instance network interface.

create_vnic_details {
    subnet_id        = oci_core_subnet.moduletest02_vcn_subnet_pub.id
    display_name     = "Primaryvnic"
    assign_public_ip = true
    hostname_label   = "moduletest02"
    nsg_ids = [
      module.security_lists.tnsnet_network_security_group_id
    ]
  }

The “tnsnet_network_security_group_id” was defined in the output.tf file in the module folder, and that allowed me to use it in my VM configuration.
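The module’s output.tf isn’t shown above, but based on that output name and the security group resource in security_groups.tf, a minimal version of it would look something like this:

```hcl
# Expose the NSG OCID so calling configurations can attach it to a VNIC
output "tnsnet_network_security_group_id" {
  value = oci_core_network_security_group.tnsnet_network_security_group.id
}
```

Any value a module should share with its callers has to be declared as an output like this; otherwise it stays private to the module.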

You can see the security group is attached to the VNIC.

Can we make our configuration more dynamic and adaptable? In one of the next posts we will talk about list-type variables. Stay tuned and happy terraforming.
