Infrastructure as Code – Terraform
Declarative vs imperative, providers, state, modules, workspaces, and best practices
Declarative vs Imperative
| Approach | How | Examples |
|---|---|---|
| Imperative | Tell the computer how to do it, step by step | Bash scripts, AWS CLI |
| Declarative | Tell the computer what you want; it figures out how | Terraform, CloudFormation, Kubernetes |
```bash
# Imperative (bash) – you manage state manually
aws ec2 run-instances --image-id ami-123 --instance-type t3.medium
# What if it's already running? You check first. Then update. Then handle failures.
```

```hcl
# Declarative (Terraform) – desired state
resource "aws_instance" "web" {
  ami           = "ami-123"
  instance_type = "t3.medium"
}
# Terraform figures out: create it, update it, or do nothing if already correct.
```

Core Concepts
Providers & Resources
```hcl
# Provider – plugin that knows how to talk to AWS/GCP/Azure/etc.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"  # allows 5.x but not 6.x
    }
  }
  required_version = ">= 1.6.0"
}

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      ManagedBy   = "terraform"
      Environment = var.environment
    }
  }
}

# Resource – a specific piece of infrastructure
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = { Name = "main-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id  # reference another resource
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}
```

Variables & Outputs
```hcl
variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.medium"
}

variable "allowed_ips" {
  description = "List of IPs allowed SSH access"
  type        = list(string)
  default     = []
}
```

```hcl
# outputs.tf
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}
```

```bash
# Passing variables
terraform apply -var="environment=prod"
terraform apply -var-file="prod.tfvars"
```
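Variables can also be supplied through the environment: any shell variable named `TF_VAR_<name>` is picked up automatically, and complex types are passed as HCL-syntax strings. A minimal sketch:

```bash
# Environment variables as inputs (the TF_VAR_ prefix is required)
export TF_VAR_environment=staging
export TF_VAR_allowed_ips='["10.0.0.0/8"]'
terraform plan
```

These sit near the bottom of Terraform's variable precedence order, so `-var` and `.tfvars` values override them.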
```hcl
# terraform.tfvars (auto-loaded if present)
environment   = "staging"
instance_type = "t3.large"
allowed_ips   = ["10.0.0.0/8"]
```

State
Why State Exists
Terraform keeps a state file (terraform.tfstate) that maps your configuration to real infrastructure. Without it, Terraform doesn't know what already exists and would try to create everything from scratch.
```json
// terraform.tfstate (simplified)
{
  "resources": [
    {
      "type": "aws_instance",
      "name": "web",
      "instances": [
        {
          "attributes": {
            "id": "i-0abcdef1234567890",
            "instance_type": "t3.medium",
            "ami": "ami-123"
          }
        }
      ]
    }
  ]
}
```

State Drift
Drift occurs when real infrastructure diverges from the state file, e.g. when someone changes a resource manually in the AWS console.
```bash
# Detect drift
terraform plan                # shows diff between state and real infrastructure
terraform refresh             # update state to match reality (deprecated in favor of plan -refresh-only)
terraform plan -refresh-only  # show what would be refreshed
```
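Existing resources can also be adopted declaratively: since Terraform 1.5, an `import` block in configuration does the same job as the `terraform import` CLI command, but is previewed as part of a normal plan/apply:

```hcl
# Declarative import (Terraform 1.5+) – reviewed via terraform plan
import {
  to = aws_instance.web
  id = "i-0abcdef1234567890"
}
```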
```bash
# Import existing infrastructure into state (don't manage things twice)
terraform import aws_instance.web i-0abcdef1234567890
```

Remote Backends
Never store state locally in a team environment. State contains sensitive values and needs locking.
```hcl
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    key            = "production/main/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"  # state locking
  }
}
```

```bash
# Initialize with backend
terraform init
```
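When moving from local state to a remote backend (or between backends), existing state can be carried over during initialization; these are standard Terraform CLI flags:

```bash
# Migrate existing state into the newly configured backend
terraform init -migrate-state

# Re-initialize against the new backend config without migrating state
terraform init -reconfigure
```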
```bash
# List state
terraform state list

# Show specific resource in state
terraform state show aws_instance.web

# Remove resource from state (doesn't destroy real resource)
terraform state rm aws_instance.web
```
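Renames can also be handled declaratively: since Terraform 1.1, a `moved` block records the rename in configuration, and the next plan/apply performs the state move that `terraform state mv` would otherwise do by hand:

```hcl
# Declarative rename (Terraform 1.1+)
moved {
  from = aws_instance.old
  to   = aws_instance.new
}
```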
```bash
# Move resource in state (e.g., after renaming)
terraform state mv aws_instance.old aws_instance.new
```

Modules
Modules are reusable packages of Terraform configuration.
Module Structure
```
modules/
└── vpc/
    ├── main.tf
    ├── variables.tf
    ├── outputs.tf
    └── README.md

environments/
├── dev/
│   ├── main.tf
│   └── terraform.tfvars
└── prod/
    ├── main.tf
    └── terraform.tfvars
```

Writing a Module
```hcl
# modules/vpc/variables.tf
variable "name" {
  type = string
}

variable "cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "azs" {
  type = list(string)
}

variable "public_subnets" {
  type = list(string)
}

variable "private_subnets" {
  type = list(string)
}
```

```hcl
# modules/vpc/main.tf
resource "aws_vpc" "this" {
  cidr_block = var.cidr
  tags       = { Name = var.name }
}

resource "aws_subnet" "public" {
  count             = length(var.public_subnets)
  vpc_id            = aws_vpc.this.id
  cidr_block        = var.public_subnets[count.index]
  availability_zone = var.azs[count.index]
  tags              = { Name = "${var.name}-public-${count.index + 1}" }
}
```

```hcl
# modules/vpc/outputs.tf
output "vpc_id" { value = aws_vpc.this.id }
output "public_subnet_ids" { value = aws_subnet.public[*].id }
```

Using a Module
```hcl
module "vpc" {
  source = "../../modules/vpc"

  name            = "prod-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
}

# Reference module outputs
resource "aws_instance" "web" {
  subnet_id = module.vpc.public_subnet_ids[0]
  # ...
}

# Use public registry modules
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "my-cluster"
  cluster_version = "1.29"
  # ...
}
```

Environment Separation
Option 1: Separate Directories (Recommended)
```
infrastructure/
├── modules/
│   ├── vpc/
│   └── eks/
└── environments/
    ├── dev/
    │   ├── main.tf           # calls modules with dev values
    │   ├── backend.tf        # dev state bucket
    │   └── terraform.tfvars
    ├── staging/
    └── prod/
```

Each environment has its own:

- State file (separate S3 key)
- `terraform.tfvars` with environment-specific values
- Lifecycle: it can be deployed/destroyed independently
Option 2: Workspaces
Workspaces share the same configuration but separate state.
```bash
# Create and switch workspaces
terraform workspace new dev
terraform workspace new prod
terraform workspace list
terraform workspace select prod
```

```hcl
# Use workspace name in config
resource "aws_instance" "web" {
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.small"
}
```

Caveat: because workspaces share one configuration, a single mistake can affect every environment. Separate directories are safer for prod.
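With the default local backend, each non-default workspace's state lives in its own directory under `terraform.tfstate.d/`, which is what keeps the environments' state isolated:

```
terraform.tfstate          # default workspace
terraform.tfstate.d/
├── dev/
│   └── terraform.tfstate
└── prod/
    └── terraform.tfstate
```

Remote backends apply an equivalent per-workspace separation (e.g. a workspace-specific key prefix in S3).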
Best Practices
Naming
```hcl
# Use consistent naming: {env}-{project}-{resource}
resource "aws_security_group" "web" {
  name = "${var.environment}-${var.project_name}-web"
}

# Tag everything
locals {
  common_tags = {
    Environment = var.environment
    Project     = var.project_name
    ManagedBy   = "terraform"
    Owner       = "platform-team"
  }
}

resource "aws_instance" "web" {
  tags = merge(local.common_tags, {
    Name = "${var.environment}-web-server"
  })
}
```

Versioning
```hcl
# Pin provider versions – avoid surprise breaking changes
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "= 5.31.0"  # exact version for production
    }
  }
}
```
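Version constraints only bound the allowed range; the exact provider versions actually installed are recorded in the dependency lock file, `.terraform.lock.hcl`, which should be committed so every machine and CI run uses identical provider builds:

```bash
terraform init            # installs the versions recorded in .terraform.lock.hcl
terraform init -upgrade   # deliberately move to the newest versions the constraints allow
```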
```hcl
# Or use the pessimistic constraint operator
version = "~> 5.31.0"  # allows 5.31.x but not 5.32.0
```

Terraform Workflow
```bash
# Standard workflow
terraform init                 # download providers and modules
terraform validate             # check syntax
terraform fmt                  # auto-format code
terraform plan                 # preview changes
terraform apply                # apply changes (prompts for confirmation)
terraform apply -auto-approve  # skip confirmation (use in CI only)
terraform destroy              # destroy all resources (dangerous!)

# Plan output to file (for apply in CI)
terraform plan -out=tfplan
terraform apply tfplan

# Target specific resource
terraform plan -target=aws_instance.web
terraform apply -target=aws_instance.web
```

Providers & Module Registry
```hcl
# Find public modules: https://registry.terraform.io/

# Popular AWS modules
module "vpc" { source = "terraform-aws-modules/vpc/aws" }
module "eks" { source = "terraform-aws-modules/eks/aws" }
module "rds" { source = "terraform-aws-modules/rds/aws" }
module "alb" { source = "terraform-aws-modules/alb/aws" }
```