Terraform’s HCL syntax is learnable, but knowing the right resources, their required arguments, and how to structure modules for reuse takes experience. Claude Code writes accurate Terraform configurations because it knows the AWS, GCP, and Azure provider resources and generates configurations that plan and apply correctly.
This guide covers using Claude Code for Terraform: writing resource configurations, module structure, state management, common AWS patterns, and debugging plan errors.
Setting Up Claude Code for Terraform
Cloud and pattern context matters:
# Infrastructure Context
## Stack
- Terraform 1.7, AWS provider ~> 5.0
- State: S3 backend + DynamoDB state lock
- Environment structure: modules + environment directories (not workspaces)
- Naming: {project}-{environment}-{resource} (e.g., acme-prod-db)
## AWS Account Structure
- Accounts: dev (123456789), staging (234567890), prod (345678901)
- VPC CIDR: 10.0.0.0/16 (each env in own VPC)
- Regions: us-east-1 (primary), us-west-2 (DR)
## Conventions
- All resources must have tags: Project, Environment, ManagedBy=terraform, Owner
- Security groups: deny all by default, open minimum required ports
- No public subnets for databases — always private subnets
- IAM: least-privilege policies, no admin roles except break-glass
## Never
- Hardcode secrets in .tf files — use Secrets Manager or SSM
- Open 0.0.0.0/0 on any port except HTTP/HTTPS
- Put resources directly in root module — use modules
See the CLAUDE.md setup guide for full configuration.
Writing Terraform Configurations
VPC and Networking
Create a VPC with public and private subnets across 3 availability zones.
Public subnets for load balancers, private for application and database tiers.
NAT Gateway for private subnet internet access.
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = "${var.project}-${var.environment}"
cidr = var.vpc_cidr
azs = local.azs # Same three AZs the subnet lists iterate over
private_subnets = [for i, az in local.azs : cidrsubnet(var.vpc_cidr, 8, i)]
public_subnets = [for i, az in local.azs : cidrsubnet(var.vpc_cidr, 8, i + 10)]
database_subnets = [for i, az in local.azs : cidrsubnet(var.vpc_cidr, 8, i + 20)]
enable_nat_gateway = true
single_nat_gateway = var.environment != "prod" # Multi-AZ NAT in prod only
enable_dns_hostnames = true
enable_dns_support = true
# DB subnet group for RDS
create_database_subnet_group = true
tags = local.common_tags
}
data "aws_availability_zones" "available" {
state = "available"
}
locals {
azs = slice(data.aws_availability_zones.available.names, 0, 3)
common_tags = {
Project = var.project
Environment = var.environment
ManagedBy = "terraform"
Owner = var.team
}
}
single_nat_gateway = var.environment != "prod" provisions a single NAT gateway in dev/staging to save cost and one NAT gateway per AZ in production for resilience. cidrsubnet() allocates subnet CIDRs systematically from the VPC block.
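To make the cidrsubnet() arithmetic concrete, here is what those three expressions evaluate to, assuming the example 10.0.0.0/16 VPC CIDR from the context file and three AZs:

```hcl
# cidrsubnet("10.0.0.0/16", 8, i) adds 8 bits to the prefix, carving /24s.
# The offsets (i, i + 10, i + 20) keep each tier in its own CIDR range:
#   private_subnets  = ["10.0.0.0/24",  "10.0.1.0/24",  "10.0.2.0/24"]
#   public_subnets   = ["10.0.10.0/24", "10.0.11.0/24", "10.0.12.0/24"]
#   database_subnets = ["10.0.20.0/24", "10.0.21.0/24", "10.0.22.0/24"]
```

The gaps between offset ranges leave room to add AZs to a tier later without renumbering.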
ECS Fargate Service
Deploy a containerized app on ECS Fargate.
3 tasks, ALB in front, auto-scaling on CPU.
Secrets from Secrets Manager.
resource "aws_ecs_cluster" "main" {
name = "${local.name_prefix}-cluster"
setting {
name = "containerInsights"
value = "enabled"
}
tags = local.common_tags
}
resource "aws_ecs_task_definition" "app" {
family = "${local.name_prefix}-app"
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = var.task_cpu
memory = var.task_memory
execution_role_arn = aws_iam_role.ecs_execution.arn
task_role_arn = aws_iam_role.ecs_task.arn
container_definitions = jsonencode([{
name = "app"
image = "${var.ecr_repository_url}:${var.image_tag}"
portMappings = [{
containerPort = 8080
protocol = "tcp"
}]
secrets = [
{
name = "DATABASE_URL"
valueFrom = aws_secretsmanager_secret.db_url.arn
},
{
name = "API_KEY"
valueFrom = "${aws_secretsmanager_secret.api_keys.arn}:api_key::"
}
]
logConfiguration = {
logDriver = "awslogs"
options = {
"awslogs-group" = aws_cloudwatch_log_group.app.name
"awslogs-region" = var.aws_region
"awslogs-stream-prefix" = "app"
}
}
healthCheck = {
command = ["CMD-SHELL", "wget -qO- http://localhost:8080/health || exit 1"]
interval = 30
timeout = 5
retries = 3
startPeriod = 60
}
}])
tags = local.common_tags
}
resource "aws_ecs_service" "app" {
name = "${local.name_prefix}-app"
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.app.arn
desired_count = var.task_count
launch_type = "FARGATE"
network_configuration {
subnets = module.vpc.private_subnets
security_groups = [aws_security_group.app.id]
assign_public_ip = false
}
load_balancer {
target_group_arn = aws_lb_target_group.app.arn
container_name = "app"
container_port = 8080
}
deployment_circuit_breaker {
enable = true
rollback = true
}
tags = local.common_tags
}
deployment_circuit_breaker with rollback enabled — ECS will automatically roll back a deployment if health checks fail. Claude includes this by default for ECS services.
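The prompt also asked for auto-scaling on CPU, which the snippets above omit. A sketch of what that typically looks like with Application Auto Scaling; the names, capacity bounds, and 70% target here are illustrative assumptions, not the generated output:

```hcl
# Hypothetical sketch: scale the ECS service on average CPU utilization.
resource "aws_appautoscaling_target" "app" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.app.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = var.task_count
  max_capacity       = var.task_count * 3 # Illustrative upper bound
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "${local.name_prefix}-cpu"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.app.service_namespace
  resource_id        = aws_appautoscaling_target.app.resource_id
  scalable_dimension = aws_appautoscaling_target.app.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value = 70 # Add/remove tasks to hold average CPU near 70%
  }
}
```

Target tracking is usually preferable to step scaling here: one target value instead of hand-tuned alarm thresholds.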
RDS with Multi-AZ
Create an RDS PostgreSQL instance.
Multi-AZ for prod. Automated backups. Encryption at rest.
Parameter group for performance-tuned settings.
resource "aws_db_instance" "postgres" {
identifier = "${local.name_prefix}-db"
engine = "postgres"
engine_version = "16.2"
instance_class = var.db_instance_class
allocated_storage = var.db_storage_gb
max_allocated_storage = var.db_storage_gb * 3 # Auto-scaling to 3x
storage_type = "gp3"
storage_encrypted = true
kms_key_id = aws_kms_key.rds.arn
db_name = var.db_name
username = "postgres"
password = random_password.db.result
db_subnet_group_name = module.vpc.database_subnet_group
vpc_security_group_ids = [aws_security_group.rds.id]
multi_az = var.environment == "prod"
backup_retention_period = var.environment == "prod" ? 30 : 7
backup_window = "03:00-04:00"
maintenance_window = "Sun:04:00-Sun:05:00"
deletion_protection = var.environment == "prod"
skip_final_snapshot = var.environment != "prod"
final_snapshot_identifier = var.environment == "prod" ? "${local.name_prefix}-final" : null
parameter_group_name = aws_db_parameter_group.postgres16.name
tags = local.common_tags
}
resource "random_password" "db" {
length = 32
special = true
override_special = "!#$%&*()-_=+[]{}<>:?"
}
# Store the password in Secrets Manager
resource "aws_secretsmanager_secret_version" "db_credentials" {
secret_id = aws_secretsmanager_secret.db_credentials.id
secret_string = jsonencode({
username = aws_db_instance.postgres.username
password = random_password.db.result
host = aws_db_instance.postgres.address
port = aws_db_instance.postgres.port
database = aws_db_instance.postgres.db_name
})
}
random_password generates a secure password managed by Terraform and stored in Secrets Manager. Never hardcode database passwords.
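The RDS configuration references two resources not shown above: aws_db_parameter_group.postgres16 and the aws_secretsmanager_secret container for the credentials. A minimal sketch of both; the parameter value is illustrative, not a tuned recommendation:

```hcl
# Sketch of the referenced-but-not-shown resources.
resource "aws_db_parameter_group" "postgres16" {
  name   = "${local.name_prefix}-pg16"
  family = "postgres16"

  parameter {
    name  = "log_min_duration_statement"
    value = "500" # Illustrative: log queries slower than 500 ms
  }

  tags = local.common_tags
}

resource "aws_secretsmanager_secret" "db_credentials" {
  name = "${local.name_prefix}-db-credentials"
  tags = local.common_tags
}
```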
Module Structure
Structure a Terraform project for: networking (VPC), compute (ECS),
database (RDS), and monitoring (CloudWatch/alarms).
Show how environments (dev/staging/prod) use these modules.
infrastructure/
├── modules/
│ ├── networking/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ ├── compute/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ ├── database/
│ └── monitoring/
├── environments/
│ ├── dev/
│ │ ├── main.tf # Uses modules
│ │ ├── terraform.tfvars # Dev-specific values
│ │ └── backend.tf
│ ├── staging/
│ └── prod/
└── global/
├── iam/ # Cross-environment IAM
└── route53/ # DNS (shared)
# environments/prod/main.tf
module "networking" {
source = "../../modules/networking"
project = "acme"
environment = "prod"
vpc_cidr = "10.0.0.0/16"
}
module "database" {
source = "../../modules/database"
project = "acme"
environment = "prod"
vpc_id = module.networking.vpc_id
subnet_ids = module.networking.database_subnet_ids
db_instance_class = "db.r6g.xlarge"
db_storage_gb = 100
}
module "compute" {
source = "../../modules/compute"
project = "acme"
environment = "prod"
vpc_id = module.networking.vpc_id
subnet_ids = module.networking.private_subnet_ids
db_url = module.database.connection_url
task_count = 3
}
Environment directories (not Terraform workspaces) for prod/staging/dev — separate state files per environment, clearer separation of concerns.
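Each environment directory carries its own backend.tf pointing at a distinct state key, which is what gives each environment a separate state file. A sketch matching the S3-plus-DynamoDB-lock setup from the context file; bucket and table names are hypothetical:

```hcl
# environments/prod/backend.tf (hypothetical bucket/table names)
terraform {
  backend "s3" {
    bucket         = "acme-terraform-state"
    key            = "prod/terraform.tfstate" # dev/ and staging/ get their own keys
    region         = "us-east-1"
    dynamodb_table = "acme-terraform-locks"   # State locking
    encrypt        = true
  }
}
```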
Debugging Terraform Plans
My terraform plan shows this error:
Error: Error creating Security Group Rule:
InvalidPermission.Duplicate: the specified rule already exists
Here's the resource: [paste]
Claude identifies: the security group rule already exists outside Terraform state. Options: import the existing rule with terraform import, or delete and recreate if it’s safe. It generates the exact terraform import command with the correct resource address format.
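On Terraform 1.5+, the import can also be declared in configuration instead of run as a CLI command, which makes it reviewable and repeatable. A sketch; the resource address, security group ID, and rule attributes are placeholders:

```hcl
# Config-driven import (Terraform 1.5+); the ID encodes
# sg-id_type_protocol_from-port_to-port_source for a security group rule.
import {
  to = aws_security_group_rule.app_ingress
  id = "sg-0123456789abcdef0_ingress_tcp_8080_8080_10.0.0.0/16"
}
```

Running terraform plan then shows the import alongside any drift, before anything is applied.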
Circular Dependencies
Terraform says: "Cycle: aws_security_group.app → aws_security_group.rds → aws_security_group.app"
Claude identifies the circular reference in security group rules and resolves it by using aws_security_group_rule resources (separate from the security group itself) to break the cycle.
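The pattern looks roughly like this: declare both security groups with no cross-referencing inline rules, then attach the cross-reference as a standalone rule resource, which depends on both groups without making either group depend on the other. Names and the Postgres port are assumptions from the surrounding examples:

```hcl
# Sketch of the cycle-breaking pattern.
resource "aws_security_group" "app" {
  name_prefix = "${local.name_prefix}-app-"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_security_group" "rds" {
  name_prefix = "${local.name_prefix}-rds-"
  vpc_id      = module.vpc.vpc_id
}

# The cross-reference lives outside both groups, so no cycle forms.
resource "aws_security_group_rule" "rds_from_app" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.rds.id
  source_security_group_id = aws_security_group.app.id
}
```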
State Management
I need to move a resource to a different module without destroying/recreating it.
It's a production RDS instance — can't have downtime.
# Move state without destroying
terraform state mv \
module.old.aws_db_instance.postgres \
module.new.aws_db_instance.postgres
Claude explains: terraform state mv updates the state file to reflect the new address, so the next terraform plan sees no changes for that resource. It warns to run plan after the move to verify before applying.
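On Terraform 1.1+, a moved block in configuration achieves the same result without touching the state file by hand, and goes through plan, review, and apply like any other change:

```hcl
# Declarative alternative to `terraform state mv` (Terraform 1.1+).
moved {
  from = module.old.aws_db_instance.postgres
  to   = module.new.aws_db_instance.postgres
}
```

The next plan shows the resource moving rather than being destroyed and recreated; the block can be deleted once every state that used the old address has been migrated.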
CI/CD for Terraform
Set up GitHub Actions to plan on PR, apply on merge to main.
Use OIDC for AWS authentication (no long-lived credentials).
- name: Configure AWS credentials via OIDC
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::${{ vars.AWS_ACCOUNT_ID }}:role/TerraformGitHubRole
aws-region: us-east-1
- name: Terraform Plan
run: |
terraform init
terraform plan -out=tfplan -no-color 2>&1 | tee plan.txt
- name: Post Plan to PR
uses: actions/github-script@v7
with:
script: |
const plan = require('fs').readFileSync('plan.txt', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
body: `\`\`\`\n${plan.slice(-4000)}\n\`\`\`` // Last 4k chars
});
OIDC (not IAM access keys) for GitHub Actions — AWS trusts GitHub’s OIDC provider, no credentials stored in GitHub Secrets. Claude generates the OIDC provider Terraform resource and the trust policy.
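A sketch of that OIDC provider resource and trust policy, assuming the TerraformGitHubRole name from the workflow above; the repository in the sub condition and the thumbprint are placeholders you must replace with your own values:

```hcl
# Hypothetical sketch of GitHub Actions OIDC trust for Terraform CI.
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # Placeholder
}

resource "aws_iam_role" "terraform_github" {
  name = "TerraformGitHubRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          # Restrict to your org/repo; placeholder shown.
          "token.actions.githubusercontent.com:sub" = "repo:acme/infrastructure:*"
        }
      }
    }]
  })
}
```

The sub condition is the critical line: without it, any GitHub repository could assume the role.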
Infrastructure as Code with Claude Code
Terraform’s provider documentation is vast; each AWS resource has dozens of arguments. Claude Code’s advantage is knowing which arguments are required vs. optional, which have security implications (encryption, deletion protection), and which affect billing (multi-AZ, storage type). The result is solid Terraform from scratch in minutes rather than hours of documentation reading.
For containerizing the apps that run on this infrastructure, see the Docker guide and Kubernetes guide. For automating Terraform runs in CI/CD pipelines, see the CI/CD guide. The Claude Skills 360 bundle includes infrastructure skill sets for common AWS, GCP, and Azure patterns. Start with the free tier.