AWS Transit Gateway
Cloud router connecting VPCs and on-premises networks through a central hub
Imagine you live in a city with 50 neighborhoods, and each one has its own little road system. If you want to visit a friend across town, you could build a direct road from your neighborhood to theirs. But now think about it: if every neighborhood needs to connect to every other neighborhood, you'd need 50 × 49 ÷ 2 = 1,225 roads. That's unmanageable. So instead, the city builds one massive hub, a train station right in the center. Every neighborhood builds just one connection to that hub. Now, to get anywhere, you just go to the hub and transfer. You've gone from 1,225 connections to 50. That's Transit Gateway.

In AWS, each VPC is like a neighborhood. Without Transit Gateway, you'd need VPC peering connections between every pair; a full mesh that gets unwieldy fast. Transit Gateway is your central hub that connects VPCs, on-premises networks via Direct Connect or VPN, and even other Transit Gateways in different regions.
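The arithmetic behind the analogy can be sketched in a few lines of Python (illustrative only; the function names are invented for this example):

```python
def full_mesh_links(n: int) -> int:
    """Peering connections needed for a full mesh of n networks: n*(n-1)/2."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Connections needed when every network attaches once to a central hub."""
    return n

assert full_mesh_links(50) == 1225  # one road between every pair of neighborhoods
assert hub_links(50) == 50          # one attachment per VPC to the Transit Gateway
```

The gap widens quadratically: at 100 VPCs a full mesh needs 4,950 peering connections, while the hub still needs 100.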
Transit Gateway (TGW) is a regional network transit hub that operates at Layer 3. Each TGW has one or more route tables. You attach VPCs, VPNs, and Direct Connect Gateways to the TGW, then associate them with specific route tables. Route tables control which attachments can talk to which other attachments. Attachment types: VPC attachments, VPN attachments, Direct Connect Gateway attachments, TGW peering attachments (for inter-region), and AWS Transit Gateway Connect (for SD-WAN appliances).
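The association/propagation mechanics can be approximated with a small in-memory model (a simplified sketch, not the real AWS API; the class and method names here are invented for illustration):

```python
class TransitGatewayModel:
    """Toy model: each attachment is associated with exactly one route table,
    and a route table learns routes only from attachments that propagate into it."""

    def __init__(self):
        self.association = {}   # attachment -> route table it is associated with
        self.propagations = {}  # route table -> set of attachments whose routes it learned

    def associate(self, attachment, route_table):
        self.association[attachment] = route_table

    def propagate(self, attachment, route_table):
        self.propagations.setdefault(route_table, set()).add(attachment)

    def can_reach(self, src, dst):
        # src's associated route table must contain a route propagated by dst
        table = self.association.get(src)
        return dst in self.propagations.get(table, set())

tgw = TransitGatewayModel()
tgw.associate("vpc-a", "rt-main")
tgw.associate("vpc-b", "rt-main")
tgw.propagate("vpc-a", "rt-main")
tgw.propagate("vpc-b", "rt-main")
print(tgw.can_reach("vpc-a", "vpc-b"))  # True: both share rt-main
```

The key design point this captures: reachability is a property of the route table an attachment is associated with, not of the attachment itself, which is what makes segmentation possible.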
Gotchas & Constraints
Gotcha #1: By default, attachments associated with the same route table can communicate. If you attach 10 VPCs to one route table without thinking, you've just created a flat network: production and dev can now see each other. Always use separate route tables for isolation (e.g., one for prod, one for dev, one for shared services).

Gotcha #2: Transit Gateway does not perform stateful inspection. It's a router, not a firewall. If you need deep packet inspection or IDS/IPS, route traffic through a centralized inspection VPC with firewall appliances (such as AWS Network Firewall or third-party solutions).

Common misconfiguration: Forgetting that TGW attachments require subnet-level route table entries in your VPC pointing back to the TGW. Without proper routes in both the VPC route table and the TGW route table, traffic blackholes.
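The blackhole misconfiguration can be made concrete with a two-hop check (a simplified sketch; real routing matches CIDR prefixes longest-first, and the helper names and IDs here are invented):

```python
def traffic_flows(vpc_route_table, tgw_route_table, dest_cidr, tgw_id, dest_attachment):
    """Traffic succeeds only when BOTH hops are routed:
    1. the VPC subnet route table sends dest_cidr to the TGW, and
    2. the TGW route table sends dest_cidr to the destination attachment."""
    vpc_ok = vpc_route_table.get(dest_cidr) == tgw_id
    tgw_ok = tgw_route_table.get(dest_cidr) == dest_attachment
    return vpc_ok and tgw_ok

tgw_rt = {"10.1.0.0/16": "tgw-attach-prod"}

# Missing VPC-side route: packets never even leave the VPC toward the TGW.
print(traffic_flows({}, tgw_rt, "10.1.0.0/16", "tgw-123", "tgw-attach-prod"))  # False

# Both route tables populated: traffic flows.
vpc_rt = {"10.1.0.0/16": "tgw-123"}
print(traffic_flows(vpc_rt, tgw_rt, "10.1.0.0/16", "tgw-123", "tgw-attach-prod"))  # True
```

Either half missing produces the same symptom (silent packet loss), which is why blackholes are tedious to debug: the attachment looks healthy while one of the two route tables is empty.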
A financial services company has 30 VPCs across us-east-1 and eu-west-1, plus an on-premises data center in New York. They need: production VPCs isolated from development, all environments able to reach shared services (Active Directory, DNS), on-premises connectivity to production only (not dev), and multi-region disaster recovery.

Solution:
1. Deploy one Transit Gateway in each region (us-east-1 and eu-west-1).
2. Create three route tables per TGW: Production, Development, Shared Services.
3. Attach production VPCs to the Production route table, dev VPCs to Development, and the shared services VPC to Shared Services.
4. Configure route propagation: the Production and Development route tables both propagate routes to Shared Services, but not to each other.
5. Attach the Direct Connect Gateway to the us-east-1 TGW and associate it only with the Production route table. On-prem can now reach prod and shared services, but not dev.
6. Enable TGW peering between the us-east-1 and eu-west-1 TGWs, and add the peering attachment to both Production route tables for DR replication traffic.
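The isolation properties of this design can be sanity-checked with a toy routing table (hypothetical attachment and table names; in AWS the same rules are enforced by TGW route table associations and propagations):

```python
# route table -> attachments whose routes propagate into it
propagations = {
    "rt-prod":   {"vpc-shared", "dx-onprem", "peer-eu"},    # prod sees shared, on-prem, DR peer
    "rt-dev":    {"vpc-shared"},                            # dev sees shared services only
    "rt-shared": {"vpc-prod-1", "vpc-dev-1", "dx-onprem"},  # shared can answer everyone
}
# attachment -> route table it is associated with
associations = {
    "vpc-prod-1": "rt-prod",
    "vpc-dev-1":  "rt-dev",
    "vpc-shared": "rt-shared",
    "dx-onprem":  "rt-prod",   # Direct Connect associated with Production only
}

def can_reach(src, dst):
    """An attachment reaches only destinations propagated into its route table."""
    return dst in propagations.get(associations[src], set())

assert can_reach("vpc-prod-1", "vpc-shared")      # prod -> shared services
assert can_reach("vpc-dev-1", "vpc-shared")       # dev -> shared services
assert not can_reach("vpc-prod-1", "vpc-dev-1")   # prod and dev stay isolated
assert can_reach("dx-onprem", "vpc-shared")       # on-prem -> shared services
assert not can_reach("dx-onprem", "vpc-dev-1")    # on-prem cannot reach dev
```

Each requirement maps to one propagation entry, which makes the security posture auditable: to answer "can X reach Y?", inspect X's associated route table.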