In a robust development workflow, the ability to reliably preview changes before they're merged into production is critical. One powerful approach to achieving this is creating a unique, isolated environment for each Pull Request (PR). This allows your team to thoroughly test, review, and validate new features or bug fixes without affecting your production environment.
In this guide, we'll explore a structured approach to setting up automated preview URLs for your PRs using GitHub Actions and AWS. We'll cover essential considerations, architectural decisions, and practical implementation details.
Why Preview Environments Matter
Preview environments solve several critical challenges faced by development teams:
- Isolation: Changes can be reviewed without impacting other developers or production.
- Early feedback: QA, product teams, or stakeholders can quickly provide targeted feedback.
- Reduced risk: Catch integration and environment-specific issues before merging into production.
- Improved confidence: Enhances overall code quality and reliability.
Now, let's dig into how you can set this up in a real-world scenario using GitHub and AWS.
Understanding the Architecture
Here’s the typical production environment we're referencing:
- Frontend: React application deployed via AWS S3 and CloudFront.
- Backend: NestJS application deployed using AWS ECS (Fargate) behind an Application Load Balancer (ALB).
- Database: PostgreSQL hosted on AWS RDS.
- Domain: Applications running at `app.example.com` (frontend) and `api.example.com` (backend).
Our goal is clear: every PR should have its own isolated environment with:
- Distinct URLs like `pr756.preview.app.example.com` and `pr756.preview.api.example.com`
- Fully automated provisioning and teardown upon PR closure or merge
- Directly accessible links posted automatically in the PR comments
Implementing Preview Environments
To automate this process effectively, GitHub Actions is a natural fit, providing the following capabilities:
- Automatic triggers on PR lifecycle events.
- Docker image builds for frontend and backend components.
- Automated deployments to AWS services (ECS for backend, S3/CloudFront for frontend).
- Dynamic routing and URL management.
- Resource cleanup post-PR closure.
Before diving into code examples, let's first discuss some key architectural decisions.
Architectural Considerations
DNS & Subdomain Structure
Instead of relying on a complex DNS setup with Route 53, you can leverage existing AWS infrastructure efficiently by combining your Application Load Balancer (ALB) and CloudFront with Lambda@Edge:
Backend Routing with ALB
Your existing ALB can dynamically route backend requests to the appropriate ECS tasks using host-based routing rules. For example:
- Configure ALB listener rules that match host headers like `pr756.preview.api.example.com` and route those requests directly to the corresponding ECS service or task. You can automate this with the AWS SDK or AWS CLI during the preview deployment step, as sketched below.
- Ensure your ALB is configured with a wildcard SSL certificate (`*.preview.api.example.com`) through AWS Certificate Manager (ACM), simplifying secure subdomain handling.
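Here's a minimal sketch of that rule-creation step using the AWS SDK for JavaScript (v3). The listener ARN, target group ARN, environment variables, and priority scheme are all assumptions; your deploy script would resolve the real values:

```typescript
import {
  ElasticLoadBalancingV2Client,
  CreateRuleCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

// Hypothetical inputs; a real deploy script would resolve these values.
const LISTENER_ARN = process.env.ALB_LISTENER_ARN!;     // HTTPS listener on the existing ALB
const TARGET_GROUP_ARN = process.env.TARGET_GROUP_ARN!; // target group for this PR's ECS service
const prId = process.env.PR_ID!;                        // e.g. "pr756"

const elb = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

async function addPreviewRule(): Promise<void> {
  await elb.send(
    new CreateRuleCommand({
      ListenerArn: LISTENER_ARN,
      // Derive a unique rule priority from the PR number; real code should
      // guard against collisions, since priorities must be unique per listener.
      Priority: 1000 + Number(prId.replace("pr", "")),
      Conditions: [
        {
          Field: "host-header",
          HostHeaderConfig: { Values: [`${prId}.preview.api.example.com`] },
        },
      ],
      Actions: [{ Type: "forward", TargetGroupArn: TARGET_GROUP_ARN }],
    })
  );
}

addPreviewRule().catch((err) => {
  console.error("Failed to create ALB listener rule", err);
  process.exit(1);
});
```

The cleanup job would run the mirror-image `DeleteRuleCommand` when the PR closes.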
Frontend Routing with CloudFront & Lambda@Edge
For frontend applications, leverage CloudFront combined with Lambda@Edge functions:
- Deploy each preview build of your frontend application to a unique path within an S3 bucket (e.g., `s3://frontend-preview-bucket/pr756/`).
- Set up a CloudFront distribution with Lambda@Edge to rewrite requests dynamically based on the incoming hostname (like `pr756.preview.app.example.com`) to the appropriate S3 path (`/pr756/`), as in the sketch below.
- Similar to the backend setup, provision a wildcard SSL certificate (`*.preview.app.example.com`) via ACM for secure handling.
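A minimal origin-request handler for that rewrite might look like the following. It assumes hostnames always follow the `prNNN.preview.app.example.com` pattern and that the distribution's origin is the preview bucket; treat it as a sketch rather than production-ready code:

```typescript
import type { CloudFrontRequestEvent, CloudFrontRequest } from "aws-lambda";

// Origin-request handler: maps the PR subdomain onto an S3 key prefix so
// pr756.preview.app.example.com is served from s3://frontend-preview-bucket/pr756/.
export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequest> => {
  const request = event.Records[0].cf.request;
  const host = request.headers.host[0].value; // e.g. "pr756.preview.app.example.com"

  const match = host.match(/^(pr\d+)\.preview\.app\.example\.com$/);
  if (match) {
    // Prefix the URI with the PR path; serve index.html for the root request.
    request.uri = `/${match[1]}${request.uri === "/" ? "/index.html" : request.uri}`;
  }
  return request;
};
```

Note that Lambda@Edge functions must be created in us-east-1 and are replicated to edge locations by CloudFront; a single-page app would also need its 404s routed back to `index.html`, which is omitted here.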
This approach simplifies DNS management significantly, avoiding the overhead and complexity of creating numerous individual DNS records for each preview environment.
Database Strategy
Managing databases for previews presents two primary strategies:
Option 1: Shared Database
A single RDS database shared among all preview environments, with isolation achieved through schemas or table prefixes (see the sketch after this list).
Pros:
- Simple and cost-effective
- Fast environment provisioning
Cons:
- Risk of conflicts between PRs
- Potential for data contamination
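One way to realize schema-level isolation, sketched here with the node-postgres client; the connection string and environment variables are hypothetical:

```typescript
import { Client } from "pg";

// Connection string for the shared preview RDS instance (hypothetical env var).
const client = new Client({ connectionString: process.env.PREVIEW_DATABASE_URL });

async function createPrSchema(prId: string): Promise<void> {
  await client.connect();
  // One schema per PR keeps that PR's tables isolated inside the shared database.
  await client.query(`CREATE SCHEMA IF NOT EXISTS "${prId}"`);
  await client.end();
}

// The backend for this PR would then connect with search_path pointed at
// its schema, e.g. ?options=-c%20search_path%3Dpr756 on the connection URL.
createPrSchema(process.env.PR_ID ?? "pr756").catch((err) => {
  console.error("Failed to create PR schema", err);
  process.exit(1);
});
```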
Option 2: Dedicated Database per PR
Deploy a dedicated PostgreSQL instance per PR using Docker containers on ECS, optionally with persistent volume mounts and data seeding.
Pros:
- Strong isolation
- Ideal for complex integration tests
Cons:
- Increased resource consumption
- Higher operational complexity
Recommended Approach
For most professional teams, particularly those working on sensitive features or requiring thorough integration testing, a dedicated ECS-hosted Docker database per PR is recommended. For smaller teams or simpler setups, a shared database with strict conventions could suffice.
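As a sketch of what per-PR provisioning could look like with the AWS SDK v3, the following registers a Fargate task definition running a stock `postgres` container and starts it as a single-task service. The cluster name, role ARN, networking values, and environment variables are all assumptions:

```typescript
import {
  ECSClient,
  RegisterTaskDefinitionCommand,
  CreateServiceCommand,
} from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "us-east-1" });
const prId = process.env.PR_ID!; // e.g. "pr756"

async function createPreviewDatabase(): Promise<void> {
  // Register a Fargate task definition running a plain Postgres container.
  const taskDef = await ecs.send(
    new RegisterTaskDefinitionCommand({
      family: `preview-db-${prId}`,
      requiresCompatibilities: ["FARGATE"],
      networkMode: "awsvpc",
      cpu: "256",
      memory: "512",
      executionRoleArn: process.env.ECS_EXECUTION_ROLE_ARN, // assumed to exist
      containerDefinitions: [
        {
          name: "postgres",
          image: "postgres:16",
          essential: true,
          environment: [
            // Throwaway credentials for a short-lived preview database.
            { name: "POSTGRES_PASSWORD", value: process.env.PREVIEW_DB_PASSWORD },
          ],
          portMappings: [{ containerPort: 5432, protocol: "tcp" }],
        },
      ],
    })
  );

  // Run it as a one-task service so ECS restarts the container if it dies.
  await ecs.send(
    new CreateServiceCommand({
      cluster: "preview-cluster", // hypothetical cluster name
      serviceName: `preview-db-${prId}`,
      taskDefinition: taskDef.taskDefinition!.taskDefinitionArn,
      desiredCount: 1,
      launchType: "FARGATE",
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: (process.env.PREVIEW_SUBNET_IDS ?? "").split(","),
          securityGroups: [process.env.PREVIEW_DB_SG_ID ?? ""],
        },
      },
    })
  );
}

createPreviewDatabase().catch((err) => {
  console.error("Failed to provision preview database", err);
  process.exit(1);
});
```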
Data Seeding Strategy
Effective data seeding is crucial to making preview environments useful and reliable. Consider the following guidelines:
- Minimal Viable Data: Seed just enough data to test functionality thoroughly, avoiding large datasets that prolong setup time.
- Realistic Scenarios: Include representative user accounts, permissions, and sample data mimicking production scenarios.
- Repeatable Scripts: Automate data seeding through migration scripts or dedicated seeding tools (e.g., Prisma).
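If you use Prisma, the seeding step can be a small script like the sketch below. The `user` model and its fields are hypothetical; adapt them to your actual schema:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function seed(): Promise<void> {
  // Hypothetical User model with a unique `email` field; adjust to your schema.
  await prisma.user.upsert({
    where: { email: "reviewer@example.com" },
    update: {},
    create: {
      email: "reviewer@example.com",
      name: "Preview Reviewer",
      role: "ADMIN",
    },
  });
}

// Idempotent by design: re-running the seed on a redeploy is safe.
seed()
  .catch((err) => {
    console.error(err);
    process.exit(1);
  })
  .finally(() => prisma.$disconnect());
```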
Automating the Setup
With these decisions made, here's how the automation flow looks using GitHub Actions:
- On PR open/update:
  - Build frontend and backend Docker images.
  - Deploy frontend assets to an S3 bucket served through CloudFront and Lambda@Edge.
  - Deploy backend Docker containers to ECS Fargate, configured with dynamic host-based routing via the ALB.
  - Run database migrations and seed essential data.
  - Post the preview URLs as comments in the GitHub PR.
- On PR close/merge:
  - Clean up ECS tasks/services, CloudFront configurations, S3 objects, and any associated Docker resources.
Enhancing Security and Cost Efficiency
To maintain secure and cost-efficient operations:
- Secure preview URLs using Basic Auth or temporary tokens (a sketch follows this list).
- Manage secrets securely via GitHub Secrets or AWS Parameter Store.
- Use Fargate Spot instances and short-lived CloudFront distributions to optimize costs.
- Automate cleanup thoroughly to prevent resource wastage.
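For the Basic Auth option, another Lambda@Edge viewer-request handler can gate the preview distribution. This sketch uses hard-coded placeholder credentials; Lambda@Edge has no environment variables, so a real deployment would inject the secret at build time or fetch it from a secrets store:

```typescript
import type { CloudFrontRequestEvent, CloudFrontRequestResult } from "aws-lambda";

// Placeholder credentials; replace with a secret injected at deploy time.
const EXPECTED = "Basic " + Buffer.from("preview:s3cret").toString("base64");

export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequestResult> => {
  const request = event.Records[0].cf.request;
  const auth = request.headers.authorization?.[0]?.value;

  if (auth !== EXPECTED) {
    // Challenge the browser for credentials instead of serving the preview.
    return {
      status: "401",
      statusDescription: "Unauthorized",
      headers: {
        "www-authenticate": [
          { key: "WWW-Authenticate", value: 'Basic realm="Preview"' },
        ],
      },
    };
  }
  return request;
};
```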
GitHub Actions Workflow (Example)
Here's a simplified example of a GitHub Actions workflow:
```yaml
on:
  pull_request:
    types: [opened, synchronize, closed]

jobs:
  deploy_preview:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "PR_ID=pr${{ github.event.pull_request.number }}" >> $GITHUB_ENV
      - run: |
          docker build -t frontend:preview ./apps/web-app
          docker build -t backend:preview ./apps/backend
      - run: ./scripts/deploy-preview.sh $PR_ID
      - uses: marocchino/sticky-pull-request-comment@v2
        with:
          message: |
            Preview environment ready:
            - Frontend: https://${{ env.PR_ID }}.preview.app.example.com
            - Backend: https://${{ env.PR_ID }}.preview.api.example.com

  cleanup_preview:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "PR_ID=pr${{ github.event.pull_request.number }}" >> $GITHUB_ENV
      - run: ./scripts/cleanup-preview.sh $PR_ID
```

Note that the comment body uses `${{ env.PR_ID }}` rather than `$PR_ID`: `with:` inputs are not shell-expanded, so the expression syntax is required there.

Final Thoughts
Setting up automated preview environments isn't just about enhancing your CI/CD process: it's a fundamental shift toward higher quality and safer deployments. While it requires upfront investment in infrastructure and automation, the payoff is significant: improved code quality, streamlined reviews, and increased confidence across your team.
Taking the time to implement this workflow properly will transform your development process, setting a solid foundation for continued growth and stability.
Start small, iterate, and scale the solution as your team and needs evolve.