# Helix Platform - Build Architecture

## Overview

The Helix platform uses a hybrid package management approach with Docker BuildKit caching for optimal build performance and true microservice isolation.
## Architecture Principles

### 1. Root Package.json (Shared Dependencies)

**Purpose:** Dependencies used by ALL or MOST services

**Location:** `/package.json`

**Contains:**
```json
{
  "dependencies": {
    "@nestjs/common": "^10.0.0",
    "@nestjs/core": "^10.0.0",
    "prisma": "^5.0.0",
    "ioredis": "^5.0.0",
    "nestjs-cls": "^4.0.0"
  }
}
```
**Rebuild Impact:** Changes here rebuild ALL services (acceptable - they all use these)
### 2. Per-App Package.json (Service-Specific Dependencies)

**Purpose:** Dependencies used by ONLY ONE service

**Location:** `/apps/{service-name}/package.json`

**Example:** `apps/file-service/package.json`
```json
{
  "name": "file-service",
  "dependencies": {
    "stripe": "^12.0.0",
    "@aws-sdk/client-s3": "^3.0.0"
  }
}
```
**Rebuild Impact:** Changes here rebuild ONLY that service ✅
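The rebuild scope follows directly from which manifest declares a dependency. A quick way to see the split (the file names below are local stand-ins for `/package.json` and `apps/file-service/package.json`, not the real paths):

```bash
# Stand-in manifests for the root and one app
echo '{"dependencies":{"@nestjs/common":"^10.0.0","prisma":"^5.0.0"}}' > root-package.json
echo '{"name":"file-service","dependencies":{"stripe":"^12.0.0"}}' > app-package.json

# Which manifests declare a package? A root hit means every service rebuilds;
# an app hit means only that one service rebuilds.
grep -l '"stripe"' root-package.json app-package.json
```

Here `grep -l` prints only `app-package.json`, confirming that adding or bumping `stripe` invalidates a single service's build.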
## Docker BuildKit Caching

### Layer Caching Strategy
```dockerfile
# Layer 1: Root npm ci (cached unless root package.json changes)
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci

# Layer 2: App-specific npm install (cached unless app package.json changes)
COPY apps/file-service/package.json apps/file-service/
RUN --mount=type=cache,target=/root/.npm cd apps/file-service && npm install

# Layer 3: Copy source (invalidated on any code change)
COPY . .

# Layer 4: Nx build (runs only if Layer 3 changes)
RUN npx nx build file-service
```
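Put together with the multi-stage builds and Alpine base mentioned under bundle sizes, a complete per-service Dockerfile might look like the sketch below. This is a minimal sketch, not the exact production file: the stage name, Node version, and the `dist/apps/file-service` output path are assumptions about the Nx build output.

```dockerfile
# --- Build stage ---
FROM node:20-alpine AS builder
WORKDIR /app

# Layer 1: root dependencies (cached unless root package.json changes)
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci

# Layer 2: service-specific dependencies
COPY apps/file-service/package.json apps/file-service/
RUN --mount=type=cache,target=/root/.npm cd apps/file-service && npm install

# Layers 3-4: source + build
COPY . .
RUN npx nx build file-service

# --- Runtime stage: only the built bundle and the app's node_modules ---
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist/apps/file-service ./
COPY --from=builder /app/apps/file-service/node_modules ./node_modules
CMD ["node", "main.js"]
```

The runtime stage never sees the source tree or the root `node_modules`, which is what keeps the final images in the ~150-220 MB range listed below.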
### BuildKit Benefits
| Feature | Benefit |
|---|---|
| npm cache mount | Avoids re-downloading packages (saves 2-3 min per build) |
| Layer caching | Skips unchanged layers |
| Parallel builds | All services build simultaneously |
| Smart invalidation | Only affected layers rebuild |
## Build Scenarios

### Scenario 1: Change Code in One Service

```bash
# Edit apps/auth-service/src/main.ts
docker-compose up auth-service --build
```
**What happens:**

- ✅ auth-service: Layers 3-4 rebuild (~30 seconds)
- ❌ Other services: Use cached images (0 seconds)

**Total time:** ~30 seconds
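If even this loop feels slow, newer Compose releases (2.22+) can watch the source tree and trigger the rebuild automatically. A sketch (the watched path is illustrative):

```yaml
services:
  auth-service:
    build:
      context: .
      dockerfile: apps/auth-service/Dockerfile
    develop:
      watch:
        - path: apps/auth-service/src
          action: rebuild
```

Run it with `docker compose watch` instead of re-running `docker-compose up auth-service --build` by hand.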
### Scenario 2: Add Service-Specific Package

```bash
# Create apps/file-service/package.json
echo '{"name":"file-service","dependencies":{"stripe":"^12.0.0"}}' > apps/file-service/package.json
docker-compose up file-service --build
```
**What happens:**

- ✅ file-service: Layers 2-4 rebuild (~45 seconds)
- ❌ Other services: Use cached images (0 seconds)

**Total time:** ~45 seconds
### Scenario 3: Add Global Package (Root)

```bash
npm install @workos-inc/node
docker-compose up --build
```
**What happens:**

- ✅ ALL services: Layers 1-4 rebuild (~3 minutes in parallel)

**Total time:** ~3 minutes (acceptable - affects everyone)
### Scenario 4: Change Shared Library

```bash
# Edit libs/shared/database/src/database.service.ts
docker-compose up --build
```
**What happens:**

- ✅ Services using the database lib: Layers 3-4 rebuild
- ❌ API Gateway (doesn't use the database): Uses cache

**Total time:** ~1-2 minutes (only affected services)
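The rebuild scope in each scenario can be previewed before building anything. Nx does this properly (`npx nx affected:apps`, as used in CI below), but the underlying idea is just "which `apps/` directories have changes?". A self-contained sketch using a throwaway git repo (in the real workspace you would run only the final `git diff` line):

```bash
# Throwaway workspace to demonstrate the idea
demo=$(mktemp -d)
git init -q "$demo"
mkdir -p "$demo/apps/auth-service/src" "$demo/apps/file-service/src"
echo 'bootstrap' > "$demo/apps/auth-service/src/main.ts"
echo 'bootstrap' > "$demo/apps/file-service/src/main.ts"
git -C "$demo" add -A
git -C "$demo" -c user.email=ci@example.com -c user.name=ci commit -qm init

# Simulate Scenario 1: edit one service
echo 'fix' >> "$demo/apps/auth-service/src/main.ts"

# Apps with changes since HEAD - these are the services to rebuild
affected=$(git -C "$demo" diff --name-only | cut -d/ -f2 | sort -u)
echo "$affected"
```

Only `auth-service` shows up, matching Scenario 1. (Unlike this sketch, Nx also follows the dependency graph, so a `libs/shared` change correctly pulls in every service that imports it, as in Scenario 4.)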
## Local Development Workflow

### First Time Setup

```bash
git clone <repo>
docker-compose up --build
```

That's it! Docker handles all npm installs.
### Daily Development

**Option A: Run a specific service**

```bash
docker-compose up auth-service
```

**Option B: Run all services**

```bash
docker-compose up
```

**Option C: Rebuild a specific service**

```bash
docker-compose up auth-service --build
```

**Option D: Rebuild all (after pulling changes)**

```bash
docker-compose up --build
```
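The four options differ only in the service name and the `--build` flag, so they can be folded into a small helper for your shell profile. This is a hypothetical convenience function, not part of the repo; the `echo` makes the sketch print the command instead of invoking Docker - drop it for real use.

```bash
# Hypothetical helper: `dev svc` runs a service, `dev -b svc` rebuilds it first.
dev() {
  if [ "$1" = "-b" ]; then
    shift
    echo docker-compose up "$@" --build
  else
    echo docker-compose up "$@"
  fi
}

dev -b auth-service
```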
## Production CI/CD

### GitHub Actions Pipeline

```yaml
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      affected: ${{ steps.nx.outputs.affected }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Nx affected needs history to diff against the base
      - id: nx
        # --plain output is space-separated; convert to a JSON array for fromJson
        run: echo "affected=$(npx nx affected:apps --plain | tr -s ' ' '\n' | jq -R . | jq -sc .)" >> $GITHUB_OUTPUT

  build:
    needs: detect-changes
    strategy:
      matrix:
        service: ${{ fromJson(needs.detect-changes.outputs.affected) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Enable BuildKit
        run: echo "DOCKER_BUILDKIT=1" >> $GITHUB_ENV
      - name: Build
        run: |
          docker build \
            -f apps/${{ matrix.service }}/Dockerfile \
            -t ${{ matrix.service }}:${{ github.sha }} \
            --cache-from ${{ matrix.service }}:latest \
            .
      - name: Push to ECR
        run: docker push ...
```
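One subtlety in this pipeline: `fromJson` expects a JSON array, while `nx affected:apps --plain` prints a space-separated list, so the detect-changes output must be converted before it can drive the matrix. The conversion needs no special tooling; a self-contained sketch on a hard-coded sample list:

```bash
affected="auth-service file-service"

# Space-separated list -> JSON array for the GitHub Actions matrix
json="[$(printf '"%s",' $affected | sed 's/,$//')]"
echo "$json"
```

This prints `["auth-service","file-service"]`, which `fromJson` can consume directly.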
**Benefits:**
- Only changed services build
- Parallel builds across GitHub runners
- BuildKit cache from previous builds
- Cost-efficient (pay only for what changed)
## Performance Metrics

### Build Times (Local - M1 Mac)
| Scenario | Without Cache | With Cache |
|---|---|---|
| First build (all services) | ~8 minutes | ~8 minutes |
| Code change (1 service) | ~45 seconds | ~30 seconds |
| Add app package (1 service) | ~60 seconds | ~45 seconds |
| Add global package | ~3 minutes | ~3 minutes |
| Shared lib change | ~2 minutes | ~1 minute |
### Bundle Sizes (Production Images)
| Service | Image Size | Notes |
|---|---|---|
| API Gateway | ~150 MB | No database dependencies |
| Auth Service | ~180 MB | WorkOS + Prisma |
| File Service | ~220 MB | S3 SDK + Stripe + Prisma |
| KIRA Service | ~200 MB | OpenAI SDK + Prisma |
All sizes optimized via:
- Tree-shaking (Webpack)
- Production-only dependencies
- Multi-stage builds
- Alpine Linux base image
## Enabling BuildKit

BuildKit is enabled by default in Docker 23.0+. For older versions, enable it per shell:

```bash
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1  # needed for docker-compose v1 to delegate builds to the Docker CLI
docker-compose up --build
```
Or enable it persistently in the Docker daemon config (`/etc/docker/daemon.json`):

```json
{
  "features": {
    "buildkit": true
  }
}
```

Note that putting `DOCKER_BUILDKIT: 1` under a service's `environment:` in docker-compose.yml does not work - that sets a variable inside the running container, not for the image build.
## Troubleshooting

### "buildkit not supported"

**Solution:** Update Docker to 23.0+

```bash
docker --version  # Should be 23.0+
```
### "Cannot find module 'package-name'"

**Cause:** The package is in the app's package.json but not installed in the image

**Solution:** Rebuild the service

```bash
docker-compose up service-name --build
```
### Builds are slow

**Check:** Is BuildKit enabled?

```bash
docker info | grep BuildKit
```

**Enable:**

```bash
export DOCKER_BUILDKIT=1
```
### Service won't start

**Check logs:**

```bash
docker-compose logs service-name
```

**Common issues:**

- Database not ready (wait for the healthcheck)
- Port conflict (check ports with `docker ps`)
- Environment variable missing
## Best Practices

### ✅ DO
- Put shared dependencies in root package.json
- Put service-specific dependencies in app package.json
- Use BuildKit (it's faster)
- Build only affected services locally
- Use docker-compose for local dev
- Use Nx affected detection in CI
### ❌ DON'T
- Don't put every package in root
- Don't disable BuildKit caching
- Don't rebuild all services for code changes
- Don't use docker-compose in production
- Don't expose backend service ports in production
## Summary
The architecture achieves:
- ✅ Fast local development (only rebuild what changed)
- ✅ True microservice isolation (app-specific packages)
- ✅ Shared foundation (common packages in root)
- ✅ Optimal caching (BuildKit + Docker layers)
- ✅ Cost-efficient CI/CD (Nx affected detection)
- ✅ Small production images (tree-shaking + multi-stage)
Trade-offs:
- Global package changes rebuild everything (acceptable)
- Need Docker 23.0+ for BuildKit
- Slightly more complex than monolithic approach
The benefits far outweigh the trade-offs for a microservices architecture.