Docker for TypeScript Devs: The Only Guide That Doesn't Waste Your Time
Multi-stage builds, dev containers, production optimization — Docker patterns specifically for TypeScript/Node.js projects.
Most Docker tutorials are written for Python or Go developers and then hand-wave at Node.js with a `COPY . .` and `npm start`. That’s how you end up with 1.2 GB images that take 4 minutes to build and ship your entire development dependency tree to production. If you’re a TypeScript developer who’s been burned by bloated images, slow builds, or mysterious container crashes — this page is for you.
> “Containers don’t contain. They package. The value isn’t isolation — it’s reproducibility.”
>
> — Kelsey Hightower
You might wonder why you need Docker when you can just run `node index.js`. Three reasons, and they’re more practical than most articles suggest:
| Problem Without Docker | How Docker Solves It |
| --- | --- |
| “Works on my machine” — Node version mismatches, OS-level library differences | The container is the contract. Same environment everywhere. |
| Dev/prod drift — it works locally but breaks on Alpine because `libvips` is missing | Your local dev runs the same base image as production. |
| Platform lock-in — tied to one hosting provider’s deploy mechanism | If it runs containers, your app runs there: ECS, Kubernetes, Cloud Run, Railway. |
Docker gives you a portable, reproducible unit of deployment. For TypeScript specifically, it also solves the “do I ship source or compiled JS?” question cleanly — you compile inside the build, and only the output reaches production.
Think of a multi-stage Dockerfile like an assembly line. Each stage has one job, and only the final output moves to the next stage. Everything else — tools, temporary files, dev dependencies — gets left behind.
| Stage | Job | What’s in It | What Gets Carried Forward |
| --- | --- | --- | --- |
| 1. `deps` | Install all dependencies | `node_modules` (dev + prod) | The full `node_modules` folder |
| 2. `builder` | Compile TypeScript, prune dev deps | Source code, compiled JS, pruned modules | Only `dist/` and production `node_modules` |
| 3. `runner` | Run the production app | Minimal Alpine image + compiled JS + prod deps | Nothing — this is the final image |
Why bother? Because your final image contains zero TypeScript source, zero build tools, zero test libraries, and zero dev dependencies. That’s how you go from a 1.2 GB image to ~120 MB.
The order of COPY instructions matters enormously for cache efficiency. Always copy files that change least frequently first: lockfile → package.json → source code. A single misordering can turn a 10-second cached build into a 3-minute full rebuild.
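Putting the three stages and the cache-friendly `COPY` order together, a minimal sketch might look like this. It assumes npm with a `package-lock.json`, a `build` script that emits `dist/`, and an app listening on port 3000 with a `/health` endpoint — adjust for pnpm/yarn or your own layout:

```dockerfile
# Stage 1: install all dependencies — cached until the lockfile changes
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: compile TypeScript, then strip dev dependencies
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build && npm prune --omit=dev

# Stage 3: minimal runtime image — only dist/ and production deps
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Non-root user limits the blast radius of a compromise
RUN addgroup -S app && adduser -S appuser -G app
COPY --from=builder --chown=appuser:app /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:app /app/dist ./dist
USER appuser
# Assumes a /health endpoint on port 3000; busybox wget ships with Alpine
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/health || exit 1
# Exec form so SIGTERM reaches the Node process directly
CMD ["node", "dist/index.js"]
```

Note how the lockfile and `package.json` are copied before the source: editing application code invalidates only the `builder` stage, while the expensive `npm ci` layer stays cached.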
Image size affects pull speed, deploy speed, and attack surface. Here’s what each optimization step buys you:
| Approach | Typical Size | Notes |
| --- | --- | --- |
| `node:20` + copy everything | ~1.2 GB | Ships dev deps, source, `.git` — never do this |
| `node:20-alpine` + copy everything | ~450 MB | Smaller base, but still wasteful |
| Multi-stage + Alpine + prod deps only | ~120 MB | The sweet spot for most services |
| Multi-stage + distroless base | ~80 MB | Best security posture, harder to debug |
That’s a 10x reduction from naive to optimized. In practical terms, it’s the difference between deploys taking minutes versus seconds.
Never mount `node_modules` from your host machine into the container. Native dependencies like `esbuild`, `sharp`, or `bcrypt` are compiled for your host OS and architecture, so binaries built on macOS or Windows will fail to load or crash inside the Linux container. Use an anonymous volume (`/app/node_modules`) to keep container dependencies isolated.
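In a Compose-based dev setup, that means a bind mount for source plus an anonymous volume shadowing `node_modules`. A sketch — the service name, port, and `npm run dev` command are illustrative:

```yaml
# docker-compose.yml (development) — hypothetical service layout
services:
  api:
    build: .
    command: npm run dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app              # bind mount: live-reload source from the host
      - /app/node_modules   # anonymous volume: container's own native deps win
```

Because the anonymous volume is declared at a deeper path than the bind mount, the container keeps the Linux-compiled `node_modules` installed at image build time, while everything else syncs from the host.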
These are the issues I see again and again. Check your setup against this list:
- **Missing `.dockerignore`** — Without it, Docker copies `node_modules`, `.git`, test files, and that 500 MB data dump you forgot to delete. One project I worked on had a 6-minute build caused entirely by a 2 GB `.git` directory in the build context.
- **Using shell form for `CMD`** — `CMD node dist/index.js` spawns a shell as PID 1. When Docker sends SIGTERM, the signal hits the shell, not your app. Your app never shuts down gracefully. Use exec form: `CMD ["node", "dist/index.js"]`.
- **Running as root** — If you don’t add a `USER` instruction, your app runs as root inside the container. Add a non-root user and switch to it.
- **No health check** — Without a `HEALTHCHECK`, your orchestrator can’t tell if your app is actually responding. Add one that hits a `/health` endpoint.
- **Baking in environment variables** — The same image should run in dev, staging, and production. Only environment variables should change between environments. Validate them at startup with a schema library so missing variables fail fast.
- **Ignoring signal handling** — Node.js inside Docker needs explicit SIGTERM handling. Without it, Docker waits 10 seconds then SIGKILLs your process — open database connections leak, in-flight requests are dropped.
- **No image tagging strategy** — Tag images with the commit SHA, not just `latest`. When something breaks in production, `latest` tells you nothing. A SHA tag tells you the exact commit running.
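The env-validation point above can be sketched without any dependencies — a schema library like zod gives richer error messages, but the fail-fast principle is the same. The variable names (`DATABASE_URL`, `PORT`) are illustrative:

```typescript
type Env = Record<string, string | undefined>;

interface Config {
  databaseUrl: string;
  port: number;
}

// Validate once at startup; crash immediately if anything is missing,
// instead of failing mysteriously on the first request that needs it.
function loadConfig(env: Env): Config {
  const missing = ["DATABASE_URL", "PORT"].filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  const port = Number(env.PORT);
  if (!Number.isInteger(port) || port <= 0) {
    throw new Error(`PORT must be a positive integer, got "${env.PORT}"`);
  }
  return { databaseUrl: env.DATABASE_URL as string, port };
}
```

Call `loadConfig(process.env)` at the very top of your entrypoint, before opening any connections, so a misconfigured container dies with a clear message in its logs.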
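The signal-handling point can look like this with Node’s built-in `http` module — a minimal sketch; the port and the 9-second safety timeout (chosen to beat Docker’s default 10-second grace period) are assumptions to adapt:

```typescript
import http from "node:http";

const server = http.createServer((_req, res) => {
  res.end("ok");
});

function shutdown(signal: string): void {
  console.log(`${signal} received, draining connections`);
  // Stop accepting new connections; in-flight requests finish first.
  server.close(() => {
    // Close DB pools, queues, etc. here, then exit cleanly.
    process.exit(0);
  });
  // Safety net: force-exit just before Docker's 10s SIGKILL deadline.
  setTimeout(() => process.exit(1), 9_000).unref();
}

process.on("SIGTERM", () => shutdown("SIGTERM"));
process.on("SIGINT", () => shutdown("SIGINT"));

server.listen(process.env.PORT ?? 3000);
```

Remember that this only works if your process actually receives SIGTERM — which is exactly why the exec-form `CMD` matters.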
Before shipping any Dockerized TypeScript service, walk through this:
| Check | Why It Matters |
| --- | --- |
| Multi-stage build with prod-only deps | Smaller, more secure image |
| Non-root user (`USER appuser`) | Limits blast radius of container compromise |
| Health check endpoint wired to `HEALTHCHECK` | Orchestrator can detect unhealthy containers |
| Graceful SIGTERM shutdown handler | Clean connection teardown, no dropped requests |
| `.dockerignore` excludes tests, docs, `.git`, `.env` | Faster builds, no secrets in the image |
| Env vars validated at startup | Fail fast with clear errors, not mysterious runtime crashes |
| Image scanned for vulnerabilities (`trivy` or similar) | Catch known CVEs before production |
| Tagged with commit SHA | Full traceability from running container to source code |
Docker is a skill that compounds. Once you’ve built the pattern for one TypeScript service, every subsequent service is copy-paste with minor tweaks. Invest the time to get it right once, and you’ll reuse it for years.
The goal isn’t to become a Docker expert. It’s to have a reliable, repeatable way to package your TypeScript services so they run the same way everywhere — on your machine, in CI, and in production. These patterns get you there.