Microsoft’s Go-based TypeScript compiler could finally make tsc feel fast
Microsoft is porting the TypeScript compiler from its current Node.js runtime to Go under a project called Corsa. The headline number is hard to miss: up to 10x faster builds and type-checking, with lower memory use.
If that number holds up outside Microsoft’s own demos, this is one of the biggest TypeScript infrastructure changes in years.
TypeScript is now baked into large-company plumbing. It runs through monorepos, CI pipelines, editor extensions, code generators, test runners, build tools, and internal developer platforms. When tsc slows down, the cost shows up everywhere: local feedback loops, PR validation, autocomplete latency, and cloud build bills.
For teams at scale, a faster compiler changes how much TypeScript pain they’ll tolerate.
Why Microsoft is doing this now
The current compiler is written in TypeScript and runs on V8 through Node.js. That has real advantages. The compiler stays close to the language, and the team has been able to evolve it quickly. It also made TypeScript unusually hackable for a compiler project.
But that setup hits a wall on very large codebases.
The same problems keep coming up:
- Startup cost from spinning up Node and loading the compiler runtime
- GC pauses when the compiler creates and discards large numbers of AST nodes and symbol tables
- Memory overhead from storing compiler data structures as dynamic JavaScript objects
That’s manageable on small projects. In monorepos, tsc --noEmit can turn into a multi-minute gate.
Go makes sense here for practical reasons. Native binaries, fast startup, decent garbage collection, straightforward concurrency, and easy cross-platform deployment. Microsoft seems to want a better compiler without dragging in a much stranger toolchain.
Where the speedup probably comes from
The 10x figure won’t come from magic. It comes from removing overhead the current architecture pays all the time.
In the existing compiler, syntax trees and related metadata live as JavaScript objects. That’s flexible. It’s also expensive when you’re dealing with millions of nodes. Object headers, hidden classes, pointer churn, and GC behavior add up fast.
A Go implementation can store those nodes in compact structs with predictable layouts. That matters. Compilers are dominated by data structures. Better memory locality and lower per-node overhead can cut a lot of wall-clock time before concurrency even enters the picture.
Startup is another easy win. A native binary launches faster than a Node-based toolchain. On a laptop that’s nice. In CI, where fresh containers and workers keep getting spun up, it saves money.
Then there’s parallelism. The Node-based compiler has largely lived inside a single-threaded execution model. A Go port can spread work across cores with goroutines, whether that ends up happening per module, per project slice, or elsewhere in the analysis pipeline. Type-checking doesn’t parallelize perfectly, but enough of it should for modern multi-core machines to matter.
It’s still worth being careful with the headline. Writing a compiler in Go does not somehow guarantee a 10x jump. Gains like that usually come from a stack of engineering decisions: data layout, caching, contention control, incremental invalidation, and throwing out abstractions that were acceptable in JavaScript but expensive at this scale.
The IDE story may matter more than the CLI
Microsoft has also outlined a different model for the language service. Instead of embedding the TypeScript server in a Node process, editors could talk to a Go daemon over an RPC layer such as gRPC or JSON-RPC.
That’s a fairly big shift.
Today, TypeScript tooling often feels like one overloaded process doing everything at once: parsing, checking, watching files, tracking project state, answering editor requests, and trying to recover when plugins misbehave. A separate native service could give Microsoft cleaner isolation and shared caches.
That could help with:
- Editor responsiveness on large workspaces
- Crash isolation when extensions misbehave
- Shared project state across terminals, editor windows, and other tools
- A cleaner path for non-Node tools that want compiler-grade analysis
For VS Code users, this may end up being the part they notice most. Faster CI checks are useful. Near-instant rename, find-references, and autocomplete in a huge repo changes daily work.
What large teams get out of this
The immediate winners are the companies already sitting on giant TypeScript codebases.
If you run a monorepo with hundreds of packages, a faster compiler pays off quickly.
Shorter feedback loops
This is the obvious one. If a full type-check drops from minutes to seconds, people run it more often. Slow feedback leads to bad habits. Developers skip checks locally, push speculative commits, and wait for CI to tell them things their laptop should’ve caught.
Speed changes that.
Lower CI cost
TypeScript compile and check steps show up in a lot of PR pipelines. If runtime drops anywhere near the claimed amount, compute time and queue pressure drop with it. For teams spending real money on hosted runners, that matters.
More room for strict settings
This gets less attention, but it’s real. Teams often back away from expensive compiler settings and deep project references because the pain stacks up. If Corsa makes strict checking cheap enough, some of those teams will turn things back on.
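As one concrete example of what "turning things back on" means, here is a sketch of the kind of tsconfig options teams sometimes relax when checks get slow. The flags are real TypeScript compiler options; whether any given team has them off for performance reasons is, of course, project-specific:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "incremental": true
  }
}
```

Options like `noUncheckedIndexedAccess` catch real bugs but make checking more expensive; a much faster checker lowers the cost of keeping them enabled.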
That’s one of the few kinds of performance work that can improve code quality indirectly.
The risky part
Porting a mature compiler to another language is risky. TypeScript’s current implementation has years of edge-case handling and ecosystem assumptions baked into it. Matching behavior exactly is hard.
A few pressure points stand out.
Compatibility with existing tooling
The TypeScript ecosystem depends heavily on implementation details, not just official APIs. Language service plugins, custom wrappers, build tools, and editor integrations may assume a Node-hosted server model. If Corsa changes process boundaries or protocol behavior, some of that tooling will need real work.
Feature parity
A fast command-line type-checker is the easy part. Fully replacing the current compiler and language service is harder. Microsoft has to match weird project setups, incremental rebuild paths, editor behavior, and all the corners of the API surface people depend on without meaning to.
Microsoft says a CLI preview comes first, with a fuller compiler and language service later. That order makes sense. It also tells you parity work is going to take time.
Plugins and transforms
A native compiler sounds great until someone needs the old JavaScript-level extensibility. If Microsoft moves toward an RPC-based plugin model, it may end up cleaner and safer. In the short term, it could annoy developers who are used to reaching directly into the current stack.
Concurrency bugs
Go makes parallel work easier to write. It doesn’t make shared compiler state easy to reason about. Incremental type-checking, project references, symbol resolution, and cache invalidation all get complicated once multiple workers are involved. Microsoft can handle that work, but it’s still a reason to be skeptical of neat performance claims until more people try this on real repos.
Why Go?
A lot of developers expected Rust for a rewrite like this. Go is the more boring choice. That’s probably why it fits.
Go gives Microsoft:
- cross-platform static binaries
- fast enough native performance
- straightforward memory management
- easy concurrency primitives
- a familiar model for daemons and tooling
For a compiler that has to run anywhere developers run TypeScript, boring is a feature.
Rust might give tighter control over memory layout and better raw performance in some paths, but it also raises the complexity bar for a team maintaining a very large, fast-moving compiler. Go lands in a practical middle ground: much closer to systems-level performance than Node, without turning the compiler into a language-lawyer project.
What to watch next
The interesting questions are practical ones.
Can Corsa get anywhere near the promised speedup on ugly enterprise repos, not polished benchmarks?
How much memory does it actually save under editor workloads?
Does the language service feel better in VS Code, or does the RPC split add its own latency and failure modes?
And how much ecosystem breakage shows up when people try to swap a native compiler into tooling built around Node assumptions?
If Microsoft pulls this off, TypeScript development gets materially less annoying. That’s enough. Developers don’t need a grand story. They need tsc to stop being the slowest part of writing TypeScript. Corsa might finally do that.