Rocket.new raises $15M by betting that AI app builders need to survive day two
Rocket.new, a startup out of India, has raised a $15 million seed round led by Salesforce Ventures, with Accel and Together Fund also participating. Its pitch is simple enough: plenty of AI coding tools can get you to a flashy first version, then fall apart when the app needs to be usable, maintainable, and safe.
That pitch is finding buyers. Rocket.new says it has users in 180 countries, $4.5 million in ARR, and about half a million generated apps just a few months after beta. Those are big numbers for a seed-stage company. So are the targets. It says it wants to reach $20 million to $25 million ARR by year-end, then $60 million to $70 million by next June.
The more interesting part is the product thesis. Rocket.new is intentionally slower than most vibe-coding tools. It says first app generation takes around 25 minutes, versus roughly three minutes for many rivals. In consumer software, that would look broken. In production software, it can also mean the system is doing more than spitting out a demo.
Speed stopped being enough
The first wave of AI coding tools sold instant gratification. Type a prompt, get an app, feel productive. That still matters. Developers want fast feedback, and non-developers want to see something real appear from text.
But teams keep hitting the same problem. A generated app without auth, sane data modeling, testing, deployment config, secrets handling, and integration plumbing is still a prototype. A polished one, maybe. Still a prototype.
Rocket.new is built around that gap. Salesforce Ventures put it plainly: there’s a lot of distance between AI codegen magic and code that’s actually production-ready at enterprise scale. For once, that isn’t investor filler. It’s a fair read on the category.
The startup’s user mix sharpens the point. About 12% of generated apps are in e-commerce, 10% in fintech, 5% to 6% in B2B tooling, and 4% to 5% in mental health. Those aren’t lightweight categories. They come with payments, user data, audit trails, uptime demands, and policy controls. You can fake your way through a landing page generator. Fintech punishes that fast.
What the stack probably looks like
Rocket.new hasn’t published a detailed architecture. Still, there’s enough in the public material to infer the rough shape.
The company says it uses Anthropic, OpenAI, and Google Gemini models through a router, plus proprietary deep learning trained on datasets from DhiWise, the founders’ previous developer tooling company. That points to a multi-stage orchestration system, not one giant prompt.
A sensible version of that stack probably includes:
- one model or agent for requirements parsing and planning
- another for code synthesis
- validators for schemas, contracts, and generated structure
- tooling for linting, testing, static analysis, and possibly security checks
- templates or opinionated blueprints based on recurring app patterns
That matters because app generation gets brittle quickly when everything is stuffed into one long prompt. You need intermediate artifacts: domain models, DB schema, API contracts, auth rules, UI trees, CI config, environment setup, tests. If those pieces never become explicit, errors tend to stay hidden until deployment. That’s where a lot of AI-generated projects start to wobble.
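Rocket.new hasn't published its architecture, so the following is only a minimal sketch of the "explicit intermediate artifacts" idea: each stage emits a typed, inspectable object, and later stages consume that object rather than the raw prompt. All names and the stub logic here are hypothetical, standing in for model calls.

```typescript
// Hypothetical sketch: a generation pipeline that forces each stage to
// emit a typed, inspectable artifact instead of one opaque prompt->code hop.

interface Plan { entities: string[]; features: string[] }
interface Schema { tables: string[] }
interface Artifacts { plan: Plan; schema: Schema; code: string; errors: string[] }

// Stage 1: parse requirements into a plan (stubbed; a model call in practice).
function planFromPrompt(prompt: string): Plan {
  const hits = prompt.match(/\b(user|order|invoice)s?\b/gi) ?? [];
  const entities = [...new Set(hits.map(h => h.toLowerCase().replace(/s$/, "")))];
  return { entities, features: [prompt] };
}

// Stage 2: derive a schema from the plan, not from the raw prompt.
function schemaFromPlan(plan: Plan): Schema {
  return { tables: plan.entities.map(e => `${e}s`) };
}

// Stage 3: synthesize code against the schema, then validate the result.
function generate(prompt: string): Artifacts {
  const plan = planFromPrompt(prompt);
  const schema = schemaFromPlan(plan);
  const code = schema.tables
    .map(t => `type ${t}_row = Record<string, unknown>;`)
    .join("\n");
  const errors: string[] = [];
  if (schema.tables.length === 0) errors.push("no tables derived from plan");
  return { plan, schema, code, errors };
}

// The plan and schema are visible artifacts a validator (or human) can
// inspect before any code ships, which is the whole point.
console.log(generate("An app where users place orders").schema.tables);
```

The stubs are trivial on purpose: the structural claim is that errors surface at the stage boundary where they occur, instead of hiding inside one long generation.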
Rocket.new’s slower generation time fits that architecture. If the system is doing planning, synthesis, validation, and repair across multiple stages, 25 minutes is plausible. That may still be too slow for some workflows, but at least the wait maps to a real engineering trade-off.
The Supabase pattern stands out
One early usage detail is worth paying attention to. People often prototype websites in Lovable or Replit, then move to Rocket.new to generate native mobile apps tied to an existing Supabase backend.
That’s a useful signal.
Supabase has become a default backend layer for AI-assisted app building because it compresses a lot of annoying work into one product: Postgres, auth, storage, edge functions, and row-level security. For an agentic builder, that’s ideal. The more opinionated and legible the backend, the easier it is to generate code against it without drifting into nonsense.
That’s also why Firebase and Supabase keep showing up in AI app workflows. They shrink the decision surface. Fewer open-ended choices usually means more consistent output. Developers give up some flexibility and get speed and predictability in return.
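To make "shrinking the decision surface" concrete, here is an illustrative sketch (not Rocket.new's actual output) of why an opinionated backend is easy to target: given a tiny table spec, emitting Postgres DDL plus a row-level-security policy is nearly mechanical, because Supabase's conventions (`auth.uid()`, RLS by default) answer most of the design questions in advance. The `TableSpec` shape is invented for the example.

```typescript
// Illustrative only: generating safe-by-default SQL against Supabase
// conventions. The platform's opinions (auth.users, auth.uid(), RLS)
// leave very little for the generator to decide.

interface TableSpec { name: string; ownerColumn: string }

function emitTableSql(spec: TableSpec): string {
  return [
    `create table ${spec.name} (`,
    `  id uuid primary key default gen_random_uuid(),`,
    `  ${spec.ownerColumn} uuid not null references auth.users(id)`,
    `);`,
    `alter table ${spec.name} enable row level security;`,
    `create policy "${spec.name}_owner_only" on ${spec.name}`,
    `  for all using (auth.uid() = ${spec.ownerColumn});`,
  ].join("\n");
}

console.log(emitTableSql({ name: "notes", ownerColumn: "owner_id" }));
```

Against a fully open-ended backend, every one of those lines would be a branching decision, and branching decisions are where generated code drifts into nonsense.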
Rocket.new seems to get that. It isn’t trying to support every backend architecture imaginable. It appears to be leaning into the stacks AI can support reasonably well right now.
Opinionated systems tend to age better
Some developers will hate that. Fair enough. Engineers like freedom until they’re stuck maintaining a codebase built from conflicting abstractions and half-finished decisions.
The trade-off is obvious. If Rocket.new leans heavily on templates and known-good patterns from DhiWise data, it should produce more consistent apps, especially for common stacks like TypeScript, React or Next.js, React Native, Node, and Supabase. The downside is predictable too. Once you move outside those lanes, things may get awkward fast.
That’s a product choice, and probably the right one.
One of the bigger problems in AI coding right now is that many tools don’t impose enough structure. They generate plausible fragments without enforcing a coherent system. That feels flexible in the moment and expensive six months later.
If Rocket.new can keep teams inside well-supported paths while still letting engineers edit the output cleanly, it has a shot at becoming part of an actual delivery workflow instead of a demo generator people abandon after sprint one.
The economics are fine, with limits
Rocket.new offers a free tier with 1 million tokens, then charges $25 a month for 5 million. It says gross margins are currently around 50% to 55%, with a target of 60% to 70%.
That tells you a couple of things.
The product is expensive to run. No surprise there. Multi-model routing, long agent loops, and validation-heavy workflows burn tokens quickly. If the system really spends 25 minutes assembling an app scaffold with tests, integrations, and deployment setup, cost control becomes part of the product itself.
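The back-of-envelope math follows directly from the article's public numbers. $25 a month buys 5 million tokens; at the reported 50% to 55% gross margin, the implied serving cost falls out as a few dollars per million tokens, and the 70% target tells you how far that cost has to drop.

```typescript
// Unit economics implied by the article's figures: $25/month for 5M
// tokens, gross margin reportedly 50-55% today, 60-70% targeted.

function impliedCogs(price: number, grossMargin: number): number {
  // Cost of goods sold per subscription-month.
  return price * (1 - grossMargin);
}

function costPerMillionTokens(price: number, grossMargin: number, tokensMillions: number): number {
  return impliedCogs(price, grossMargin) / tokensMillions;
}

// At 50% margin: $12.50/month in serving cost, i.e. $2.50 per million tokens.
console.log(costPerMillionTokens(25, 0.5, 5));
// At the 70% target, serving cost has to fall to roughly $1.50 per million.
console.log(costPerMillionTokens(25, 0.7, 5));
```

That gap is the product roadmap in miniature: hit the margin target by routing cheaper models, caching, or trimming agent loops, without letting output quality slide.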
The margins are still workable if the output saves meaningful engineering time. That caveat matters. Buyers won’t care about token allotments if they still need a week to fix the generated code.
For technical leads, the usual SaaS math is too shallow here. Don’t compare the subscription price to one developer seat. Compare it to the setup, plumbing, and QA work your team avoids, and to the cleanup work the tool creates later.
“Agentic” still needs supervision
Rocket.new also talks about broader automation, including competitive research and product development flows. That’s where the claims get softer.
“Agentic system” now covers everything from useful workflow orchestration to very expensive autocomplete pretending to reason. The practical version for software teams is narrower: turn requirements into structured plans, generate code in bounded steps, validate the output, then let humans review and refine it.
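A minimal sketch of that narrower version, with stubs standing in for model calls and real validation: generate in bounded steps, validate each one, and escalate to a human when the budget runs out instead of looping forever. The step-count stub is invented purely to make the control flow visible.

```typescript
// "Agentic" on a leash: bounded generation with a validation gate and a
// hard budget. generateStep is a stub for a model call plus checks.

type StepResult = { code: string; valid: boolean };

function generateStep(attempt: number): StepResult {
  // Stub: pretend validation finally passes on the third attempt.
  return { code: `// attempt ${attempt}`, valid: attempt >= 3 };
}

function boundedLoop(maxSteps: number): { code: string | null; steps: number } {
  for (let attempt = 1; attempt <= maxSteps; attempt++) {
    const result = generateStep(attempt);
    if (result.valid) return { code: result.code, steps: attempt };
  }
  // Budget exhausted: hand off to a human instead of burning more tokens.
  return { code: null, steps: maxSteps };
}

console.log(boundedLoop(5)); // converges within budget
console.log(boundedLoop(2)); // gives up and escalates
```

The `null` branch is the part most "agentic" marketing leaves out, and it's the part that makes the pattern safe to put in a delivery pipeline.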
That narrower version is enough, and it’s the useful one.
Teams should not assume that a system which can scaffold an app can also make product, security, or architecture decisions without oversight. The categories Rocket.new highlights, especially fintech and health-adjacent apps, are exactly where confidence can outrun reliability.
A production-oriented AI builder should be judged on boring things:
- does it generate sensible auth flows?
- does it set up RBAC and data access policies correctly?
- does it produce migration files you’d trust?
- are secrets and environment variables handled cleanly?
- are tests meaningful or just decorative?
- can your team understand and modify the code after generation?
Those are the questions that matter.
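Most of that checklist can be encoded as a mechanical review gate rather than a vibe check. The field names below are hypothetical, but the shape is the point: "production-ready" becomes a list of booleans a CI step can fail on.

```typescript
// The boring checklist as a review gate. Field names are invented for
// illustration; real checks would inspect the generated project itself.

interface GeneratedApp {
  hasAuthFlow: boolean;
  hasAccessPolicies: boolean;   // RBAC / data access rules present
  hasMigrations: boolean;       // trustworthy migration files
  secretsInEnv: boolean;        // no credentials baked into source
  testsAssertBehavior: boolean; // tests that fail when behavior breaks
}

function reviewGate(app: GeneratedApp): string[] {
  const failures: string[] = [];
  if (!app.hasAuthFlow) failures.push("no auth flow");
  if (!app.hasAccessPolicies) failures.push("no RBAC/data access policies");
  if (!app.hasMigrations) failures.push("no migration files");
  if (!app.secretsInEnv) failures.push("secrets not externalized");
  if (!app.testsAssertBehavior) failures.push("tests are decorative");
  return failures; // empty array: the boring bar is cleared
}
```

Any AI builder that wants enterprise money should survive a gate like this on its default output, not just on a curated demo.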
Why Salesforce Ventures matters
The funding round stands out partly because Salesforce Ventures led it. That points to a specific market view: enterprises want app generation tied to systems they already use, with governance and operational controls layered on top.
Rocket.new is also opening a Palo Alto headquarters while doubling engineering and product headcount in India over the next year. That’s a familiar pattern. Build where the engineering talent is strong, get closer to enterprise buyers in the US, and go after bigger contracts.
The revenue split already supports that strategy. The US accounts for 26% of revenue, Europe around 15% to 20%, and India roughly 10%. That’s globally distributed demand for the same thing: code generation that doesn’t stop at scaffolding.
What developers should take from this
Rocket.new looks interesting for teams that want AI to handle repetitive app setup across web and mobile, especially on opinionated stacks with Supabase in the middle. It looks less compelling if your product depends on unusual infrastructure, strict internal platform standards, or heavy custom backend logic from day one.
The company’s core claim is that it cares about day two. Good. The market needs more of that and less screenshot bait.
This category is still full of tools that look solid until the generated code hits normal product entropy. Integrations break. Policies drift. Tests go stale. Teams fork away from the generated baseline and never return.
So the standard should be high. If Rocket.new wants to separate itself from Lovable, Bolt, Cursor, and the rest, it has to show that the code stays usable after the first generation, after the third feature request, and after the first ugly production bug report.
That’s where AI app builders usually crack. If Rocket.new holds up there, a 25-minute wait starts to look pretty reasonable.