Meridian raises $17M to build a spreadsheet for AI agents, and the audit trail is the whole point
Meridian has raised $17 million at a $100 million post-money valuation to build what it calls an IDE for agentic financial modeling. The round was led by Andreessen Horowitz and The General Partnership, with QED Investors, FPV Ventures, and Litquidity Ventures also participating. The company emerged from stealth this week and says it signed $5 million in contracts in December, with customers including teams at Decagon and OffDeal.
The funding matters. The product decision matters more. Meridian is building a standalone workspace for finance models with the feel of a developer tool, and it is putting reproducibility at the center.
That’s a sensible read of the market. Finance teams don’t avoid AI because they dislike automation. They avoid tools that give slightly different answers on each run and can’t explain where a number came from. In any process that gets reviewed by a CFO, that’s dead on arrival.
Why this category still has room
“AI for spreadsheets” has been pitched into the ground. Most products fall into two camps.
One is assistant software inside Excel or Google Sheets that writes formulas, cleans tables, or answers questions. Useful enough. Not a platform shift.
The other promises full automation and then collapses the moment someone asks what changed, where a number came from, or whether the model can be rerun next quarter with the same logic.
Meridian is going after that failure point. CEO John Ling told TechCrunch that finance modeling needs to be predictable and auditable. He also made a point anyone near banking or corp dev will recognize. Ask 10 analysts at Goldman for a valuation model and you’ll get 10 versions of almost the same thing. The work is standardized. The output has to hold up under scrutiny. Variation is usually a bug.
That’s where generic LLM behavior starts to hurt. Language models are good at proposing structure and summarizing documents. They are not reliable numerics engines unless you box them in hard.
So Meridian’s bet is straightforward: use agents where they help, then lock down the parts finance teams actually care about.
The architecture is the interesting part
The important detail in Meridian’s setup is the product shape. It behaves more like an IDE than a spreadsheet plugin.
That changes what you can build.
Inside Excel, you inherit decades of UI assumptions, file formats, macro weirdness, and a cell-grid model that was never built for LLM orchestration, lineage tracking, or deterministic execution. Excel is still excellent at what it does. It is also a bad place to run serious agent systems if you need control over tool calls, state, validation, and version history.
A standalone workspace gives Meridian room to do a few things that matter.
Structured tools, not free-form text generation
If an agent is building a DCF, you do not want it dumping chain-of-thought into a worksheet and improvising data extraction. You want bounded tools such as:
- get_sec_filing
- extract_series
- fetch_price_history
- normalize_sheet
Those tools should return typed outputs with schemas and validation. That cuts down the error surface fast. The model becomes a planner and orchestrator instead of the source of truth for arithmetic.
This is already the pattern in serious agent systems. It takes more work than stuffing prompts into an LLM and hoping the output looks plausible, but it gives you something you can operate.
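To make that concrete, here is a minimal sketch of what a bounded tool with a typed, validated output could look like, using pydantic for the schema. The tool name, fields, and hard-coded return data are illustrative assumptions, not Meridian's actual API.

```python
# Minimal sketch of a bounded tool with a typed, validated output.
# Tool name and fields are illustrative, not Meridian's actual API.
from pydantic import BaseModel, field_validator


class CashFlowSeries(BaseModel):
    """Typed result of an extraction tool: the agent never sees raw text."""
    ticker: str
    fiscal_years: list[int]
    free_cash_flow: list[float]  # in USD millions
    source_filing: str           # identifier of the underlying document

    @field_validator("free_cash_flow")
    @classmethod
    def lengths_must_match(cls, v, info):
        years = info.data.get("fiscal_years", [])
        if len(v) != len(years):
            raise ValueError("one cash flow value per fiscal year")
        return v


def extract_series(ticker: str, filing_id: str) -> CashFlowSeries:
    # A real implementation would run a parser over the filing;
    # fixed data here so the sketch runs end to end.
    return CashFlowSeries(
        ticker=ticker,
        fiscal_years=[2022, 2023, 2024],
        free_cash_flow=[410.0, 455.5, 498.2],
        source_filing=filing_id,
    )
```

The point is that validation fails loudly at the tool boundary, before a bad value ever lands in a model.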
An explicit dependency graph
Traditional spreadsheets are graphs, but usually opaque ones. If Meridian models calculations as an explicit DAG, it gets lineage, caching, selective recompute, and easier debugging.
That sounds dry until a finance team changes a revenue assumption and needs to see exactly what moved downstream, which source data fed the model, and what stayed fixed. A DAG-based engine can show that. In a normal spreadsheet, you’re often digging through tabs and named ranges trying to reconstruct the logic by hand.
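As a rough illustration of what that buys you, here is a toy dependency graph with dirty-flag propagation and selective recompute. This is a sketch of the general technique, not Meridian's engine; all names are hypothetical.

```python
# Toy dependency graph: changing one input invalidates only its
# descendants, and get() recomputes only what is dirty.
class Node:
    def __init__(self, name, fn=None, parents=()):
        self.name, self.fn, self.parents = name, fn, list(parents)
        self.children, self.value, self.dirty = [], None, True
        for p in self.parents:
            p.children.append(self)

    def set_value(self, value):
        # Input node: setting it dirties every downstream node.
        self.value, self.dirty = value, False
        for c in self.children:
            c.invalidate()

    def invalidate(self):
        if not self.dirty:
            self.dirty = True
            for c in self.children:
                c.invalidate()

    def get(self):
        # Recompute only if something upstream changed.
        if self.dirty and self.fn:
            self.value = self.fn(*(p.get() for p in self.parents))
            self.dirty = False
        return self.value


revenue = Node("revenue")
revenue.set_value(100.0)
margin = Node("margin")
margin.set_value(0.2)
ebit = Node("ebit", lambda r, m: r * m, [revenue, margin])

print(ebit.get())         # 20.0
revenue.set_value(120.0)  # only ebit is marked dirty; margin is untouched
print(ebit.get())         # 24.0
```

The lineage question ("what moved downstream?") falls out of the same structure: it is just the set of nodes the invalidation touched.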
Freeze the logic after generation
This is the key technical move.
An LLM can draft the model structure, write formulas, or propose a workflow once. After that, the system can freeze those artifacts into deterministic code or spreadsheet expressions. Future runs don’t depend on sampling the model again. They run the same steps against updated inputs.
That’s how you get repeatability. Set temperature=0, restrict outputs to typed function calls, snapshot external data where possible, and treat generated logic as a versioned artifact. The system starts to behave like a build pipeline.
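A minimal sketch of that freeze step, assuming the model emits a formula spec once and a deterministic engine runs it thereafter. The spec format and function names are hypothetical.

```python
# Freeze generated logic under a content hash; reruns never resample the model.
import hashlib, json


def freeze_artifact(spec: dict, store: dict) -> str:
    """Store the generated spec under a version id derived from its content."""
    blob = json.dumps(spec, sort_keys=True).encode()
    version = hashlib.sha256(blob).hexdigest()[:12]
    store[version] = spec
    return version


def run_artifact(version: str, store: dict, inputs: dict) -> float:
    # A real engine would compile the spec; a simple present-value formula here.
    spec = store[version]
    cash_flows, rate = inputs["cash_flows"], spec["discount_rate"]
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))


store = {}
spec = {"kind": "dcf", "discount_rate": 0.10}  # drafted by the model, once
v = freeze_artifact(spec, store)
print(run_artifact(v, store, {"cash_flows": [100, 110, 121]}))
# Next quarter: same version id, new inputs, identical logic.
```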
For finance, that’s the line between a toy and a usable product.
Auditability is a systems problem
Meridian’s pitch leans hard on provenance, and it should. Every finance tool eventually runs into the same demand: show your work.
In practice, every meaningful value needs metadata attached to it:
- source document or API
- transformation function
- parent calculations
- assumptions used
- timestamp or data snapshot version
If a terminal value changes, a reviewer should be able to trace it back through the discount rate, growth assumptions, extracted cash flow series, and underlying filing. Straightforward on paper. Painful in practice. A lot of AI products fall apart here because provenance is expensive if it wasn’t part of the execution model from day one.
For developers, this is a workflow engine problem. Not a prompt-writing problem.
You need immutable logs, versioned specs, typed outputs, and some opinionated rule system around accounting constraints. You also need a clean separation between natural-language steps and numerical execution. The LLM can help choose a template or map a filing section to a field. It should not freestyle the final cash flow math.
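One way to wire in that separation, sketched below: a value type that carries the provenance fields from the list above, so every derived number keeps pointers to its parents. Field names and the discount helper are illustrative, not Meridian's schema.

```python
# A minimal provenance wrapper matching the metadata list above.
# A real system would persist these records in an immutable log.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class TracedValue:
    value: float
    source: str                 # source document or API
    transform: str              # transformation function that produced it
    parents: tuple = ()         # upstream TracedValue records
    assumptions: dict = field(default_factory=dict)
    snapshot: str = ""          # timestamp or data snapshot version


def discount(cf: TracedValue, rate: TracedValue, t: int) -> TracedValue:
    return TracedValue(
        value=cf.value / (1 + rate.value) ** t,
        source="derived",
        transform=f"discount(t={t})",
        parents=(cf, rate),
        assumptions={"period": t},
        snapshot=cf.snapshot,
    )


cf = TracedValue(520.0, "SEC 10-K filing", "extract_series", snapshot="2024-11-01")
rate = TracedValue(0.09, "analyst input", "manual", snapshot="2024-11-01")
pv = discount(cf, rate, t=1)
# Walking pv.parents reconstructs the full chain a reviewer would ask for.
```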
Why the IDE framing lands
The “Cursor for finance” line is easy startup shorthand, but the idea underneath it is sound.
Developers already know the value of:
- version control
- diffs
- repeatable builds
- test suites
- code review
- explicit dependencies
Finance teams have worked without much of that because spreadsheets were the default and switching costs were high. Once generation enters the workflow, those informal habits start breaking down.
A finance modeling IDE could bring software discipline into a part of enterprise work that badly needs it. Git-backed model specs, pull request review for assumptions, unit tests for discount() and compute_terminal_value(), data snapshots for reruns. None of that is exotic to engineers. In finance, it’s overdue.
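For a sense of what those tests might look like, here is a pytest-style sketch against plain-Python versions of the two functions named above. The implementations are standard textbook formulas, not Meridian's code.

```python
# Sketch of unit tests for model logic, assuming plain-Python
# implementations of the functions named in the article.
import math


def discount(cash_flow: float, rate: float, t: int) -> float:
    return cash_flow / (1 + rate) ** t


def compute_terminal_value(final_cf: float, rate: float, growth: float) -> float:
    # Gordon growth terminal value; only defined when rate > growth.
    if rate <= growth:
        raise ValueError("discount rate must exceed growth rate")
    return final_cf * (1 + growth) / (rate - growth)


def test_discount_one_period():
    assert math.isclose(discount(110.0, 0.10, 1), 100.0)


def test_terminal_value_rejects_bad_inputs():
    try:
        compute_terminal_value(100.0, 0.02, 0.03)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```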
The hard part is adoption. Finance professionals live in Excel. They export to Excel, present from Excel, and trust what they can inspect in Excel. Meridian has to prove that a separate environment saves enough time and removes enough risk to justify leaving that muscle memory behind.
Where this can go wrong
The pitch is good. The hard parts stay hard.
Data quality goes first
If the agent pipeline pulls from SEC filings, warehouse tables, CRM extracts, and market data feeds, the weakest source wins. Deterministic garbage is still garbage. An audit trail won’t fix bad mapping, stale snapshots, or conflicting definitions across systems.
“Deterministic” gets overstated fast
Setting temperature=0 helps. It does not solve everything. External APIs change. Retrieval systems drift. Model providers update behavior. If Meridian wants real reproducibility, it needs tight controls around model versions, tool outputs, and data snapshots. Otherwise the same prompt on a different day can still produce subtle differences.
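One hedge against that drift: record a run manifest that pins everything a run depends on, then refuse to treat two runs as comparable when anything changed. A minimal sketch, with hypothetical field names:

```python
# Pin model version, tool outputs, and data snapshots per run,
# and fail loudly when any of them drift between runs.
import hashlib, json


def fingerprint(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]


manifest = {
    "model": "provider-model-2024-11-01",   # pinned model version string
    "prompt_hash": fingerprint("...prompt text..."),
    "tool_output_hashes": {
        "get_sec_filing": fingerprint({"doc": "10-K", "bytes_sha": "..."}),
    },
    "data_snapshot": "warehouse@2024-11-01T00:00Z",
}


def assert_reproducible(old: dict, new: dict):
    drift = {k for k in old if old[k] != new.get(k)}
    if drift:
        raise RuntimeError(f"run inputs drifted: {sorted(drift)}")


assert_reproducible(manifest, dict(manifest))  # passes: nothing drifted
```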
Governance gets very real
Finance leaders will like the audit story. Security and compliance teams will ask the obvious follow-ups: where sensitive deal data is stored, what gets sent to model providers, what is cached, what is retained, and whether any of it can leak across tenants.
That is standard enterprise AI diligence now, but Meridian is working with highly sensitive numbers. Pipeline hygiene matters.
Standalone tools still have to beat incumbents
Meridian is betting that Excel’s installed base is also its architectural limit. That may be right. Microsoft still has distribution, deep Office integration, and a captive user base. Google has similar advantages with cloud-native teams. Meridian has to be materially better, not just cleaner on paper.
What engineers should watch
Even outside finance, Meridian is a useful case study for where agent products are going.
The pattern is getting familiar:
- use LLMs for planning, classification, and schema filling
- route execution through bounded tools
- freeze generated artifacts
- track lineage for every meaningful output
- version prompts, specs, and data like code
- add tests because prompt quality won’t carry the whole system
That is a solid design pattern anywhere correctness matters more than conversational polish. Finance just makes the requirement impossible to ignore.
It also says something about practical AI software in 2026. The products that look most credible are starting to resemble domain-specific operating environments rather than chat windows. Better scaffolding. Tighter constraints. Clearer logs.
If Meridian can make finance teams faster without asking them to trust stochastic spreadsheets, it has a real opening. If not, it joins the long list of AI spreadsheet demos that looked polished and fell apart under real use.