Nexos.ai’s €30M raise shows where enterprise AI is heading: gateways, policy engines, and less chaos
Nexos.ai has raised a €30 million Series A at a €300 million valuation, with Index Ventures and Evantic Capital co-leading the round. The startup was founded by Nord Security co-founders Tomas Okmanas and Eimantas Sabaliauskas, and its pitch is clear enough: sit between employees, internal apps, and the model providers doing the actual inference.
That may sound like infrastructure plumbing. It is. And right now, that’s where a lot of enterprise AI projects get stuck.
Most rollouts don’t fail because nobody knows how to call an LLM API. They fail because each team picks a different model, pushes data through a different vendor, logs prompts in a different place, and then has no clean answer when legal, security, or finance starts asking basic questions. Where did this data go? Which model handled it? Why did spend jump 10x last month? Can this workload stay in the EU? Can we swap providers without rewriting half the stack?
Nexos is betting companies will pay for a dedicated AI gateway and governance layer. That looks like a solid bet.
Why this category is heating up
The company describes itself as a neutral layer for model access, or, in its own branding, “Switzerland for LLMs.” The slogan’s a bit much, but the product idea is simple: one policy-aware endpoint in front of a messy, fast-moving model market.
That matters because enterprises are already spreading work across OpenAI, Anthropic, Google Vertex AI, AWS Bedrock, Azure-hosted models, and open-weight models running in a VPC through vLLM or Triton. A year ago, plenty of teams could standardize on one provider and move on. That already looks limiting.
The cost spread between models is wide. Latency varies. Some workloads need structured output and strict schema adherence. Some need EU data residency. Some shouldn’t touch an external SaaS model at all.
So the control point is moving up the stack. The value isn’t just in the model anymore. It’s in the layer that picks models, applies guardrails, tracks spend, and produces an audit trail security teams can live with.
That’s the slot Nexos wants.
What Nexos is building
The product has two main parts.
First, an AI Workspace for employees. Think of it as the managed version of the standard enterprise pattern where everyone quietly uses ChatGPT anyway.
Second, and more important for engineering teams, an AI Gateway that puts roughly 200 models behind a unified API surface. The gateway handles policy enforcement, routing, telemetry, and cost controls. Nexos also says private model support is on the roadmap, which matters if it wants serious regulated workloads.
The architecture will look familiar to anyone who’s worked with API gateways, service meshes, or cloud policy systems:
- a control plane for identity, access rules, residency, DLP, budgeting, and audit
- a data plane for request transformation, model selection, failover, caching, and inference routing
That split makes sense. Put this logic inside every app and you get policy drift, duplicated code, and a maintenance mess. Centralize it and platform teams get one place to manage rules and inspect traffic.
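To make the split concrete, here is a minimal sketch in Python. The policy schema, team names, and model identifiers are all hypothetical; the point is only that rules live in one place (the control plane) and every request is checked against them on the way through (the data plane).

```python
from dataclasses import dataclass

# --- control plane: where the rules live (hypothetical schema) ---
@dataclass
class Policy:
    allowed_models: list[str]
    allowed_regions: list[str]
    monthly_token_budget: int

POLICIES = {
    "support-team": Policy(
        allowed_models=["cheap-summarizer", "quality-drafter"],
        allowed_regions=["eu-west-1", "eu-central-1"],
        monthly_token_budget=5_000_000,
    ),
}

# --- data plane: per-request enforcement and routing ---
def route(team: str, model: str, region: str) -> str:
    policy = POLICIES[team]
    if model not in policy.allowed_models:
        raise PermissionError(f"model {model} not allowed for {team}")
    if region not in policy.allowed_regions:
        raise PermissionError(f"region {region} not allowed for {team}")
    # A real gateway would forward to the provider; here we just
    # return the internal endpoint the request would be sent to.
    return f"https://gateway.internal/{region}/{model}"

print(route("support-team", "cheap-summarizer", "eu-west-1"))
```

Apps never see the policy table; they just call the gateway and either get routed or get a clear denial.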
In practice, a request through an AI gateway usually looks something like this:
- authenticate the caller through SSO, SCIM, service accounts, mTLS, or API keys
- inspect the payload for PII, secrets, financial data, or health data
- apply rules on which models are allowed and where inference can run
- redact, tokenize, or mask sensitive fields if needed
- route the request based on cost, latency, quality scores, or region
- validate the response format and run output filters
- emit logs, traces, token usage, and billing attribution
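A few of those steps can be sketched in a dozen lines. This is deliberately naive: the regexes, route names, and the single `allowed_external` flag are illustration-only stand-ins for real classifiers and policy lookups, but they show the shape of inspect → redact → route.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IBAN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def detect_pii(payload: str) -> list[str]:
    # Toy detector: real gateways use proper DLP classifiers.
    findings = []
    if EMAIL.search(payload):
        findings.append("email")
    if IBAN.search(payload):
        findings.append("iban")
    return findings

def redact(payload: str) -> str:
    payload = EMAIL.sub("[EMAIL]", payload)
    payload = IBAN.sub("[IBAN]", payload)
    return payload

def handle(payload: str, allowed_external: bool) -> tuple[str, str]:
    """Return (target route, possibly-redacted payload)."""
    if detect_pii(payload) and not allowed_external:
        # Sensitive data stays on a private model, masked on the way in.
        return "private-model", redact(payload)
    # Clean payloads can be routed on cost.
    return "cheapest-external", payload

print(handle("Refund to DE44500105175407324931 please", allowed_external=False))
```

Even this toy version makes the article's point: the logic is boring, but somebody has to own it, and owning it per-app doesn't scale.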
That’s a lot of plumbing. Most enterprises still don’t have it.
Why developers should care
For engineers, the strongest argument for an AI gateway isn’t governance. It’s churn reduction.
Model APIs change. Pricing changes. Context windows change. Provider reliability changes. If every app is tightly coupled to a vendor SDK and prompt format, switching models gets ugly fast. A gateway gives you a stable internal contract while the provider layer keeps shifting.
That helps even if compliance isn’t your first concern.
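The "stable internal contract" idea is easy to show. In the sketch below, application code depends only on a small protocol; the gateway-backed client (a made-up class, not any vendor's SDK) can swap providers, prompts, or routing behind it without touching the app.

```python
from typing import Protocol

class ModelClient(Protocol):
    """The internal contract apps code against; providers churn behind it."""
    def complete(self, prompt: str, *, max_tokens: int = 512) -> str: ...

class GatewayClient:
    """Hypothetical gateway-backed implementation."""
    def __init__(self, route: str):
        # A named route like "summarize-cheap"; the gateway decides
        # which actual model and provider serves it.
        self.route = route

    def complete(self, prompt: str, *, max_tokens: int = 512) -> str:
        # A real client would POST to the gateway endpoint here; the app
        # never learns which provider handled the request.
        return f"[{self.route}] {prompt[:max_tokens]}"

def summarize(client: ModelClient, text: str) -> str:
    # Application code: no vendor SDK, no provider-specific prompt format.
    return client.complete(f"Summarize: {text}", max_tokens=256)

print(summarize(GatewayClient("summarize-cheap"), "quarterly report"))
```

Swapping the model behind "summarize-cheap" is now a gateway config change, not a code change in every app.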
Say your support assistant uses one model for cheap summarization, another for higher-quality drafting, and a private model for anything containing customer account data. You can define that centrally. Same for fallback order when one provider rate-limits or goes down. Same for forcing JSON schema validation on extraction jobs. Same for token budgets by team.
This is one place where the control plane framing actually holds up. A good gateway becomes the point where platform engineering, security, and FinOps meet.
It also gives teams better observability. An OpenTelemetry-friendly trace for model calls, with prompt metadata, token counts, latency, and model versioning, is far more useful than scraping logs from six providers and three internal services. If you care about evals, routing decisions, or spend attribution, you want that data in one place.
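What that unified telemetry might carry, sketched as plain data. The attribute names loosely follow the OpenTelemetry `gen_ai` semantic conventions (`gen_ai.request.model`, `gen_ai.usage.input_tokens`, and so on); the `billing.team` key and the function itself are hypothetical, and in production these would be attributes on a real span rather than a dict.

```python
import time

def record_model_call(team: str, model: str, provider: str,
                      input_tokens: int, output_tokens: int,
                      started_at: float) -> dict:
    # One record per model call, from one vantage point, regardless of
    # which provider actually served the request.
    return {
        "gen_ai.request.model": model,
        "gen_ai.provider.name": provider,
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
        "latency_ms": round((time.monotonic() - started_at) * 1000, 1),
        "billing.team": team,  # hypothetical key for spend attribution
    }

t0 = time.monotonic()
span = record_model_call("support", "cheap-summarizer", "provider-a",
                         input_tokens=812, output_tokens=164, started_at=t0)
print(span)
```

Once every call emits the same record, evals, routing analysis, and per-team cost reports become queries instead of log archaeology.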
The EU angle matters
Nexos is a European company, and that matters beyond the funding-round profile.
In Europe, data residency and cross-border processing shape architecture decisions. They aren’t side concerns. If you’re selling into finance, healthcare, government, or large enterprise in the EU, you need firm answers on where prompts are processed, where logs are stored, who can access them, and what happens to retained data.
That gives a neutral gateway an opening against vertically integrated cloud stacks. AWS, Google, and Microsoft all have governance tooling around their AI platforms, and it’s getting better. But plenty of enterprises don’t want their entire AI control layer tied to one cloud vendor, especially if they’re already multi-cloud or expect to mix proprietary and self-hosted models.
A gateway can enforce a policy like this: finance requests with IBANs must stay in eu-west-1 or eu-central-1, use only approved model endpoints, mask identifiers before inference, and log only headers for audit retention. That’s concrete. It maps cleanly to the pressure coming from the EU AI Act and broader compliance requirements.
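That finance policy can be written down as data. The rule schema and the tiny evaluator below are a sketch of what a gateway's control plane might evaluate, not Nexos's actual policy language; the endpoint names are placeholders.

```python
# The policy from the paragraph above, as a hypothetical declarative rule.
FINANCE_IBAN_POLICY = {
    "match": {"department": "finance", "payload_contains": ["iban"]},
    "allow_regions": ["eu-west-1", "eu-central-1"],
    "allow_models": ["approved-eu-endpoint-a", "approved-eu-endpoint-b"],
    "transforms": ["mask_identifiers"],
    "audit": {"log_fields": ["headers"]},
}

def evaluate(policy: dict, request: dict) -> dict:
    """Return routing instructions or a denial; a sketch, not a real engine."""
    m = policy["match"]
    applies = (request["department"] == m["department"]
               and any(tag in request["payload_tags"]
                       for tag in m["payload_contains"]))
    if not applies:
        return {"decision": "no_match"}
    if request["region"] not in policy["allow_regions"]:
        return {"decision": "deny", "reason": "region"}
    return {
        "decision": "allow",
        "models": policy["allow_models"],
        "transforms": policy["transforms"],
        "log_fields": policy["audit"]["log_fields"],
    }

print(evaluate(FINANCE_IBAN_POLICY,
               {"department": "finance", "payload_tags": ["iban"],
                "region": "eu-west-1"}))
```

The appeal for auditors is exactly this: the rule is inspectable, versionable, and enforced in one place instead of implied by scattered app code.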
The trade-offs
There’s a reason this space is crowded and still unsettled.
A gateway adds another hop. That can mean more latency, more operational overhead, and one more system to secure. If the gateway goes down, every AI feature behind it can go down too. If the policy engine is clumsy, developers will route around it the first time it blocks a release.
There’s product risk too. The gateway has to stay genuinely neutral while model providers keep adding their own guardrails, eval tooling, tracing, and orchestration features. Bedrock, Vertex AI, and Azure all want to be the place where governance happens. Independent vendors like Nexos need to be meaningfully better across providers, not roughly similar with nicer packaging.
And the hard part isn’t the proxy. It’s policy quality. “Mask PII” sounds straightforward until a downstream system needs partial de-tokenization, or legal wants different retention rules per workflow, or the best-performing model only exists in a region your policy blocks. Centralized control helps, but it also centralizes every annoying exception.
So yes, the category matters. It’s also easy to oversell.
Where Nexos sits
Nexos is entering a market that already includes cloud-native governance products, AI proxy vendors, observability platforms, and eval tooling. The question is whether buyers want one vendor in the middle or a stack assembled from best-of-breed parts.
There’s a fair case for the central platform model. If every prompt and token passes through the gateway anyway, that’s the natural place for policy enforcement, telemetry, caching, retries, routing, and audit. It’s cleaner than bolting point tools around the edges.
But execution is what decides this market. Enterprise buyers will want proof on a few things:
- how fine-grained the policy model really is
- whether private model support is robust or still mostly roadmap
- how much overhead the gateway adds under load
- whether observability is deep enough for production debugging
- how easily teams can integrate without rewriting application logic
Early customers include companies in the Tesonet orbit and Payhawk, which is a decent start. It doesn’t prove the product can handle large, messy enterprises with ugly compliance requirements and sprawling legacy systems. That part is still ahead.
What this round signals
This raise suggests investors think the enterprise AI stack is settling into a familiar shape. First came raw model access. Then orchestration. Then governance, observability, and cost control, because free-form experimentation tends to fall apart once procurement, security review, and budget season show up.
Gateways aren’t glamorous. They are becoming necessary.
For technical leaders, the takeaway is practical. If your company is already using multiple models, dealing with residency constraints, or trying to explain AI spend to finance, you probably need some version of this layer whether you buy it or build it. Leaving those concerns scattered across app code and provider consoles is how you end up with expensive, brittle systems and a security team that stops trusting the whole effort.
Nexos has money, credible founders, and good timing. The harder part is turning “we centralize AI access” into a platform developers are willing to route through every day. That’s where this category gets decided.