Generative AI May 24, 2025

Microsoft Build 2025 and the Open Agentic Web: What It Means for .NET, VS Code, and GitHub Copilot

Microsoft Build 2025 turns Copilot, VS Code, and Azure into an agent stack

Microsoft used Build 2025 to make a much bigger pitch than AI features in developer tools. It wants agents sitting across the web, enterprise software, and Windows development, with enough identity, policy, and runtime plumbing to act across tools and data. Every large vendor is chasing some version of that idea. Microsoft showed more of the stack than most.

The short version: the GitHub Copilot code in VS Code is going open source, Visual Studio gets .NET 10 support and better diagnostics, Azure AI Foundry is shaping up as Microsoft’s preferred runtime for enterprise agents, and the company is tying identity, governance, local inference, model routing, and distribution into one platform story.

Some of this is genuinely useful. Some of it is Microsoft tightening platform control. Both apply.

The tooling updates are practical

The least controversial Build news is also the easiest to care about.

Visual Studio gets .NET 10 support, live preview at design time, better Git tooling, and a cross-platform debugger. None of that will rewrite your workflow overnight. It still matters if you run mixed Windows and Linux workloads and you’re tired of the debugger behaving differently depending on where the code runs.

VS Code hits its 100th open-source release and adds features that sound minor until you need them every day, including multi-window support and a built-in staging view. Sensible additions. A few were overdue.

The bigger move is GitHub Copilot in VS Code going open source as part of the editor repo. That matters. It narrows the gap between the editor and the AI layer, and it gives enterprises and extension authors a clearer surface to inspect, adapt, and extend. Microsoft also gets the optics win. It can say the editor and AI workflow still sit in an open development model, even if much of the model layer remains closed or commercially gated.

Developers should read this as a control move too. Microsoft wants VS Code to stay the default place where AI coding happens.

Microsoft wants agents everywhere

The central Build theme was the "open agentic web," Microsoft’s term for agents that can work across apps, services, and data stores instead of sitting inside one chat box. Behind the branding, there’s real product work.

A few examples stood out:

  • An app modernization agent that can upgrade Java 8 to 21, move .NET 6 apps to .NET 9, and help migrate on-prem software to the cloud.
  • An SRE agent that detects incidents, investigates root cause, applies mitigation, and opens GitHub issues with repair tasks.
  • A shift from Copilot as pair programmer to Copilot as what Microsoft calls a peer programmer, where developers assign bug fixes, features, and maintenance work as bounded tasks.

That last point matters because it changes the interface. Microsoft is pushing beyond autocomplete and chat toward a work-queue model for AI coding. That can work when tasks are narrow, testable, and easy to verify. It gets messy fast when the work involves architecture, ambiguous business logic, or production risk.

The SRE agent makes the trade-off obvious. Automated incident triage is useful. Writing up remediation items and sending them to GitHub is useful too. Automated mitigation in production is where things get serious. If the blast radius is large, every step needs guardrails, auditability, and tight identity boundaries. Microsoft knows that, which is why so much of the rest of the announcement leans on Entra ID, Purview, and Defender.
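What a guardrail around automated mitigation might look like can be sketched in a few lines. This is illustrative only: the action names, the blast-radius threshold, and the approval split are invented, not anything Microsoft has shipped.

```python
# Hypothetical guardrail for an SRE-style agent: low-risk mitigations run
# automatically, high-risk ones are held for human approval, and every
# decision lands in an audit log. Names and thresholds are illustrative.
from dataclasses import dataclass, field

@dataclass
class MitigationGate:
    blast_radius_limit: int = 10          # max affected hosts for auto-apply
    audit_log: list = field(default_factory=list)

    def decide(self, action: str, affected_hosts: int) -> str:
        decision = ("auto-apply" if affected_hosts <= self.blast_radius_limit
                    else "needs-approval")
        # Record why the call went the way it did, before anything runs.
        self.audit_log.append(
            {"action": action, "hosts": affected_hosts, "decision": decision})
        return decision

gate = MitigationGate()
print(gate.decide("restart-pod", affected_hosts=3))        # auto-apply
print(gate.decide("failover-region", affected_hosts=200))  # needs-approval
```

The point of the sketch is the shape, not the numbers: the decision happens before the action, and the audit record exists even when the answer is yes.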

Azure Foundry is becoming the real product

There’s plenty of branding here, but the technical shape is clear. Azure AI Foundry is turning into Microsoft’s control plane for agent development, deployment, model access, and runtime management.

The Foundry Agent Service is now generally available. That matters more than another keynote demo. It gives teams a managed place to define agents, workflows, and models, then publish and observe them in production. Microsoft’s example is simple enough:

name: hello-world-agent
description: "A Foundry agent that echoes messages."
models:
  primary: openai/gpt-4
workflows:
  - name: echo
    steps:
      - action: receive_message
      - action: generate_response
        model: primary
        prompt: "{{ message }}"
      - action: send_response

And then:

az foundry agent create --config agent.yaml --resource-group my-rg
az foundry agent publish --name hello-world-agent

By itself, that isn’t revolutionary. The point is standardization. Microsoft wants enterprises defining agents as deployable infrastructure, not as ad hoc scripts tied to a chatbot UI. For platform teams, that’s appealing. You get repeatability, policy controls, observability hooks, and one procurement path.

The downside is obvious. The deeper you go into Foundry conventions, the more your agent architecture starts to mirror Azure’s assumptions. Portability depends on how much of your logic stays in plain code versus service-specific orchestration.

Multimodel routing is one of the smarter parts

One of the better ideas here is the model router. Instead of forcing teams to pick one model for everything, Microsoft is pushing task-based routing across OpenAI and third-party models.

Its example is straightforward:

{
  "modelRouter": {
    "routes": [
      { "task": "debug", "model": "gpt-4" },
      { "task": "code-review", "model": "grok-xai" }
    ]
  }
}

The underlying logic is solid. Different jobs need different trade-offs in latency, cost, reasoning depth, and style. You shouldn’t spend top-tier model money on boilerplate transformations if a smaller or faster model gets the same result.
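The routing idea itself is simple enough to sketch outside any SDK. A minimal Python version, where the task names and model identifiers are illustrative rather than Foundry's actual API:

```python
# Minimal task-based model router: pick a model per task type, with a
# cheap default so top-tier models aren't spent on boilerplate work.
# Task names and model IDs are illustrative, not Foundry's actual API.

ROUTES = {
    "debug": "gpt-4",           # deeper reasoning, higher cost
    "code-review": "grok-xai",  # different style/latency trade-off
}
DEFAULT_MODEL = "gpt-4o-mini"   # fast, cheap fallback for routine tasks

def route(task: str) -> str:
    """Return the model configured for a task, else the cheap default."""
    return ROUTES.get(task, DEFAULT_MODEL)

print(route("debug"))        # "gpt-4"
print(route("boilerplate"))  # "gpt-4o-mini"
```

Even this toy version shows where the leverage is: the routing table, not any single model, is the thing you tune for cost and latency.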

Microsoft also said xAI’s Grok is coming to Azure’s multimodel setup through Foundry. That’s notable less because Grok is essential and more because Azure is positioning itself as a model marketplace with routing above it. If that works, the strategic value shifts away from any single model and toward orchestration, permissions, telemetry, and the procurement wrapper around it.

That’s exactly where hyperscalers want the market.

The enterprise story is identity and governance

The least flashy Build announcements are probably the most important if you plan to deploy agents inside a company.

Microsoft is giving agents identities through Entra ID, with permissions, policy controls, and an agent directory. It also ties agent provisioning into systems like ServiceNow and Workday. That is a very Microsoft move, and it’s the right one. Agents that can read documents, touch tickets, query databases, and call internal APIs need first-class identity management. Otherwise you end up with brittle service accounts wrapped in optimistic prompt engineering.

Then there’s Purview and Defender coverage for agent workloads. Again, not exciting demo material, but necessary. If an agent can access sensitive data or perform actions, you need the same governance stack you use for humans and applications, plus better logging because autonomous systems create more ambiguity around why something happened.

For data, Microsoft is tying Cosmos DB and Fabric together for conversational state, retrieval data, analytics, and digital twin scenarios. That could be useful if you already live in the Azure data stack. If you don’t, it looks like a fairly obvious attempt to pull AI memory, BI, and operational data into the same orbit.

The practical advice is simple: don’t give agents broad access and hope to clean it up later. Start with least privilege, isolate retrieval sources, and assume the audit trail will matter the first time an agent does something expensive or wrong.
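That advice is concrete enough to encode. A minimal sketch of per-agent scoped access with an audit trail, in Python (the scope model, agent names, and resources are invented for illustration and have nothing to do with Entra's actual API):

```python
# Least-privilege check for agents: each agent identity gets an explicit
# allow-list of (resource, action) pairs; everything else is denied and
# logged. Scope model and names are illustrative, not Entra ID.

AGENT_SCOPES = {
    "triage-agent": {("tickets", "read"), ("tickets", "comment")},
}
audit_trail = []

def authorize(agent: str, resource: str, action: str) -> bool:
    allowed = (resource, action) in AGENT_SCOPES.get(agent, set())
    # Log denials and approvals alike; the trail matters most after the fact.
    audit_trail.append((agent, resource, action, "allow" if allowed else "deny"))
    return allowed

print(authorize("triage-agent", "tickets", "read"))      # True
print(authorize("triage-agent", "billing-db", "write"))  # False: out of scope
```

The default-deny posture is the whole point: an agent that can't name the scope it needs shouldn't get it.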

Local and edge support look better than expected

One of Microsoft’s stronger themes this year is local development.

Foundry Local gives developers a runtime, agent services, and CLI tooling to build and test on Windows and macOS without immediately bouncing to Azure. That helps with iteration speed, offline work, and basic debugging sanity. The more your agent workflows depend on cloud-only services, the slower and more expensive they get to test.

Windows AI Foundry goes further by supporting MCP servers and registries and running on CPU, GPU, or NPU. Microsoft also said it’s open-sourcing WSL, which fits the broader pattern. If Windows wants to be taken seriously as an AI development environment, Linux compatibility can’t feel bolted on.

Then there’s NLWeb, an open-source framework for turning a website or API into an agentic interface with minimal code. That could be a useful adapter layer. It could also produce a lot of shallow "chat over CRUD" projects. It depends on how disciplined teams are. A natural-language wrapper on top of a weak API contract still gives you a weak system, just one that fails in more interesting ways.
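The weak-contract point is worth making concrete. A toy natural-language adapter in Python, with intent matching and argument validation against the underlying contract (the intents, regex, and endpoint are invented for illustration, not NLWeb's actual design):

```python
# Toy NL-to-API adapter: map free text to a known intent, then validate
# the arguments against the API contract before calling anything.
# Without the validation step, the wrapper just relays bad inputs politely.
# Intents and the contract are invented for illustration.
import re

CONTRACT = {  # intent -> required, typed arguments
    "get_order": {"order_id": int},
}

def parse(text: str):
    """Match 'order <number>' style requests; return (intent, args) or None."""
    m = re.search(r"order\s+#?(\d+)", text, re.IGNORECASE)
    if m:
        return "get_order", {"order_id": int(m.group(1))}
    return None

def handle(text: str) -> str:
    parsed = parse(text)
    if parsed is None:
        return "error: no matching intent"
    intent, args = parsed
    for name, typ in CONTRACT[intent].items():
        if not isinstance(args.get(name), typ):
            return f"error: {name} must be {typ.__name__}"
    return f"call {intent}({args})"

print(handle("show me order #4521"))   # validated call
print(handle("delete everything"))     # rejected: outside the contract
```

Everything interesting happens in the contract check; the natural-language layer only buys you anything if the API underneath is worth exposing.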

Teams and the Agent Store are about distribution

Microsoft is also trying to solve the distribution problem. Building agents is one thing. Getting them in front of users inside tools they already use is another.

Teams is now the front end for chat, enterprise search across systems like Confluence and Jira, notebooks, creation workflows, and agents. The Teams AI Library supports multiplayer agents, MCP, and memory patterns backed by Azure Search.

Then there’s the new Agent Store, which Microsoft wants to use as a publishing channel across Copilot and Teams. If it gets real adoption, it could matter quite a bit. Distribution is often the missing piece in internal AI projects. Teams already has the audience. Microsoft wants agent developers targeting that installed base the same way traditional developers target app stores or package registries.

I’d still be cautious. Stores fill up quickly. Discovery gets noisy. Governance gets harder when every department can publish a semi-autonomous assistant with broad language access and fuzzy business logic. A crowded agent store could become the enterprise version of browser-extension chaos if Microsoft doesn’t police quality and permissions hard.

One quieter signal: Microsoft Discovery

The final interesting piece is Microsoft Discovery, a scientific R&D platform built on Foundry. It wasn’t the loudest announcement, but it says a lot about where Microsoft thinks agents have staying power.

Science workflows are messy, iterative, and full of multimodal data, literature review, simulation, and planning tasks. If Microsoft thinks the same agent substrate can support research as well as enterprise productivity, then it’s betting that orchestration and tool use matter at least as much as chat UX.

That seems right.

Build 2025 didn’t settle whether the "agentic web" becomes a durable platform shift or another umbrella term that falls apart under real workloads. Microsoft still shipped a serious stack: open-source editor integration, enterprise identity, multimodel routing, local runtimes, managed deployment, and a built-in distribution channel.

For developers, the near-term value is concrete. VS Code and Visual Studio get better. Foundry is worth watching if you need managed agent infrastructure. The model router could cut cost and latency if it’s used carefully. And if you’re responsible for security, Entra plus Purview plus Defender is the part to study before anyone turns an SRE agent loose in production.

The open question isn’t whether Microsoft has an agent strategy. It clearly does. The question is whether teams want one vendor sitting in the middle of coding, deployment, identity, data access, and AI runtime orchestration all at once. For some shops, that will sound efficient. For others, it’s too much control in one place.
