Autonomous AI Agents in .NET: The Microsoft Agent Framework Explained
Welcome to part three of the building blocks for AI in .NET series. In part one, we explored Microsoft Extensions for AI (MEAI), a unified interface for large language models. Part two covered VectorData for semantic search and retrieval-augmented generation (RAG). Now we dive into the Microsoft Agent Framework, which enables you to build autonomous AI agents that can reason, use tools, and coordinate with one another to solve complex tasks.
What is an AI agent and how does it differ from a chatbot?
An AI agent is fundamentally different from a simple chatbot. A chatbot operates in a reactive loop: it receives input, sends it to a language model, and returns the output. An agent, on the other hand, has autonomy. It can reason about a task, decide which tools to invoke, call those tools, evaluate the results, and determine the next action—all without requiring explicit step-by-step instructions for every scenario.
Source: devblogs.microsoft.com
Think of it as the difference between having a conversation with a colleague (chatbot) versus handing that colleague a to-do list and letting them figure out how to accomplish it (agent). The agent can search for information, run calculations, check external APIs, query databases, or use any tool you provide. This autonomy makes agents suitable for complex, multi-step workflows where the path forward isn't predetermined.
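That reason-act-evaluate cycle can be sketched in a few lines of C#. This is an illustrative sketch only: Model, Tool, and Decision are hypothetical types standing in for the framework's internals, not part of its public API.

```csharp
// Illustrative sketch of the autonomy loop an agent runs internally.
// Model, Tool, and Decision are hypothetical types, not framework API.
async Task<string> RunAgentLoopAsync(Model model, IReadOnlyList<Tool> tools, string task)
{
    var history = new List<string> { task };
    while (true)
    {
        // The model reasons over the task and history, then either
        // picks a tool to call or decides the task is complete.
        Decision decision = await model.DecideNextStepAsync(history, tools);
        if (decision.IsFinalAnswer)
            return decision.Answer;

        // Invoke the chosen tool and feed its result back into the
        // history so the next iteration can evaluate it.
        string result = await decision.Tool.InvokeAsync(decision.Arguments);
        history.Add(result);
    }
}
```

The key point is the loop itself: unlike a chatbot's single request-response pass, the agent keeps deciding and acting until it judges the task done.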
What is the Microsoft Agent Framework and when did it reach 1.0?
The Microsoft Agent Framework is a production-ready SDK for building intelligent agents in .NET (and Python, though the focus here is on C#). It reached its 1.0 release in April 2026. The framework is designed to support everything from simple single-agent scenarios to complex multi-agent workflows with graph-based orchestration. It builds directly on top of MEAI's IChatClient abstraction, making it familiar to developers already using the AI extensions for .NET.
By providing a standardized way to create, configure, and coordinate agents, the framework removes much of the boilerplate involved in tool use, memory management, and inter-agent communication. It's part of Microsoft's larger effort to provide composable building blocks for AI development in .NET.
How does the Agent Framework build on Microsoft Extensions for AI (MEAI)?
The Agent Framework is designed as a higher-level abstraction that leverages MEAI's IChatClient interface. MEAI provides a unified way to communicate with various large language models (OpenAI, Azure OpenAI, local models, etc.). The Agent Framework extends this by adding agent capabilities on top of a chat client. You can convert any IChatClient into an agent using an extension method like .AsAIAgent().
This means if you've already used MEAI to configure a chat client, you can immediately use it as the backbone of an agent. The agent inherits the same model configuration, retry policies, and logging. Moreover, the framework adds agent-specific features such as instructions (system prompts that define behavior), tool bindings, and memory management—all while staying compatible with the broader .NET AI ecosystem.
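As a sketch of that reuse, suppose you already have a chat client configured through MEAI's pipeline (here with logging via ChatClientBuilder, which is part of Microsoft.Extensions.AI); the existingClient and loggerFactory variables are assumed to be in scope:

```csharp
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// An IChatClient configured once through the MEAI pipeline
// (existingClient and loggerFactory are assumed to exist)...
IChatClient chatClient = existingClient
    .AsBuilder()
    .UseLogging(loggerFactory)
    .Build();

// ...becomes the backbone of an agent, inheriting that configuration.
AIAgent agent = chatClient.AsAIAgent(
    instructions: "Answer concisely.",
    name: "Helper");
```

Because the agent wraps the same IChatClient instance, the logging middleware configured above applies to every model call the agent makes.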
How can you create your first agent with the Microsoft Agent Framework?
Creating a basic agent requires just a few lines of code. First, install the NuGet package: dotnet add package Microsoft.Agents.AI. Then, in a console application, you can set up an Azure OpenAI client and convert it into an agent using the .AsAIAgent() extension method. Here's a minimal example:
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;

// Read the endpoint and deployment from the environment, falling back to a default model name.
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-5.4-mini";

// Build an Azure OpenAI chat client and promote it to an agent.
AIAgent agent = new AzureOpenAIClient(
        new Uri(endpoint),
        new DefaultAzureCredential())
    .GetChatClient(deploymentName)
    .AsAIAgent(
        instructions: "You are good at telling jokes.",
        name: "Joker");

Console.WriteLine(await agent.RunAsync("Tell me a joke about a pirate."));
The .AsAIAgent() call takes an instructions parameter (system prompt) and a name. The agent is now ready to accept tasks via RunAsync. This pattern mirrors the simplicity of MEAI's .AsIChatClient() but adds agent capabilities like tool use automatically if you define tools.
Source: devblogs.microsoft.com
What types of scenarios does the Agent Framework support (single-agent vs multi-agent)?
The framework is built to scale from trivial to enterprise-grade. For single-agent scenarios, you can create an agent that uses multiple tools to fulfill requests—ideal for tasks like customer support, data analysis, or content generation. The agent autonomously decides which tools to use and in what order.
For multi-agent scenarios, the framework provides graph-based orchestration. You can define a workflow where multiple specialized agents collaborate. For example, one agent might handle research, another performs calculations, and a third compiles results. The orchestration layer manages message routing, state, and error handling between agents. This is particularly powerful for complex business processes, automated research, or simulation environments.
What tools can agents use and how do they decide which to use?
Agents can use any tool you define, including calling external APIs, running database queries, executing calculations, or even invoking other agents. The agent decides which tool to use based on its reasoning and the tool descriptions you provide. The framework leverages the model's ability to understand tool schemas (similar to function calling in OpenAI).
When you register a tool, you provide its name, description, and input parameters. The agent's language model examines the user's request, matches it against tool descriptions, and decides whether to call a tool. After the tool returns a result, the agent evaluates the output and decides on the next step—call another tool, ask a clarifying question, or return the final answer. This loop continues until the agent determines the task is complete.
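The steps above can be sketched in C#. This assumes AsAIAgent accepts a tools parameter (analogous to the tool-binding overloads elsewhere in the .NET AI stack); AIFunctionFactory comes from Microsoft.Extensions.AI, the Description attributes supply the metadata the model matches against, and chatClient is assumed to be an already-configured IChatClient:

```csharp
using System.ComponentModel;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// A plain method becomes a tool; its description tells the model when to call it.
[Description("Gets the current weather for a city.")]
static string GetWeather([Description("The city name.")] string city)
    => $"It is sunny in {city}."; // stand-in for a real API call

AIAgent agent = chatClient.AsAIAgent(
    instructions: "You are a helpful weather assistant.",
    name: "WeatherBot",
    tools: [AIFunctionFactory.Create(GetWeather)]);

// The agent decides on its own to call GetWeather before answering.
Console.WriteLine(await agent.RunAsync("What's the weather in Oslo?"));
```

Note that the code never tells the agent to call GetWeather; the model infers it from the request and the tool's description, which is why clear descriptions matter.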
How does the Agent Framework handle orchestration in multi-agent workflows?
For multi-agent scenarios, the framework uses a graph-based orchestration model. You define a graph where nodes represent agents or actions, and edges represent communication flows or decision points. The orchestration engine manages the execution order, passes messages between agents, and handles errors and retries.
Each agent in the graph has its own set of instructions and tools. The orchestrator can support different topologies: sequential pipelines, parallel execution, conditional branching, or even recursive loops. You can also inject human-in-the-loop checkpoints where a person reviews or approves an agent's action before proceeding. This flexibility makes it suitable for both simple automations and complex, adaptive workflows that must respond to changing conditions.
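As an illustration of the graph model, a sequential research-then-calculate-then-compile pipeline might be wired up as below. Treat this as a sketch: the WorkflowBuilder type and AddEdge method names are illustrative assumptions rather than a confirmed API surface, and chatClient is assumed to be an already-configured IChatClient.

```csharp
using Microsoft.Agents.AI;

// Three specialized agents, each with its own instructions.
AIAgent researcher = chatClient.AsAIAgent(
    instructions: "Research the topic and list the key facts.", name: "Researcher");
AIAgent analyst = chatClient.AsAIAgent(
    instructions: "Run calculations on the facts you are given.", name: "Analyst");
AIAgent writer = chatClient.AsAIAgent(
    instructions: "Compile the results into a short report.", name: "Writer");

// Nodes are agents; edges define how messages flow between them.
// WorkflowBuilder and AddEdge are illustrative names, not confirmed API.
var workflow = new WorkflowBuilder(researcher)
    .AddEdge(researcher, analyst)
    .AddEdge(analyst, writer)
    .Build();
```

The same three agents could be rearranged into a parallel or conditional topology by changing only the edges, which is the practical payoff of the graph-based model.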