01 March 2026
Microsoft Agent Framework: Architecture, Components, and Practical Assembly of an Agent System
In the previous article, I discussed the idea of multi-agent work itself: why it makes sense to split a long task into several roles, and why one universal agent starts to struggle fairly quickly. But how do we move from theory to practice?

After all, the question is not only about inventing the roles of orchestrator, executor, and critic and writing prompts for them. We need to understand how exactly the agents will interact, where the state will live, how tools will be connected, when a single agent is enough, and when an explicit workflow is already needed.

One of the solutions is Microsoft Agent Framework. Let’s take a closer look at it.

Where Microsoft Agent Framework came from

Previously, any conversation about agent systems at Microsoft almost inevitably led either to AutoGen or to Semantic Kernel. Now, however, the main direction has shifted toward Microsoft Agent Framework. In the official overview, Microsoft describes Agent Framework as the direct successor to AutoGen and Semantic Kernel. From AutoGen, it takes simple agent abstractions and multi-agent patterns. From Semantic Kernel, it takes more mature capabilities such as session-based state management, type safety, middleware, filters, and telemetry. On top of that, it adds graph-based workflows as an explicit way to orchestrate multi-step and multi-agent scenarios. In this sense, Agent Framework represents a new stage in the evolution of Microsoft’s agent frameworks.

The documentation explicitly divides Agent Framework into two main categories of capabilities:

  • Agents — for individual agent roles that work with LLMs, tools, MCP, and conversation state.
  • Workflows — for explicit orchestration of steps, agents, and functions in multi-step scenarios.

Underneath them sits another layer of foundational building blocks: providers, sessions, memory/context providers, middleware, telemetry, hosting, and integration mechanics.

Microsoft Agent Framework layers and components

In practice, it is more useful to understand Agent Framework not as one large API, but as several layers of responsibility.

Agents

Agents are the layer where individual AI agents are described and run as independent working roles. Practically speaking, an agent can be understood as a software entity that receives a task, communicates with a model, uses available tools, and gradually arrives at a result. It is not just a single LLM call, but a higher-level abstraction: an agent has a role, instructions, connected tools, access to conversation state, and the ability to complete a task in multiple steps. The Agents layer usually includes:

  • the description of the agent itself and its role;
  • connection to the model through a provider or model client;
  • the set of tools available to the agent;
  • MCP servers for access to external systems;
  • conversation and session state;
  • multi-turn logic, where the work is done not in one request, but through several sequential steps.

Agents are the layer for scenarios where one agent, or one specialized role, can already produce a useful result on its own. For example: a DevOps agent that analyzes a pipeline; a support agent that helps a user solve a problem; or a review agent that checks configuration and suggests fixes.

Workflows

Workflows are the layer where work is described as an explicit execution process. If Agents are responsible for individual roles, Workflows are responsible for the order and logic of their interaction. A workflow can be understood as a graph of steps: first one node runs, then another, then there may be branching, result validation, a pause for human approval, or a transition to the next agent. For example, a workflow can describe a process like this: First, collect pipeline context; then pass it to an agent for analysis; after that, run a Dockerfile check; then send the result to a review agent; and before applying changes, request confirmation from a human. The Workflows layer usually includes:

  • an explicit order of step execution;
  • coordination of multiple agents, functions, or tools;
  • branches and transition conditions between steps;
  • checkpoints, so the process state can be saved;
  • human-in-the-loop, when a decision must be confirmed by a person;
  • support for long-running scenarios, where the process may run for a long time and does not fit into a single request.

Workflows are the layer for scenarios where not only the result matters, but also a controlled and verifiable path to that result. This is especially useful for DevOps, CI/CD, release management, incident response, and other tasks where the cost of an error is high and the process must be reproducible.

Foundational building blocks

Foundational building blocks are the base layer of the framework on which both agents and workflows rely. If Agents describe individual roles and Workflows define the execution process, then foundational building blocks are responsible for infrastructure capabilities: model connections, state storage, context management, tools, observability, and extensibility. This layer includes:

  • model providers and clients — connection to different LLM providers and models;
  • sessions — management of conversation or process state;
  • context providers and memory — passing the required context to the agent and preserving important information between steps;
  • middleware — the ability to intercept, modify, or enrich requests and responses;
  • observability through OpenTelemetry — tracing, metrics, and diagnostics for agents and workflows;
  • skills, tools, and MCP integrations — connection to external systems, APIs, commands, and services.

Hosting & Integrations

Hosting & Integrations is the layer responsible for where the agent application will run and which external systems it can interact with. This layer includes:

  • A2A — interaction between agents and agent systems;
  • Azure Functions — running agent logic as serverless components;
  • Durable Task — support for long-running, reliable, and recoverable processes;
  • DevUI — an interface for development, debugging, and observing agent behavior;
  • custom integrations — connections to internal APIs, corporate systems, DevOps tools, and other external services.

Key concepts

An Agent is a separate AI role that receives a task and tries to bring it to a result. The agent works through a selected model, can use tools, consider conversation context, and return an answer to the user or to the next step of the process.

A provider or client is the way to connect an agent to a specific model or AI service. For example, through a provider, the framework understands where to send the request: Azure OpenAI, OpenAI, or another backend.

Session and state are mechanisms for storing state. They are needed so the agent does not work “from scratch” every time, but can remember the context of the current conversation, continue a multi-turn scenario, and store intermediate data between steps.

Tools, skills, and MCP are ways to give the agent access to actions outside text. Through tools, the agent can call functions, APIs, commands, or external services. Skills can be understood as reusable capabilities, while MCP is a standard way to connect external tools and systems.

Workflow, executors, and edges are entities for multi-step processes. When a task becomes complex, the important thing is no longer just the individual agent, but the entire execution route: which steps go first, which data is passed onward, where branching exists, and who is responsible for each stage.

Middleware is an intermediate processing layer around an agent or workflow. It is needed when you want to centrally add logging, filtering, security policies, error handling, or request and response transformation without rewriting the logic of every agent.

Observability and hosting are what make it possible to use an agent system not only in a demo, but in a real application. Observability provides tracing, metrics, and diagnostics, while hosting and integrations help deploy the solution, connect it to other systems, and embed it into production infrastructure.

When to use agents and when to use workflows

This is one of the most useful decision points in the entire Agent Framework documentation. An agent is better suited when the task is not yet very strictly formalized. For example, the user asks an open-ended question, the system needs to understand the context, call a tool, make a small decision along the way, and return a result. In this scenario, the agent acts as one independent role: it receives the task, reasons through it, uses tools if needed, and produces an answer. An agent is appropriate where one “smart executor” is enough. For example:

  • analyze a pipeline;
  • find a configuration problem;
  • prepare a draft solution;
  • answer a question with context;
  • call one or more tools during the work.

A workflow is better suited when the process can already be decomposed into clear steps. If it is important to define the execution order, branches, checks, checkpoints, or human participation in advance, then it is better to describe the task as a workflow. Workflows are effectively where the multi-agent system lives. For example:

  • first collect data;
  • then pass it to an agent for analysis;
  • then run an automatic check;
  • after that, send the result for review;
  • before the final action, request confirmation from a human.

An agent is good for work inside one role; a workflow is good for managing a process made of several steps.

Single run in Microsoft Agent Framework

One pass usually looks like this:

  1. The application receives a task and decides whether it goes to a single agent or into a workflow.
  2. The framework creates a session and the related conversation or run state.
  3. The agent calls the model provider and, if necessary, invokes tools or MCP.
  4. If the task is represented as a workflow, the data continues through the graph via executors and edges.
  5. Middleware can intercept the processing flow, while observability records telemetry and tracing.
  6. If the scenario is long, checkpointing, persistence, or human-in-the-loop are involved.
  7. The system returns not just text, but a result together with execution context, state, and observability trace.

Agent Framework manages the full lifecycle of an agent or workflow scenario.
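As a sketch, the dispatch decision in step 1 can be made explicit in application code. The `RunAsync` and `InProcessExecution.RunAsync` calls mirror the examples later in this article; `HandleTaskAsync`, the `isMultiStep` heuristic, and the `Workflow` type name (assumed to be what `WorkflowBuilder.Build()` returns) are illustrative assumptions, not framework requirements:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;

// Illustrative entry point: route a task either to a single agent
// or into a workflow. The isMultiStep heuristic is a demo assumption;
// a real application would decide based on its own domain logic.
async Task<string> HandleTaskAsync(string task, AIAgent agent, Workflow workflow)
{
    bool isMultiStep = task.Contains("release", StringComparison.OrdinalIgnoreCase);

    if (!isMultiStep)
    {
        // Single-agent path: one role, one RunAsync call.
        var response = await agent.RunAsync(task);
        return response.ToString();
    }

    // Workflow path: the task flows through the graph of executors.
    await using Run run = await InProcessExecution.RunAsync(workflow, task);

    // Collect node outputs from workflow events, in the same pattern
    // as the workflow example shown later in this article.
    var lastOutput = "";
    foreach (WorkflowEvent evt in run.NewEvents)
    {
        if (evt is ExecutorCompletedEvent completed)
        {
            lastOutput = completed.Data?.ToString() ?? "";
        }
    }

    return lastOutput;
}
```

The point of making the branch explicit is that the application, not the model, decides whether a request deserves the full process or a single role.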

Example of a simple agent

Let’s look at what the simplest agent looks like in Agent Framework. First, add the required packages:

dotnet add package Azure.AI.Projects --prerelease
dotnet add package Azure.Identity
dotnet add package Microsoft.Agents.AI.Foundry --prerelease

Now let’s create a minimal agent. In this example, the agent gets the role of a DevOps assistant, connects to a model through Azure AI Project, and responds to the user’s task.

using System;
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_AI_PROJECT_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_AI_MODEL_DEPLOYMENT_NAME")
    ?? "<model name>";

// Create a client for Azure AI Project.
// AzureCliCredential is convenient for local development if you are already logged in via Azure CLI.
var projectClient = new AIProjectClient(
    new Uri(endpoint),
    new AzureCliCredential());

// Convert the Azure AI Project client into an AI agent.
// The agent gets a model, a name, and instructions that define its role.
AIAgent agent = projectClient.AsAIAgent(
    model: deploymentName,
    name: "DevOpsAssistant",
    instructions: """
        You are a DevOps assistant.
        You analyze deployment and release issues.
        Keep answers concrete. Do not invent facts if the context is missing.
        """);

// Run the agent with a user task.
// RunAsync is the simplest way to send a task and get the final response.
var result = await agent.RunAsync(
    "Find out why the release pipeline started failing after the container image was changed.");

Console.WriteLine(result);

What matters here:

  • Agent is an application object with a defined role.
  • Provider / client is the layer for connecting to a specific AI backend. In the example, AIProjectClient is used.
  • Credential layer is the authentication mechanism. Here, AzureCliCredential is used, which is convenient for local development.
  • Instructions are the role contract. They explain how the agent should behave, what style it should use, and what it should not do.
  • RunAsync(...) is the shortest path from task to answer. You pass a task to the agent, and it returns a result.

Such an agent is suitable for simple scenarios: ask a question, analyze a problem, get a draft analysis, or ask the model to use connected tools. For example, in a DevOps scenario, such an agent can be used for initial analysis:

var answer = await agent.RunAsync("""
    The deployment failed after we changed the base image from ubuntu:22.04 to alpine.
    What should we check first?
    """);

Console.WriteLine(answer);

Here, the agent is not yet managing a complex process. It simply acts as one role that helps the engineer understand the situation. If you need to receive the answer gradually instead of waiting for the final result all at once, you can use streaming mode through RunStreamingAsync(...). This is convenient for UIs, chats, and long answers.

Minimal workflow example

Now let’s look at a workflow. If an agent is one role, then a workflow is already an execution graph. In it, we explicitly describe the steps, execution order, and data transfer between them. The example below is intentionally simple. Its goal is to show the mechanics: there is a first step, there is a second step, and there is a connection between them.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Agents.AI.Workflows;

// This function represents the first workflow step.
// It normalizes the input before passing it to the next step.
Func<string, string> normalize = text => text.Trim();

// Bind the function as a workflow executor.
// An executor is a workflow node that can receive input and produce output.
var normalizeExecutor = normalize.BindAsExecutor("NormalizeExecutor");

RiskTagExecutor riskTag = new();

// Create a workflow that starts with NormalizeExecutor.
WorkflowBuilder builder = new(normalizeExecutor);

// Add an edge from the first executor to the second executor.
// This means the output of NormalizeExecutor becomes input for RiskTagExecutor.
builder.AddEdge(normalizeExecutor, riskTag).WithOutputFrom(riskTag);

// Build the workflow graph.
var workflow = builder.Build();

// Run the workflow in the current process.
// The input goes to the first executor and then flows through the graph.
await using Run run = await InProcessExecution.RunAsync(
    workflow,
    "Deploy plan includes rollback and smoke tests.");

// Read workflow events.
// ExecutorCompletedEvent tells us that a workflow node has finished execution.
foreach (WorkflowEvent evt in run.NewEvents)
{
    if (evt is ExecutorCompletedEvent completed)
    {
        Console.WriteLine($"{completed.ExecutorId}: {completed.Data}");
    }
}

// This executor represents the second workflow step.
// It checks whether the deployment plan contains a rollback strategy.
// Note: in a file with top-level statements, type declarations must come after them.
class RiskTagExecutor() : Executor<string, string>("RiskTagExecutor")
{
    public override ValueTask<string> HandleAsync(
        string message,
        IWorkflowContext context,
        CancellationToken cancellationToken = default)
    {
        // This is intentionally simple demo logic.
        // In a real system, this could call an agent, a policy engine, or an external API.
        var output = message.Contains("rollback", StringComparison.OrdinalIgnoreCase)
            ? $"READY: {message}"
            : $"CHECK_MANUALLY: {message}";

        return ValueTask.FromResult(output);
    }
}

This workflow has two steps. The first step is NormalizeExecutor. It normalizes the input text. The second step is RiskTagExecutor. It checks whether the plan contains the word rollback. If rollback exists, the result is marked as READY. If rollback is not found, the result is marked as CHECK_MANUALLY. So the process looks like this:

Input -> NormalizeExecutor -> RiskTagExecutor -> Output

This simple example clearly shows the main difference between a workflow and an agent. To an agent, we would say: “Look at the deployment plan and decide whether everything is okay.” But in a workflow, we explicitly define the process:

  • first normalize the input data;
  • then check for the presence of a rollback strategy;
  • then return the result;
  • if necessary, later add review, approval, or the next step.
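As a sketch of that last bullet, a review stage can be added with one more executor and one more edge, using only the building blocks already shown above. The `ReviewExecutor` name and its routing logic are illustrative assumptions, not part of the framework:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Agents.AI.Workflows;

// Hypothetical third step: turns the risk tag into a review decision.
// The logic is a demo assumption; in a real system this node could be
// an agent, a policy check, or a human-in-the-loop approval.
class ReviewExecutor() : Executor<string, string>("ReviewExecutor")
{
    public override ValueTask<string> HandleAsync(
        string message,
        IWorkflowContext context,
        CancellationToken cancellationToken = default)
    {
        // Anything tagged CHECK_MANUALLY is routed to a human reviewer.
        var verdict = message.StartsWith("CHECK_MANUALLY", StringComparison.Ordinal)
            ? "Route to human review"
            : "Auto-approve";

        return ValueTask.FromResult($"{verdict}: {message}");
    }
}

// Wiring it into the graph from the example above would look like:
//
//   var review = new ReviewExecutor();
//   builder.AddEdge(riskTag, review).WithOutputFrom(review);
//
// The process then becomes:
//   Input -> NormalizeExecutor -> RiskTagExecutor -> ReviewExecutor -> Output
```

The important property is that the new step is added by extending the graph, not by rewriting the prompt of an existing role.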

Now let’s make the scheme more complex and look at additional framework components.

1. Tools: the agent starts not only answering, but acting

Without tools, an agent can only reason based on the text it has been given. With tools, it can perform actions: call functions, access APIs, retrieve pipeline status, check a deployment, or read external data. For example, let’s add a simple DevOps tool to the agent that returns release status.

using System;
using System.ComponentModel;
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_AI_PROJECT_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_AI_MODEL_DEPLOYMENT_NAME")
    ?? "gpt-4o-mini";

[Description("Get the release status by release id.")]
static string GetReleaseStatus(
    [Description("Release id, for example release-1042.")] string releaseId)
{
    // Demo implementation. In production, call Azure DevOps, GitHub Actions, or another release system.
    return releaseId switch
    {
        "release-1042" => "Failed: container image pull error. Registry returned unauthorized.",
        "release-1043" => "Succeeded.",
        _ => "Release was not found."
    };
}

AIAgent agent = new AIProjectClient(
        new Uri(endpoint),
        new AzureCliCredential())
    .AsAIAgent(
        model: deploymentName,
        name: "DevOpsAssistant",
        instructions: """
            You are a DevOps assistant.
            Use available tools when you need factual release information.
            Keep answers concrete and do not invent facts.
            """,
        tools: [AIFunctionFactory.Create(GetReleaseStatus)]);

Console.WriteLine(await agent.RunAsync("""
    Analyze release-1042.
    Find out why it failed and suggest the first thing to check.
    """));

Here, the important architectural change is that the agent is no longer limited to the user’s text. It can call GetReleaseStatus, get the factual result, and then produce an answer based on it. Previously, it looked like this:

User
  ↓
Agent
  ↓
Model
  ↓
Text answer

Now it looks like this:

User prompt
  ↓
Agent
  ↓
Model decides to call a tool
  ↓
GetReleaseStatus(...)
  ↓
Tool result
  ↓
Model
  ↓
Grounded answer

For DevOps scenarios, this is a fundamental difference. The agent can now work not only as an “advisor”, but also as a participant in the diagnostic process.

2. Sessions and state: the agent stops starting every request from scratch

The next problem of a simple agent is the lack of stable context between calls. If you call RunAsync(...) without a session every time, each run is a separate operation. But troubleshooting almost always happens in multiple steps: first the user describes the problem, then brings a log, then clarifies the error, then asks to check a hypothesis. For this, Agent Framework has AgentSession.

AgentSession session = await agent.CreateSessionAsync();

Console.WriteLine(await agent.RunAsync("""
    Release release-1042 failed after we changed the container image.
    """, session));

Console.WriteLine(await agent.RunAsync("""
    The error says: unauthorized when pulling image from the registry.
    What should I check next?
    """, session));

The second call receives the same session. That means the agent continues the same conversation and can take the previous context into account. According to the documentation, AgentSession is a container for conversation state; it can contain history, memory, or references to external storage, and RunAsync(...) updates the session with input and output messages. In practice, this turns the agent from a one-off model call into a multi-turn assistant:

Run 1:

User: Release failed after image change.
Agent: Check registry access, image tag, service connection.

Run 2 with the same session:

User: Error is unauthorized when pulling image.
Agent: Since we are already investigating image pull failure, check registry credentials...

Without a session, the second answer would be less precise because the agent would have to reconstruct the context again.

3. Memory and context providers: the agent receives external context

Session solves the problem of conversation history. But often the agent needs not only the dialogue, but additional facts: a runbook, release rules, ownership, known issues, architecture notes. In Agent Framework, this is handled by context providers. They are connected to the agent through options and can add additional instructions, messages, or tools to the request before the model call. In the documentation, this is described through AIContextProvider; it can store session-specific state inside AgentSession, rather than in the provider instance itself. A simplified context provider connection looks like this:

using Microsoft.Agents.AI;

AIAgent agentWithContext = new AIProjectClient(
        new Uri(endpoint),
        new AzureCliCredential())
    .AsAIAgent(
        model: deploymentName,
        options: new ChatClientAgentOptions()
        {
            ChatOptions = new()
            {
                Instructions = """
                    You are a DevOps assistant.
                    Use provided operational context when analyzing deployment issues.
                    """
            },
            AIContextProviders = [new DeploymentRunbookContextProvider()]
        });

The provider itself can be a separate component that adds the required operational context before the agent run.

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

internal sealed class DeploymentRunbookContextProvider : AIContextProvider
{
    public DeploymentRunbookContextProvider()
        : base(null, null)
    {
    }

    protected override ValueTask<AIContext> ProvideAIContextAsync(
        InvokingContext context,
        CancellationToken cancellationToken = default)
    {
        // In production, load this from Git, wiki, vector search, or an internal API.
        var runbook = """
            Deployment policy:
            - Production releases require a rollback plan.
            - Smoke tests must run after deployment.
            - Registry access errors should be checked against service connection permissions.
            - Database migrations require manual approval.
            """;

        return new ValueTask<AIContext>(new AIContext
        {
            Messages =
            [
                new ChatMessage(
                    ChatRole.User,
                    $"Operational context for this investigation:\n{runbook}")
            ]
        });
    }
}

Now the agent receives not only the user’s question, but also additional engineering context. This greatly reduces the risk of generic advice and makes the answers closer to the rules of a specific system.

4. Middleware: control appears around the agent

When an agent has tools and state, the next question arises: how do we control execution? For example, we may need to:

  • log all agent runs;
  • count the number of input and output messages;
  • block dangerous actions;
  • intercept tool calls;
  • measure latency;
  • handle errors centrally.

In Agent Framework, middleware is added through a builder:

var middlewareEnabledAgent = agent
    .AsBuilder()
    .Use(runFunc: LoggingMiddleware, runStreamingFunc: null)
    .Build();

The middleware for an agent run itself looks like this:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

async Task<AgentResponse> LoggingMiddleware(
    IEnumerable<ChatMessage> messages,
    AgentSession? session,
    AgentRunOptions? options,
    AIAgent innerAgent,
    CancellationToken cancellationToken)
{
    // Log input message count before the agent runs.
    Console.WriteLine($"Input messages: {messages.Count()}");

    var stopwatch = Stopwatch.StartNew();

    try
    {
        // Continue agent execution.
        var response = await innerAgent
            .RunAsync(messages, session, options, cancellationToken)
            .ConfigureAwait(false);

        stopwatch.Stop();

        // Log output message count after the agent has completed.
        Console.WriteLine($"Output messages: {response.Messages.Count}");
        Console.WriteLine($"Agent duration: {stopwatch.ElapsedMilliseconds} ms");

        return response;
    }
    catch (Exception ex)
    {
        stopwatch.Stop();

        // Log errors in one central place.
        Console.WriteLine($"Agent failed after {stopwatch.ElapsedMilliseconds} ms: {ex.Message}");
        throw;
    }
}

After that, we run not the original agent, but the agent with middleware:

Console.WriteLine(await middlewareEnabledAgent.RunAsync("""
    Analyze release-1042 and explain why the deployment failed.
    """));

This is an important architectural transition. Logging, control, policies, and error handling are no longer spread across prompts and business logic. They become a separate layer around the agent.
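The same layer can enforce policy, not just log. As a sketch, a middleware with the same shape as LoggingMiddleware can inspect the input and refuse to forward it to the inner agent. The blocked-phrase list and the fail-fast behavior are illustrative assumptions; a production system would use a real policy engine and might synthesize a refusal response instead of throwing:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// Hypothetical policy middleware: blocks runs whose input mentions
// destructive operations instead of forwarding them to the model.
async Task<AgentResponse> PolicyMiddleware(
    IEnumerable<ChatMessage> messages,
    AgentSession? session,
    AgentRunOptions? options,
    AIAgent innerAgent,
    CancellationToken cancellationToken)
{
    // Demo values; a real system would load these from a policy source.
    string[] blocked = ["delete the production", "drop database", "force push to main"];

    var inputText = string.Join(" ", messages.Select(m => m.Text)).ToLowerInvariant();

    if (blocked.Any(inputText.Contains))
    {
        // Fail fast in one central place, before any model or tool call happens.
        throw new InvalidOperationException(
            "Run blocked by policy: the request mentions a restricted operation.");
    }

    // Safe input: continue normal agent execution.
    return await innerAgent
        .RunAsync(messages, session, options, cancellationToken)
        .ConfigureAwait(false);
}
```

It plugs in through the same builder shown above: agent.AsBuilder().Use(runFunc: PolicyMiddleware, runStreamingFunc: null).Build(). Because the check lives in middleware, every agent behind that builder gets the same guardrail without any prompt changes.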

5. Observability: agent work can be analyzed after execution

There is another useful level: middleware not for the whole agent run, but for specific function calls. For example, you can log every tool the agent calls:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Agents.AI;

async ValueTask<object?> FunctionCallingMiddleware(
    AIAgent agent,
    FunctionInvocationContext context,
    Func<FunctionInvocationContext, CancellationToken, ValueTask<object?>> next,
    CancellationToken cancellationToken)
{
    // Log the function name before execution.
    Console.WriteLine($"Function call: {context.Function.Name}");

    var result = await next(context, cancellationToken);

    // Log the function result after execution.
    Console.WriteLine($"Function result: {result}");

    return result;
}

It is connected like this:

var agentWithFunctionMiddleware = agent
    .AsBuilder()
    .Use(FunctionCallingMiddleware)
    .Build();

Console.WriteLine(await agentWithFunctionMiddleware.RunAsync("""
    Check release-1042 and explain the failure.
    """));

This is especially useful when tools call external systems: Azure DevOps, GitHub, Kubernetes, ServiceNow, internal APIs. You can see which tool was called, what result it returned, and you can add a security policy or approval before dangerous actions. Microsoft separately describes function calling middleware as a mechanism for intercepting function calls; to continue execution, the middleware must call the provided next.

How the architecture changes

A minimal agent:

User
  ↓
Agent
  ↓
Model
  ↓
Answer

An agent with tools, session, and middleware:

User
  ↓
Agent run middleware
  ↓
AgentSession
  ↓
Context providers
  ↓
Agent
  ↓
Model
  ↓
Function calling middleware
  ↓
Tools
  ↓
Tool result
  ↓
Model
  ↓
Answer

And this is where Agent Framework becomes interesting not as “yet another wrapper over an LLM,” but as an engineering framework.

Tools give the agent the ability to act. Session allows work to continue over multiple steps. Context providers add the required facts from external sources. Middleware provides control, logging, security, and error handling. Observability then allows you to analyze all this work like a normal production system: which calls were made, where the error occurred, which tool ran, and how long each stage took.

As a result, the agent stops being just a prompt with a name. It becomes a managed application component around which real DevOps, support, release management, and automation scenarios can be built.

Building a multi-agent system step by step

In practice, Microsoft Agent Framework is more convenient to use not immediately as a “large multi-agent platform,” but as a set of levels that you can move through gradually.

It is better to start with the simplest option: one agent and one clear provider. At this stage, the goal is not to build a complex architecture, but to validate the basic scenario: what role the agent performs, which model it uses, which instructions it needs, and what result it should return.

The next step is to add tools or MCP integrations. This is needed only when the agent truly requires access beyond text: retrieving data from an API, checking pipeline status, reading configuration, going to GitHub, Azure DevOps, Kubernetes, or an internal system. At this level, the agent becomes not just a conversational assistant, but a participant in the engineering process.

Then state management appears: sessions, multi-turn conversations, memory, and context providers. This is needed when the task does not end with one answer. For example, if the agent investigates an incident, helps with a migration, analyzes several files, or guides the user through a sequence of clarifications. State allows the agent not to start each step from scratch, but to continue working within one task.

After that, it is worth adding middleware, telemetry, and a policy layer. The more capabilities an agent has, the more important it becomes to control its behavior. Middleware helps centrally add logging, filtering, error handling, safety checks, restrictions on tool calls, and rules for production actions. Telemetry and tracing make it possible to see what the agent did, which tools it called, how long the steps took, and where an error occurred.

Only when it becomes clear that one role is no longer enough does it make sense to move to multiple agents. For example, one agent collects context, the second analyzes infrastructure, the third checks risks, and the fourth prepares the final solution. At this stage, the key is not simply to “create many agents,” but to split responsibility between them so that each role is understandable and verifiable.

When a stable process appears between these roles, the next step is a workflow. A workflow is needed where the execution order can no longer be left to one model’s discretion. For example: first collect data, then perform analysis, then conduct a security review, then request confirmation from a human, and only then perform an action. Here, the multi-agent system becomes an execution graph with explicit steps, edges, checkpoints, and human-in-the-loop.

The final stage is hosting and integrations. When the system stops being a local experiment, it needs to be deployed, connected to external services, embedded into existing infrastructure, and properly operated. At this stage, Azure Functions, Durable Task, A2A, DevUI, integrations with corporate systems, and production observability become important.

The practical trajectory looks like this:

Single Agent
  ↓
Tools / MCP
  ↓
Sessions / State / Memory
  ↓
Middleware / Telemetry / Policies
  ↓
Multiple Agents
  ↓
Workflows
  ↓
Hosting / Integrations

The main idea is that a multi-agent system does not need to be designed immediately top-down as a complex graph of a dozen roles. It is more reliable to start with one useful agent, then gradually add tools, state, behavior control, and only after that separate new agent roles and formalize the process as a workflow.

This is how Microsoft Agent Framework allows you to grow from a simple agent to a full multi-agent system without a sharp change in architectural model.

Summary

Microsoft Agent Framework marks a mature stage in the evolution of Microsoft’s multi-agent frameworks. It separates the different levels of the task well: where a simple agent is needed, where an explicit workflow is needed, where state is needed, where policies, telemetry, and hosting are needed, and where it is better to use a regular function instead. All of this makes it possible to effectively implement the ideas and approaches we discussed in the previous article.

However, Microsoft Agent Framework is still an SDK. That means you need to be a developer to build something with it, and the result will be a standalone application. This is good when you are building an independent and self-contained system, but it is not a perfect fit for everyday DevOps tasks, where we often need more freedom and less “programming.” Ideally, we would like something that is already available: take it and use it.

Such a tool also exists: GitHub Copilot in Visual Studio Code. The most interesting part is that it also makes it possible to fully implement all the ideas of the multi-agent approach. That is exactly what we will discuss in the next article.