In our previous episode, we made one thing painfully clear: agents should not be driving your business processes. Workflows drive. Agents act. Sequential orchestration was our "first seatbelt": predictable, traceable and (probably) boring in the best way.
Now comes the moment every enterprise hits right after they get a sequential pipeline working: it's slow. Because real onboarding isn't a single lane. It's a multi-lane intersection where Security, Compliance and Finance all want to review the same request and nobody wants to wait for the others to finish.
That's exactly why Concurrent Orchestration exists.
Concurrent Orchestration in Microsoft Agent Framework
Concurrent orchestration means multiple agents (or executors) work at the same time on the same task, then their results are collected and aggregated. That's the pattern Microsoft Agent Framework documents directly for Workflow orchestrations: "Concurrent agents execute in parallel."
The Azure Architecture Center describes it using names engineers already recognise: fan-out/fan-in, scatter-gather, even "map-reduce vibes" (without pretending it's actually MapReduce). The point is not "more agents are better." It's parallel responsibilities with a deterministic merge strategy.
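Stripped of any framework, the shape is simple. Here is a framework-agnostic Python sketch (illustrative only, not MAF code): the same request is dispatched to three stand-in "reviewers" at once, and a single gather point acts as the synchronisation barrier.

```python
import asyncio

# Minimal fan-out/fan-in sketch. The reviewer names and payloads are
# illustrative; in a real system each coroutine would be an agent call.

async def review(name: str, request: str) -> dict:
    await asyncio.sleep(0)  # stand-in for a real LLM/agent call
    return {"agent": name, "notes": f"{name} reviewed: {request}"}

async def fan_out_fan_in(request: str) -> list:
    reviewers = ["Security", "Compliance", "Finance"]
    tasks = [review(r, request) for r in reviewers]   # fan-out
    return await asyncio.gather(*tasks)               # fan-in barrier

results = asyncio.run(fan_out_fan_in("EU customer, SSO + SCIM"))
```

The interesting part is not the dispatch; it's that nothing downstream runs until every reviewer has returned.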
Introducing OnboardFlow
A sales rep (or customer success manager) writes a messy onboarding request into the system:
"Customer is in EU, wants SSO + SCIM, plans to ingest support tickets, might need Salesforce integration, invoicing via PO, and security wants SOC2 + data residency..."
In a sequential world, this becomes a queue with Security review first then Compliance then Finance. Everyone waits. Everyone re-asks the same questions. The customer waits longest. In a concurrent world, the workflow says: "Three reviewers, go."
Security looks for integration risk and controls. Compliance checks data residency and contract flags. Finance evaluates billing and invoicing needs. All at once. Then we aggregate everything into a single artefact, the Onboarding Decision Pack, with an overall recommendation, conditions, conflicts and next actions. Same correctness. Dramatically better throughput.
Fan-out, then fan-in
This is the orchestration shape we're building: one dispatcher fans out to three reviewers, and a single aggregator fans them back in.
If you only remember one thing from this blog post, let it be this: fan-out is easy. Fan-in is the work.
Anyone can dispatch three tasks. The engineering challenge is what happens when they come back. What if one times out? What if two reviewers flag the same concern with different severity? What if Security says "block" but Finance says "approved with conditions"? Those questions are the fan-in design problem and if you don't answer them explicitly, your system will answer them implicitly, usually by silently dropping information.
Aggregation is not "paste outputs together"
If you just dump three reviewer outputs into one blob, you haven't built a concurrent orchestration. You've built three parallel opinions with no resolution strategy. OnboardFlow treats aggregation as a first-class step.
The aggregated Decision Pack is strict JSON with:
- An overall recommendation (Approve, Approve with Conditions, Reject)
- Merged findings with severity and source attribution
- Explicit conflicts (where reviewers disagree)
- Required next actions
- Warnings when a reviewer timed out or failed
```json
{
  "overallRecommendation": "Approve with Conditions",
  "conditions": ["Data residency must be confirmed for EU before go-live"],
  "mergedFindings": [
    {
      "category": "Security",
      "title": "SSO integration requires OAuth2 review",
      "detail": "SCIM provisioning introduces identity sync risk",
      "severity": "Medium",
      "sources": ["SecurityReviewer"]
    }
  ],
  "conflicts": [
    {
      "topic": "Salesforce integration timeline",
      "opinions": [
        { "agent": "SecurityReviewer", "text": "Requires pen-test before enabling" },
        { "agent": "FinanceReviewer", "text": "Customer expects integration at launch" }
      ]
    }
  ],
  "requiredNextActions": [
    "Confirm data residency region with customer",
    "Schedule OAuth2 security review for SSO"
  ],
  "warnings": []
}
```
This is where "enterprise-grade" begins. Once you can consistently produce an aggregated object you can route it, store it, diff it across reruns and turn it into UI that anyone can actually read.
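To make "deterministic merge strategy" concrete, here is a hypothetical merge policy in Python. This is my own illustration, not the OnboardFlow implementation: duplicate findings on the same topic are collapsed, the higher severity wins by a fixed rule, and disagreement is recorded as an explicit conflict instead of being dropped.

```python
# Hypothetical merge policy (illustrative): collapse duplicate findings
# per topic, keep the more severe assessment, surface disagreements.

SEVERITY_RANK = {"Low": 0, "Medium": 1, "High": 2}

def merge_findings(findings):
    merged = {}      # topic -> single merged finding
    conflicts = []   # explicit disagreements, never silently dropped
    for f in findings:
        topic = f["topic"]
        if topic not in merged:
            merged[topic] = {**f, "sources": [f["source"]]}
            continue
        current = merged[topic]
        current["sources"].append(f["source"])
        if f["severity"] != current["severity"]:
            conflicts.append({"topic": topic,
                              "severities": sorted({current["severity"],
                                                    f["severity"]})})
            # Deterministic tie-break: the more severe assessment wins.
            if SEVERITY_RANK[f["severity"]] > SEVERITY_RANK[current["severity"]]:
                current["severity"] = f["severity"]
    return list(merged.values()), conflicts
```

Whatever policy you choose, the essential property is that two runs with the same inputs produce the same Decision Pack.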
How concurrency stays predictable
One of the best hidden gems of Microsoft Agent Framework (MAF) Workflows is that execution happens in supersteps. This is not marketing language: SuperStepStartedEvent, SuperStepCompletedEvent and ISuperStepRunner are real classes in the framework. I know, right?
A superstep works like this:
- Collect pending messages from the previous stage
- Route messages to target executors based on edges
- Run all target executors concurrently
- Wait for all to complete (barrier synchronisation)
- Advance to the next superstep
That means concurrency is not chaos by default. It's concurrency with a synchronisation point. The fan-out happens within a superstep and the fan-in is the barrier that ensures all results are available before the next stage starts.
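The superstep loop can be sketched in a few lines. This toy Python model is my own simplification (not framework code): it processes everything reachable in the current wave, then advances only when the whole wave has finished.

```python
# Toy superstep loop over a simple edge map {source: [targets]}.
# In MAF the executors in one superstep run concurrently; here they run
# in sequence, but the barrier semantics are the same: the next superstep
# starts only after every executor in the current one has completed.

def run_supersteps(edges, handlers, start, message):
    pending = [(start, message)]
    outputs = []
    steps = 0
    while pending:
        steps += 1
        next_pending = []
        for node, msg in pending:          # one superstep's worth of work
            result = handlers[node](msg)
            for target in edges.get(node, []):
                next_pending.append((target, result))
            if not edges.get(node):        # leaf executor: emit output
                outputs.append(result)
        pending = next_pending             # barrier: advance as a batch
    return steps, outputs
```

With a start node fanning out to two children, the whole run takes exactly two supersteps, however fast or slow the individual handlers are.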
This is why AddFanInBarrierEdge exists in the WorkflowBuilder API. This barrier is a first-class concept, not something you bolt on with Task.WhenAll and hope for the best.
Workflow Builder API
OnboardFlow doesn't run the entire pipeline as a single MAF workflow. The sequential steps (Intake, Extract Profile, Customer Next Steps, Final Package) are orchestrated by OnboardFlowOrchestrator using regular IStepExecutor implementations. The MAF WorkflowBuilder is used only for the concurrent review stage which is the part that actually needs fan-out/fan-in.
```csharp
public static Workflow Build(IChatClient chatClient, ILlmClient llmClient)
{
    var start = new ReviewStartExecutor();

    ChatClientAgent securityReviewer = new(chatClient,
        name: "SecurityReviewer",
        instructions: "Assess SSO/SCIM, encryption, network security...");

    ChatClientAgent complianceReviewer = new(chatClient,
        name: "ComplianceReviewer",
        instructions: "Assess data residency, GDPR/CCPA/HIPAA...");

    ChatClientAgent financeReviewer = new(chatClient,
        name: "FinanceReviewer",
        instructions: "Assess billing feasibility, credit risk...");

    var aggregation = new ReviewAggregationExecutor(llmClient);

    return new WorkflowBuilder(start)
        .WithName("OnboardFlow-ConcurrentReview")
        .AddFanOutEdge(start, [securityReviewer, complianceReviewer, financeReviewer])
        .AddFanInBarrierEdge(
            [securityReviewer, complianceReviewer, financeReviewer], aggregation)
        .WithOutputFrom(aggregation)
        .Build();
}
```
The ReviewStartExecutor is a plain Executor that receives the applicant profile JSON and sends a review prompt plus a TurnToken to trigger all three ChatClientAgent reviewers concurrently. Three methods define the concurrent shape:
- AddFanOutEdge(source, targets[]): one executor dispatches to many. The framework sends the same message to all targets and they execute in parallel within the same superstep.
- AddFanInBarrierEdge(sources[], target): many executors converge on one. The barrier buffers results until all sources have sent at least one message. Only then does the aggregator receive the collected List&lt;ChatMessage&gt;.
- WithOutputFrom(executor): declares where the workflow's final output comes from.
The barrier's internal state tracking is elegant. It maintains an Unseen set of source IDs and removes each source as results arrive. When Unseen.Count == 0, the barrier releases all buffered messages to the aggregator in a single batch, grouped by source. This is a deterministic synchronisation point and not a race condition waiting to happen.
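The same Unseen-set idea can be sketched in a dozen lines of Python. The field and method names here are illustrative, not the framework's actual internals:

```python
# Sketch of a fan-in barrier: buffer per-source messages and release
# them as one batch only once every expected source has reported.

class FanInBarrier:
    def __init__(self, sources):
        self.unseen = set(sources)   # sources we are still waiting on
        self.buffer = {}             # source -> buffered messages

    def receive(self, source, message):
        """Returns the full batch when all sources have arrived, else None."""
        self.buffer.setdefault(source, []).append(message)
        self.unseen.discard(source)
        if self.unseen:
            return None              # barrier holds
        return self.buffer           # release everything, grouped by source
```

Because release happens in a single batch, the aggregator never observes a half-complete set of reviews, which is exactly what makes the merge step deterministic.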
The orchestrator calls ConcurrentReviewWorkflow.Build(chatClient, llmClient) and runs it via InProcessExecution.RunStreamingAsync, watching for WorkflowOutputEvent to extract the aggregated Decision Pack JSON. This hybrid approach — standard orchestration for sequential steps, MAF for the concurrent stage — keeps the architecture clean without forcing every step into a framework abstraction.
The aggregator executor
The ReviewAggregationExecutor extends Executor<List<ChatMessage>> and takes an ILlmClient dependency. It collects reviews incrementally. The barrier delivers messages per source, so the aggregator's HandleAsync is called once per reviewer. Only when all three have arrived does it merge them into a single Decision Pack.
```csharp
internal sealed class ReviewAggregationExecutor : Executor<List<ChatMessage>>
{
    private readonly ILlmClient _llmClient;
    private readonly List<(string Agent, string ReviewJson)> _reviews = [];
    private const int ExpectedReviewerCount = 3;

    public ReviewAggregationExecutor(ILlmClient llmClient)
        : base("ReviewAggregation")
    {
        _llmClient = llmClient;
    }

    public override async ValueTask HandleAsync(
        List<ChatMessage> messages,
        IWorkflowContext context,
        CancellationToken ct = default)
    {
        var assistantMessage = messages
            .LastOrDefault(m => m.Role == ChatRole.Assistant);
        if (assistantMessage != null)
        {
            _reviews.Add((
                assistantMessage.AuthorName ?? "Unknown",
                assistantMessage.Text ?? ""));
        }

        if (_reviews.Count >= ExpectedReviewerCount)
        {
            string decisionPackJson = await MergeReviewsAsync(ct);
            await context.YieldOutputAsync(decisionPackJson, ct);
        }
    }

    private async Task<string> MergeReviewsAsync(CancellationToken ct)
    {
        string allReviews = string.Join("\n\n", _reviews.Select(r =>
            $"--- {r.Agent} ---\n{r.ReviewJson}\n--- END {r.Agent} ---"));

        string prompt = $"Merge the following reviewer assessments "
            + $"into a single Decision Pack:\n\n{allReviews}";

        return await _llmClient.InvokeForJsonAsync(prompt,
            "You are a senior decision aggregator...", ct);
    }
}
```
Notice what's different from sequential. The aggregator doesn't just pass through: it reasons across multiple inputs and produces a new artefact that none of the individual reviewers could produce alone. The incremental collection pattern also means the aggregator is stateful. The _reviews field accumulates across invocations until the expected count is met.
Handling failures
Real concurrent systems need to handle partial failure. What happens when the Security reviewer times out but Compliance and Finance both returned valid results?
OnboardFlow handles this with step-level failure tracking and graceful degradation:
- If a reviewer fails or times out, its step is marked Failed and a StepFailed SignalR event is emitted
- The orchestrator catches step-level errors and marks the overall run as Failed
- The Decision Pack schema includes a warnings array to surface gaps: ["Security reviewer timed out, manual security review required"]
- Future iterations can implement partial-success semantics where the aggregator proceeds with 2 of 3 reviews
This is a design decision, not an accident. In enterprise workflows, "I got 2 out of 3 reviews" is almost always more useful than "everything failed because one reviewer was slow." The aggregator must be aware of this possibility and the schema must surface it.
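A hypothetical partial-success aggregator could look like the Python sketch below. This is the "future iteration" idea, not current OnboardFlow behaviour, and the quorum value is an assumption:

```python
# Hypothetical partial-success aggregation: proceed once a quorum of
# reviewers has reported, and surface missing reviewers in the warnings
# array instead of failing the whole run.

def aggregate(reviews, expected_agents, quorum=2):
    arrived = {r["agent"] for r in reviews}
    missing = [a for a in expected_agents if a not in arrived]
    if len(arrived) < quorum:
        raise RuntimeError("not enough reviews to aggregate")
    return {
        "reviews": reviews,
        "warnings": [f"{a} did not respond, manual review required"
                     for a in missing],
    }
```

The key point is that the gap is recorded in the artefact itself, so a human reading the Decision Pack knows exactly which review is owed.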
PII redaction: before any LLM sees the data
Onboarding requests contain emails, phone numbers, sometimes even billing details. Before any LLM call, OnboardFlow applies regex-based PII redaction:
- Emails → [EMAIL_1], [EMAIL_2], etc.
- Phone numbers → [PHONE_1], [PHONE_2], etc.
- The original input is stored in InputTextOriginal for audit
- Each redacted item records its type, placeholder, and position in a RedactedItems list
- All LLM processing uses InputTextRedacted
- SignalR events and UI displays only reference redacted content
This is table stakes for enterprise workflows. If your demo sends raw PII to an LLM without redaction, your demo is anything but enterprise-ready.
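A simplified Python sketch of that redaction scheme follows. The actual patterns OnboardFlow uses aren't shown in this post, so these regexes are deliberately naive stand-ins; production email/phone detection needs more robust patterns.

```python
import re

# Simplified stand-in patterns (assumptions, not OnboardFlow's own).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text):
    redacted_items = []  # mirrors the RedactedItems idea: type, placeholder, position

    def substitute(pattern, kind, text):
        def repl(m):
            count = sum(1 for i in redacted_items if i["type"] == kind)
            placeholder = f"[{kind}_{count + 1}]"
            redacted_items.append({"type": kind,
                                   "placeholder": placeholder,
                                   "position": m.start()})
            return placeholder
        return pattern.sub(repl, text)

    text = substitute(EMAIL, "EMAIL", text)
    text = substitute(PHONE, "PHONE", text)
    return text, redacted_items
```

The redacted string goes to the LLM; the items list and the original text stay in storage for audit.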
Excluding CheckpointManager for now
We're still not introducing durable checkpointing (CheckpointManager) in this episode. Reruns are implemented using SQLite-persisted step outputs and context snapshots which are good enough for the demo and straightforward for readers to understand. Checkpointing earns its own episode when we start building long-running flows or human-in-the-loop paths.
We also keep the LLM integration behind an ILlmClient adapter so the app can evolve with Microsoft Foundry endpoint and auth patterns over time.
The rerun model
OnboardFlow uses the same immutable history approach as PolicyPack Builder:
- Every rerun creates a new WorkflowRun with ParentRunId pointing to the original and RootRunId pointing to the first run in the lineage
- Context is reconstructed by querying the parent run's completed steps via GetCompletedStepsBeforeAsync and deserialising the last step's OutputSnapshot back into an OnboardingContext
- Steps before the rerun point are marked Skipped; steps from the rerun point forward re-execute
- Historical runs are never mutated: you can always trace lineage back to the root
For concurrent steps, rerunning from a reviewer means re-executing all three reviewers plus the aggregator. You can't meaningfully rerun just one reviewer without re-aggregating, because the Decision Pack depends on all inputs.
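The lineage model itself is small enough to sketch. This Python illustration mirrors the ParentRunId/RootRunId scheme with hypothetical field names; it is not the OnboardFlow persistence code:

```python
# Append-only run lineage: reruns link to a parent, every run knows its root.

def create_rerun(runs, parent_id):
    parent = runs[parent_id]
    new_id = max(runs) + 1
    runs[new_id] = {"id": new_id,
                    "parent": parent_id,
                    # The original run has root None; its reruns point at it.
                    "root": parent["root"] if parent["root"] is not None else parent_id}
    return new_id

def lineage(runs, run_id):
    chain = [run_id]
    while runs[chain[-1]]["parent"] is not None:
        chain.append(runs[chain[-1]]["parent"])
    return chain  # from run_id back to the root
```

Because runs are never mutated, walking the parent chain always terminates at the root, no matter how many reruns pile up.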
Let's talk about the demo!
We're not touching PolicyPack Builder. It stays as the anchor sample for sequential orchestration. The first pattern, the first discipline, the first "shippable workflow."
OnboardFlow is B2B SaaS account onboarding by design. Not bank KYC, not regulatory determinations. Just a realistic enterprise flow that still needs governance, flags, conditions and auditability. The entire sample is open-sourced on GitHub.
The workflow pipeline
| Step | Name | Type | Purpose |
|---|---|---|---|
| 1 | Intake + Normalize | Non-LLM | Clean whitespace, redact PII, validate input |
| 2 | Extract Applicant Profile | LLM | Structured JSON: company, contact, features, integrations |
| 3a | Security Reviewer | LLM (concurrent) | Risks, controls, questions to ask |
| 3b | Compliance Reviewer | LLM (concurrent) | Flags, data residency, contract terms |
| 3c | Finance Reviewer | LLM (concurrent) | Billing requirements, credit risks, invoice notes |
| 4 | Aggregate Findings | LLM | Merged Decision Pack with conflicts and recommendations |
| 5 | Customer Next Steps | LLM | ≤200 word customer-facing message |
| 6 | Final Package | Non-LLM | HTML export with full audit trail |
Steps 3a to 3c are the concurrent stage. They share the same input (the extracted applicant profile) and execute within the same superstep. The fan-in barrier at step 4 ensures the aggregator only fires when all three reviewers have completed (or timed out).
Architecture
OnboardFlow uses the same Clean Architecture scaffold as PolicyPack Builder:
Each layer has a single responsibility. The orchestration logic lives in the Application layer. The infrastructure details, such as how we talk to Microsoft Foundry and how we store runs, are pluggable. The concurrent review stage is an orchestration shape, not an architectural deviation.
Concurrency you can see with UI
The frontend reuses the same React + TypeScript + Mantine stack. But the stepper gets one new "aha" feature: the parallel review group.
When you run an onboarding request, you see Security, Compliance and Finance executing simultaneously, each with its own status badge and duration counter then a single Aggregator step that merges them into the Decision Pack.
That is concurrency you can see, not just read about.
A decision guide

We've gone from "one step at a time" to "three reviewers at once." The workflow stayed predictable because of superstep barriers and a real aggregation strategy instead of hope.
In our next post, we'll tackle the pattern where the problem isn't parallelism or sequencing, it's ownership. When the real question is "who should handle this right now?", you need Handoff orchestration: conversational control moving between specialists, including escalation to humans.
We'll also ask the two questions that make handoff systems production-ready: How does the system know when to hand off? and How does the system know when it's done?
Until next time.
