# Clean Architecture + CQRS with MediatR in .NET 10
After building enterprise applications for years, I have settled on a combination I keep reaching for: Clean Architecture with CQRS, powered by MediatR. This is not theoretical architecture astronautics; it is the pattern I run in production on an internal platform, an application with a broad API surface serving internal users.
Let me walk you through exactly how I structure it, why I make specific choices, and where the pattern shines (and where it does not).
## Why CQRS in Enterprise Apps
CQRS — Command Query Responsibility Segregation — sounds academic, but the core idea is practical: reads and writes have different requirements, so separate them. In the internal platform, our read operations query Oracle views optimized for display, while our write operations go through complex validation, fire domain events, and write to normalized tables.
Without CQRS, you end up with service classes that have methods like GetDocument, GetDocumentSummary, CreateDocument, UpdateDocumentStatus, ApproveDocument — and they grow to thousands of lines. With CQRS, each operation is its own class with its own handler. The Single Responsibility Principle at the use-case level.
## The MediatR Pipeline
MediatR 12.4.1 gives us an in-process message bus. Every request goes through a pipeline of behaviors before reaching its handler:
```csharp
// Program.cs - pipeline registration order matters
services.AddMediatR(cfg =>
{
    cfg.RegisterServicesFromAssembly(typeof(ApplicationAssembly).Assembly);

    // Pipeline behaviors execute in registration order.
    // MediatR 12.x requires AddOpenBehavior for open generic behaviors.
    cfg.AddOpenBehavior(typeof(LoggingBehavior<,>));
    cfg.AddOpenBehavior(typeof(ValidationBehavior<,>));
    cfg.AddOpenBehavior(typeof(AuthorizationBehavior<,>));
    cfg.AddOpenBehavior(typeof(CachingBehavior<,>));
    cfg.AddOpenBehavior(typeof(TransactionBehavior<,>));
});
```
Every request flows through: Logging -> Validation -> Authorization -> Caching -> Transaction -> Handler. This pipeline is the backbone of the application. Let me break down each behavior.
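The logging behavior is the simplest of the five and is never shown elsewhere in this article, so here is a minimal sketch of how I would write it. The specific log fields and the use of `Stopwatch` are my own illustrative choices; it assumes MediatR 12.x and `Microsoft.Extensions.Logging`:

```csharp
using System.Diagnostics;
using MediatR;
using Microsoft.Extensions.Logging;

// Illustrative sketch: logs each request's type name and elapsed time,
// including requests that throw (the finally block still runs).
public class LoggingBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly ILogger<LoggingBehavior<TRequest, TResponse>> _logger;

    public LoggingBehavior(ILogger<LoggingBehavior<TRequest, TResponse>> logger)
        => _logger = logger;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        var requestName = typeof(TRequest).Name;
        _logger.LogInformation("Handling {RequestName}", requestName);

        var stopwatch = Stopwatch.StartNew();
        try
        {
            return await next();
        }
        finally
        {
            stopwatch.Stop();
            _logger.LogInformation(
                "Handled {RequestName} in {ElapsedMs} ms",
                requestName, stopwatch.ElapsedMilliseconds);
        }
    }
}
```

Because it is registered first, its timing covers the entire rest of the pipeline, which is exactly what you want when diagnosing slow requests.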
## Validation Behavior with FluentValidation
Every command and query can have an associated FluentValidation validator. The validation behavior collects all validators for the request type and runs them:
```csharp
public class ValidationBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
        => _validators = validators;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        if (!_validators.Any()) return await next();

        var context = new ValidationContext<TRequest>(request);
        var validationResults = await Task.WhenAll(
            _validators.Select(v => v.ValidateAsync(context, cancellationToken)));

        var failures = validationResults
            .SelectMany(r => r.Errors)
            .Where(f => f != null)
            .ToList();

        if (failures.Count != 0)
            throw new ValidationException(failures);

        return await next();
    }
}
```
The validators themselves are clean and readable:
```csharp
public class CreateDocumentCommandValidator
    : AbstractValidator<CreateDocumentCommand>
{
    public CreateDocumentCommandValidator()
    {
        RuleFor(x => x.Title)
            .NotEmpty().WithMessage("Document title is required")
            .MaximumLength(500).WithMessage("Title cannot exceed 500 characters");

        RuleFor(x => x.DepartmentId)
            .GreaterThan(0).WithMessage("Valid department is required");

        RuleFor(x => x.WorkflowTypeId)
            .Must(BeValidWorkflowType).WithMessage("Invalid workflow type");
    }

    private bool BeValidWorkflowType(int workflowTypeId)
        => Enumeration.GetAll<WorkflowType>()
            .Any(wt => wt.Id == workflowTypeId);
}
```
## Caching Behavior
For queries, we use Redis caching with a marker interface:
```csharp
public interface ICacheable
{
    string CacheKey { get; }
    TimeSpan? CacheDuration { get; }
}

public class GetDepartmentListQuery : IRequest<List<DepartmentDto>>, ICacheable
{
    public string CacheKey => "departments:all";
    public TimeSpan? CacheDuration => TimeSpan.FromMinutes(30);
}

public class CachingBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IDistributedCache _cache;

    public CachingBehavior(IDistributedCache cache) => _cache = cache;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        // Non-cacheable requests (all commands, most queries) pass straight through.
        if (request is not ICacheable cacheable)
            return await next();

        var cached = await _cache.GetStringAsync(
            cacheable.CacheKey, cancellationToken);
        if (cached is not null)
            return JsonSerializer.Deserialize<TResponse>(cached)!;

        var response = await next();

        await _cache.SetStringAsync(
            cacheable.CacheKey,
            JsonSerializer.Serialize(response),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow =
                    cacheable.CacheDuration ?? TimeSpan.FromMinutes(5)
            },
            cancellationToken);

        return response;
    }
}
```
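The transaction behavior, last in the pipeline, is the other piece registered above but not shown. Here is a sketch under two assumptions that are mine, not the article's: commands opt in via a hypothetical `ITransactional` marker interface (mirroring the `ICacheable` pattern), and writes go through an EF Core `DbContext` named `AppDbContext`. Adapt it to whatever unit-of-work abstraction your write side actually uses:

```csharp
using MediatR;
using Microsoft.EntityFrameworkCore;

// Hypothetical marker interface: only requests carrying it get a transaction.
public interface ITransactional { }

public class TransactionBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly AppDbContext _dbContext; // assumed EF Core context

    public TransactionBehavior(AppDbContext dbContext) => _dbContext = dbContext;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        // Queries and unmarked requests bypass the transaction entirely.
        if (request is not ITransactional)
            return await next();

        await using var transaction =
            await _dbContext.Database.BeginTransactionAsync(cancellationToken);

        // If the handler throws, the transaction is disposed without
        // committing, which rolls it back.
        var response = await next();

        await _dbContext.SaveChangesAsync(cancellationToken);
        await transaction.CommitAsync(cancellationToken);
        return response;
    }
}
```

Placing it last means the transaction opens only after validation, authorization, and the cache check have all passed, so failed requests never touch the database.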
## Commands vs Queries: A Real Example
Here is how a complete feature looks. A user wants to approve a document in a workflow:
```csharp
// The command - represents the intent
public record ApproveDocumentCommand(
    int DocumentId,
    string Comments,
    int NextStepId) : IRequest<ApprovalResultDto>;

// The handler - contains the business logic
public class ApproveDocumentCommandHandler
    : IRequestHandler<ApproveDocumentCommand, ApprovalResultDto>
{
    private readonly IWorkflowRepository _workflowRepo;
    private readonly ICurrentUserService _currentUser;
    private readonly IMediator _mediator;

    public ApproveDocumentCommandHandler(
        IWorkflowRepository workflowRepo,
        ICurrentUserService currentUser,
        IMediator mediator)
    {
        _workflowRepo = workflowRepo;
        _currentUser = currentUser;
        _mediator = mediator;
    }

    public async Task<ApprovalResultDto> Handle(
        ApproveDocumentCommand request, CancellationToken ct)
    {
        var workflow = await _workflowRepo
            .GetActiveWorkflowAsync(request.DocumentId, ct)
            ?? throw new NotFoundException(
                $"No active workflow for document {request.DocumentId}");

        if (workflow.CurrentApprover.UserId != _currentUser.UserId)
            throw new ForbiddenAccessException("Not the current approver");

        workflow.Approve(request.Comments, request.NextStepId);
        await _workflowRepo.UpdateAsync(workflow, ct);

        // Publish domain event for side effects
        await _mediator.Publish(
            new DocumentApprovedEvent(workflow.DocumentId, workflow.CurrentStep),
            ct);

        return new ApprovalResultDto(workflow.Status, workflow.CurrentStep);
    }
}
```
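The `DocumentApprovedEvent` published above is a MediatR `INotification`, and each side effect lives in its own `INotificationHandler`. A sketch of one such handler, where the `INotificationService` abstraction and the `int` step type are my own illustrative assumptions:

```csharp
using MediatR;

// The domain event published by the command handler.
public record DocumentApprovedEvent(int DocumentId, int CurrentStep) : INotification;

// One side effect per handler; MediatR invokes every handler
// registered for DocumentApprovedEvent when Publish is called.
public class NotifyNextApproverHandler
    : INotificationHandler<DocumentApprovedEvent>
{
    private readonly INotificationService _notifications; // hypothetical abstraction

    public NotifyNextApproverHandler(INotificationService notifications)
        => _notifications = notifications;

    public async Task Handle(
        DocumentApprovedEvent notification, CancellationToken cancellationToken)
    {
        // The side effect lives here, not in the command handler,
        // so approving and notifying stay independently testable.
        await _notifications.NotifyNextApproverAsync(
            notification.DocumentId, notification.CurrentStep, cancellationToken);
    }
}
```

Adding another side effect later (an audit log entry, say) means adding another handler, with no change to the command handler itself.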
The corresponding query for reading the same document is completely separate:
```csharp
public record GetDocumentDetailQuery(int DocumentId)
    : IRequest<DocumentDetailDto>;

public class GetDocumentDetailQueryHandler
    : IRequestHandler<GetDocumentDetailQuery, DocumentDetailDto>
{
    private readonly IOracleConnectionFactory _db;

    public GetDocumentDetailQueryHandler(IOracleConnectionFactory db)
        => _db = db;

    public async Task<DocumentDetailDto> Handle(
        GetDocumentDetailQuery request, CancellationToken ct)
    {
        using var conn = _db.Create();

        // Read from a denormalized Oracle view optimized for display
        return await conn.QueryFirstOrDefaultAsync<DocumentDetailDto>(
            "SELECT * FROM VW_DOCUMENT_DETAIL WHERE DOC_ID = :docId",
            new { docId = request.DocumentId });
    }
}
```
The command writes to normalized tables through the domain model. The query reads from a denormalized view through Dapper. Each is optimized for its purpose.
## The Controller Layer
Controllers become thin dispatchers:
```csharp
[ApiController]
[Route("api/v{version:apiVersion}/[controller]")]
[ApiVersion("1.0")]
public class DocumentsController : ControllerBase
{
    private readonly IMediator _mediator;

    public DocumentsController(IMediator mediator) => _mediator = mediator;

    [HttpGet("{id}")]
    public async Task<ActionResult<DocumentDetailDto>> Get(int id)
        => Ok(await _mediator.Send(new GetDocumentDetailQuery(id)));

    [HttpPost("{id}/approve")]
    public async Task<ActionResult<ApprovalResultDto>> Approve(
        int id, [FromBody] ApproveDocumentRequest request)
        => Ok(await _mediator.Send(new ApproveDocumentCommand(
            id, request.Comments, request.NextStepId)));
}
```
No business logic in controllers. No service injection beyond MediatR. Every controller action is a single line.
## Where This Pattern Struggles
I am not going to pretend this is perfect. At a certain scale, here is where it gets painful:
- File count explosion. Each feature needs a command/query, handler, validator, and DTO. That is multiple files per endpoint. Vertical slice architecture with folder-per-feature helps, but it is still a lot of files.
- Simple CRUD feels over-engineered. For a very simple lookup table, the full CQRS pipeline is overkill. I allow exceptions for simple reference data endpoints.
- Debugging can be indirect. When something fails in a pipeline behavior, the stack trace does not always make the flow obvious. Good logging in each behavior is essential.
Despite these trade-offs, the pattern scales. When a new developer joins the team, they learn the pattern once and can implement any feature by following the same structure. Consistency across a broad API surface is worth the file count.
## Key Takeaways
- Use CQRS when your reads and writes have genuinely different requirements
- MediatR pipeline behaviors replace cross-cutting concerns that would otherwise clutter your handlers
- FluentValidation in the pipeline means handlers can trust their input
- Keep controllers as thin dispatchers — one line per action
- Accept the trade-offs: more files, but each file has a single responsibility