C# Clean Code

Practical guidelines for writing readable, maintainable, and testable C# code that scales with your team and product.


1) Name things well

  • Choose intention‑revealing names so that readers understand “what” and “why” without opening an implementation. In C#, methods should read as verbs (CalculateInvoiceTotals), classes and records as nouns (InvoiceSummary), and booleans as clear predicates (isExpired, hasItems). Avoid cryptic prefixes/suffixes and strive for consistency across the codebase so developers can predict names before searching.

    Prefer domain terminology over generic labels to align code with business concepts. Replace vague names like data or obj with precise nouns such as customerProfile or orderLineItems. When naming async methods, append Async (e.g., GetByIdAsync) and ensure files, types, and namespaces mirror the same language for frictionless navigation.

  • Avoid abbreviations, short forms, and “noise words” that dilute meaning (Mgr, Util, Helper). Names should be long enough to be precise and short enough to be readable. In public APIs, bias toward clarity; these names become part of your contract with callers and tooling (IntelliSense, analyzers) relies on them for guidance.

    Keep conventions consistent: PascalCase for types, methods, and constants, and camelCase for locals and parameters (SCREAMING_CASE constants are common in other languages but are not idiomatic C#). When using DI, name constructor parameters after the abstraction (orderRepository for IOrderRepository) so intent is always obvious.
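
A brief sketch of the naming guidance above (the types here, such as IOrderRepository and InvoiceService, are hypothetical examples, not a prescribed API):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Async methods carry the Async suffix; the interface names the role it plays.
public interface IOrderRepository
{
    Task<Order?> GetByIdAsync(Guid orderId);
}

// A precise domain noun instead of "data" or "obj".
public sealed record OrderLineItem(string Sku, int Quantity, decimal UnitPrice);

public sealed class Order
{
    public IReadOnlyList<OrderLineItem> LineItems { get; }
    public DateTime ExpiresAtUtc { get; }

    public Order(IReadOnlyList<OrderLineItem> lineItems, DateTime expiresAtUtc) =>
        (LineItems, ExpiresAtUtc) = (lineItems, expiresAtUtc);

    // Booleans read as predicates.
    public bool HasItems => LineItems.Count > 0;
    public bool IsExpired(DateTime nowUtc) => nowUtc >= ExpiresAtUtc;
}

public sealed class InvoiceService
{
    // Constructor parameter named after the abstraction (orderRepository for IOrderRepository).
    private readonly IOrderRepository orderRepository;

    public InvoiceService(IOrderRepository orderRepository) =>
        this.orderRepository = orderRepository;

    public async Task<bool> HasItemsAsync(Guid orderId)
    {
        var order = await orderRepository.GetByIdAsync(orderId);
        return order?.HasItems ?? false;
    }
}
```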

2) Keep functions small

  • Aim for one reason to change per method. Smaller functions reduce cognitive load, improve testability, and make refactoring safe. Replace nested if pyramids with early returns to keep the happy path flat and readable. When a method grows beyond a screenful, that’s a strong signal to refactor into smaller pieces.

    In ASP.NET controllers or minimal APIs, move orchestration into application services and keep endpoints thin. This keeps web concerns (binding/IO) separate from business rules, making both easier to evolve independently.

  • Extract complex branches into well‑named helpers that encode the decision being made. A helper like ShouldApplyLoyaltyDiscount(order) communicates intent better than a block of conditions scattered inline. Use private methods or strategy objects when branches represent different policies.

    Where appropriate, replace boolean flags with separate methods to avoid parameter-driven complexity. Instead of Process(order, applyDiscount: true), expose ProcessWithDiscount(order) to make the caller’s intent explicit.
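
The two ideas above can be sketched together; the order model and the 10% discount figure are illustrative assumptions:

```csharp
using System;

public sealed record Order(decimal Subtotal, int ItemCount);

public static class OrderProcessor
{
    // Guard clauses replace nested if pyramids: the happy path stays flat.
    public static decimal Total(Order? order)
    {
        if (order is null) return 0m;
        if (order.ItemCount == 0) return 0m;

        return order.Subtotal;
    }

    // An intent-revealing method instead of Total(order, applyDiscount: true).
    public static decimal TotalWithDiscount(Order? order) =>
        Total(order) * 0.9m; // hypothetical 10% discount policy
}
```

Callers now state their intent in the method name rather than in a boolean argument that forces readers to look up the parameter's meaning.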

3) Embrace SOLID

  • Use interfaces to introduce seams for testing and evolution. Depending on abstractions (IClock, IEmailSender) rather than concretions enables mocking in unit tests and swapping implementations without touching callers. This is the Dependency Inversion Principle in practice.

    Favor constructor injection to make dependencies explicit and immutable for the lifetime of the object. In .NET, wire these in the DI container and avoid service locators, which conceal dependencies and hinder maintainability.

  • Keep types open for extension but closed for modification. When behavior varies by policy, use composition and strategies rather than switch statements sprinkled through the code. New behaviors arrive as new classes rather than edits to a central conditional hub.

    Pair the Open/Closed Principle with Liskov Substitution: derived types must honor the expectations of their base abstractions. Prefer role-based interfaces and small contracts to avoid fragile hierarchies.
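
A minimal sketch of these principles combined, assuming hypothetical IClock and IDiscountPolicy abstractions (the 5% loyalty figure is made up for illustration):

```csharp
using System;

// Dependency Inversion: depend on an abstraction so time becomes a testable seam.
public interface IClock { DateTime UtcNow { get; } }
public sealed class SystemClock : IClock { public DateTime UtcNow => DateTime.UtcNow; }

// Open/Closed via strategy objects: a new policy is a new class,
// not another arm in a central switch statement.
public interface IDiscountPolicy { decimal Apply(decimal subtotal); }
public sealed class NoDiscount : IDiscountPolicy
{
    public decimal Apply(decimal subtotal) => subtotal;
}
public sealed class LoyaltyDiscount : IDiscountPolicy
{
    public decimal Apply(decimal subtotal) => subtotal * 0.95m; // hypothetical 5% off
}

public sealed class CheckoutService
{
    // Constructor injection: the dependency is explicit and fixed for the object's lifetime.
    private readonly IDiscountPolicy discountPolicy;

    public CheckoutService(IDiscountPolicy discountPolicy) =>
        this.discountPolicy = discountPolicy;

    public decimal Total(decimal subtotal) => discountPolicy.Apply(subtotal);
}
```

Because every IDiscountPolicy honors the same small contract, substituting one for another (Liskov) never surprises CheckoutService.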

4) Guard your domain

  • Represent invariants with value objects (e.g., Money, Email, Percentage) so invalid states are unrepresentable. Validate arguments at boundaries, throw precise exceptions, and use the Null Object pattern where absence is a valid behavior rather than a special case.

    Encapsulation protects rules from accidental violation. Expose operations that maintain invariants instead of setters that allow arbitrary mutation. Keep mapping/serialization concerns outside the core domain.

  • Push I/O to the edges using ports and adapters: the domain defines ports (interfaces) and infrastructure supplies adapters (EF Core, HTTP clients, file IO). This isolates business logic from technology churn and makes testing fast and deterministic.

    Adopt a clean layering scheme (e.g., Application → Domain → Infrastructure). Each inward layer knows less about frameworks, keeping the core portable and future‑proof.
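
As a sketch of a value object plus a port and adapter (the Email rules and the IEmailSender port are illustrative, not a real library API):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Value object: once constructed, an Email cannot be in an invalid state.
public readonly record struct Email
{
    public string Value { get; }

    public Email(string value)
    {
        // Deliberately simplistic check for the sketch; real validation is stricter.
        if (string.IsNullOrWhiteSpace(value) || !value.Contains('@'))
            throw new ArgumentException("Not a valid email address.", nameof(value));
        Value = value;
    }
}

// Port: defined by the domain, in domain terms.
public interface IEmailSender
{
    Task SendAsync(Email to, string body);
}

// Adapter: infrastructure implements the port. Here an in-memory stand-in
// that an SMTP or HTTP-based adapter would replace in production.
public sealed class InMemoryEmailSender : IEmailSender
{
    public List<(Email To, string Body)> Sent { get; } = new();

    public Task SendAsync(Email to, string body)
    {
        Sent.Add((to, body));
        return Task.CompletedTask;
    }
}
```

The domain only ever sees IEmailSender, so swapping the adapter touches no business logic and tests stay fast and deterministic.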

5) Prefer composition over inheritance

  • Composition yields flexible assemblies of behavior without tight coupling to a base class. Inheritance can work for true is‑a relationships, but most reuse is better served by delegating to collaborators. Fragile base class problems emerge when base changes ripple into derived types.

    Compose objects from small, focused services that are easy to test. Favor policies and strategies over deep hierarchies; this keeps the call graph explicit and change impact localized.

  • Use records for immutable, value‑centric data models that support with‑expressions and value equality. Separate DTOs from domain entities so transport concerns do not leak into your core logic and vice versa.

    When mapping between layers (e.g., EF entities ↔ DTOs ↔ domain), do so in the application layer to keep boundaries crisp and dependencies flowing outward.
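
A small sketch of records plus boundary mapping (Money, InvoiceSummary, and InvoiceDto are hypothetical names for illustration):

```csharp
using System;

// Value-centric record: value equality and with-expressions come for free.
public sealed record Money(decimal Amount, string Currency);

// Domain model and transport DTO stay separate shapes...
public sealed record InvoiceSummary(Guid Id, Money Total);
public sealed record InvoiceDto(Guid Id, decimal Amount, string Currency);

// ...and the application layer owns the mapping between them.
public static class InvoiceMapper
{
    public static InvoiceDto ToDto(InvoiceSummary invoice) =>
        new(invoice.Id, invoice.Total.Amount, invoice.Total.Currency);

    public static InvoiceSummary ToDomain(InvoiceDto dto) =>
        new(dto.Id, new Money(dto.Amount, dto.Currency));
}
```

Usage: `new Money(10m, "USD") == new Money(10m, "USD")` is true (value equality), and `price with { Amount = 12m }` yields a new instance without mutating the original.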

6) Make testing easy

  • Design for testability: pure functions, small units, and dependency injection. When collaborators are injected via interfaces, you can test behavior in isolation without spinning up databases or web servers.

    Favor deterministic inputs and outputs. Use in‑memory fakes for repositories and clocks to make tests stable and fast, then add a few focused integration tests to validate the wiring.

  • Write tests that describe behavior, not implementation details. Assert on outcomes and observable effects rather than private calls. Over‑mocking couples tests to structure and makes refactoring painful.

    Express Given–When–Then in test names and structure to communicate intent. Use data‑driven tests to cover variations succinctly.
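
A sketch of a deterministic fake and a Given–When–Then style test, framework-free for brevity (IClock, FixedClock, and Subscription are illustrative names; in practice the assertion would use xUnit/NUnit):

```csharp
using System;

public interface IClock { DateTime UtcNow { get; } }

// Deterministic fake: no real time, no flakiness.
public sealed class FixedClock : IClock
{
    public FixedClock(DateTime utcNow) => UtcNow = utcNow;
    public DateTime UtcNow { get; }
}

public sealed class Subscription
{
    private readonly IClock clock;
    public DateTime ExpiresAtUtc { get; }

    public Subscription(IClock clock, DateTime expiresAtUtc) =>
        (this.clock, ExpiresAtUtc) = (clock, expiresAtUtc);

    public bool IsActive => clock.UtcNow < ExpiresAtUtc;
}

public static class SubscriptionTests
{
    // Given–When–Then in the name; the assertion targets observable behavior,
    // not which methods the implementation happened to call.
    public static void GivenExpiredSubscription_WhenChecked_ThenInactive()
    {
        var clock = new FixedClock(new DateTime(2025, 6, 1));      // Given: today is past expiry
        var subscription = new Subscription(clock, new DateTime(2025, 1, 1));
        if (subscription.IsActive)                                 // When / Then
            throw new Exception("Expected inactive subscription.");
    }
}
```

Because the clock is injected, the expired and active cases are both one-line setups, with no Thread.Sleep and no dependence on the machine's clock.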

7) Performance pragmatism

  • Measure before optimizing. Use BenchmarkDotNet for microbenchmarks and Application Insights/CloudWatch for production telemetry. Let real data guide changes and validate improvements so you avoid cargo‑cult optimizations.

    Once a hotspot is confirmed, apply focused techniques: caching, pooling, avoiding boxing, and leveraging Span<T>/Memory<T> to reduce allocations in tight loops and parsing code.

  • Be mindful of allocations and copying in hot paths. Prefer streaming APIs, lazy evaluation, and struct usage only when measurements show clear wins—structs can hurt if copied frequently or boxed accidentally.

    Keep performance changes reversible and documented. Guard with tests so that readability and correctness remain the top priority while achieving your SLOs/SLA targets.
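
As one concrete example of the Span<T> point above, here is a sketch that sums comma-separated integers by slicing a ReadOnlySpan<char> instead of calling string.Split, so no intermediate string array is allocated (apply only after profiling shows parsing is actually hot):

```csharp
using System;

public static class CsvSum
{
    // Walks the input by slicing spans; int.Parse accepts a span directly,
    // so no substrings are materialized on the hot path.
    public static int Sum(ReadOnlySpan<char> line)
    {
        int total = 0;
        while (!line.IsEmpty)
        {
            int comma = line.IndexOf(',');
            ReadOnlySpan<char> token = comma >= 0 ? line[..comma] : line;
            total += int.Parse(token);
            line = comma >= 0 ? line[(comma + 1)..] : ReadOnlySpan<char>.Empty;
        }
        return total;
    }
}
```

Usage: `CsvSum.Sum("1,2,3")` returns 6. A benchmark (e.g., with BenchmarkDotNet) comparing this against a Split-based version is the honest way to confirm the win before keeping the less obvious code.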