
Performance Improvements in .NET 10

My kids love “Frozen”. They can sing every word, re-enact every scene, and provide detailed notes on the proper sparkle of Elsa’s ice dress. I’ve seen the movie more times than I can count, to the point where, if you’ve seen me do any live coding, you’ve probably seen my subconscious incorporate an Arendelle reference or two. After so many viewings, I began paying closer attention to the details, like how at the very beginning of the film the ice harvesters are singing a song that subtly foreshadows the story’s central conflicts, the characters’ journeys, and even the key to resolving the climax. I’m slightly ashamed to admit I didn’t make this connection until viewing number ten or so, at which point I also realized I had no idea whether this ice harvesting was actually “a thing” or just a clever vehicle for Disney to spin a yarn. Turns out, as I subsequently researched, it’s quite real.

In the 19th century, before refrigeration, ice was an incredibly valuable commodity. Winters in the northern United States turned ponds and lakes into seasonal gold mines. The most successful operations ran with precision: workers cleared snow from the surface so the ice would grow thicker and stronger, and they scored the surface into perfect rectangles using horse-drawn plows, turning the lake into a frozen checkerboard. Once the grid was cut, teams with long saws worked to free uniform blocks weighing several hundred pounds each. These blocks were floated along channels of open water toward the shore, at which point men with poles levered the blocks up ramps and hauled them into storage. Basically, what the movie shows.

The storage itself was an art. Massive wooden ice houses, sometimes holding tens of thousands of tons, were lined with insulation, typically straw. Done well, this insulation could keep the ice solid for months, even through summer heat. Done poorly, you would open the doors to slush. And for those moving ice over long distances, typically by ship, every degree, every crack in the insulation, every extra day in transit meant more melting and more loss.

Enter Frederic Tudor, the “Ice King” of Boston. He was obsessed with systemic efficiency. Where competitors saw unavoidable loss, Tudor saw a solvable problem. After experimenting with different insulators, he leaned on cheap sawdust, a lumber mill byproduct that outperformed straw, packing it densely around the ice to cut melt losses significantly. For harvesting efficiency, his operations adopted Nathaniel Jarvis Wyeth’s grid-scoring system, which produced uniform blocks that could be packed tightly, minimizing air gaps that would otherwise increase exposure in a ship’s hold. And to shorten the critical time between shore and ship, Tudor built out port infrastructure and depots near docks, allowing ships to load and unload much faster. Each change, from tools to ice house design to logistics, amplified the last, turning a risky local harvest into a reliable global trade. With Tudor’s enhancements, he had solid ice arriving in places like Havana, Rio de Janeiro, and even Calcutta (a voyage of four months in the 1830s). His performance gains allowed the product to survive journeys that were previously unthinkable.

What made Tudor’s ice last halfway around the world wasn’t one big idea. It was a plethora of small improvements, each multiplying the effect of the last. In software development, the same principle holds: big leaps forward in performance rarely come from a single sweeping change; instead, they come from hundreds or thousands of targeted optimizations that compound into something transformative. .NET 10’s performance story isn’t about one Disney-esque magical idea; it’s about carefully shaving off nanoseconds here and tens of bytes there, streamlining operations that are executed trillions of times.

In the rest of this post, just as we did in Performance Improvements in .NET 9, .NET 8, .NET 7, .NET 6, .NET 5, .NET Core 3.0, .NET Core 2.1, and .NET Core 2.0, we’ll dig into hundreds of the small but meaningful and compounding performance improvements since .NET 9 that make up .NET 10’s story (if you instead stay on LTS releases and are thus upgrading from .NET 8 rather than .NET 9, you’ll see even more improvements, since you’ll also be picking up everything that went into .NET 9). So, without further ado, go grab a cup of your favorite hot beverage (or, given my intro, maybe something a bit more frosty), sit back, relax, and “Let It Go”!

Or, hmm, maybe, let’s push performance “Into the Unknown”?

Let .NET 10 performance “Show Yourself”?

“Do You Want To Build a Snowman Fast Service?”

I’ll see myself out.

Benchmarking Setup

As in previous posts, this tour is chock full of micro-benchmarks intended to showcase various performance improvements. Most of these benchmarks are implemented using BenchmarkDotNet 0.15.2, with a simple setup for each.

To follow along, make sure you have .NET 9 and .NET 10 installed, as most of the benchmarks compare the same test running on each. Then, create a new C# project in a new benchmarks directory:

dotnet new console -o benchmarks
cd benchmarks

That will produce two files in the benchmarks directory: benchmarks.csproj, which is the project file with information about how the application should be compiled, and Program.cs, which contains the code for the application. Finally, replace everything in benchmarks.csproj with this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFrameworks>net10.0;net9.0</TargetFrameworks>
    <LangVersion>Preview</LangVersion>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <ServerGarbageCollection>true</ServerGarbageCollection>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="BenchmarkDotNet" Version="0.15.2" />
  </ItemGroup>

</Project>

With that, we’re good to go. Unless otherwise noted, I’ve tried to make each benchmark standalone; just copy/paste its whole contents into the Program.cs file, overwriting everything that’s there, and then run the benchmarks. Each test includes at its top a comment for the dotnet command to use to run the benchmark. It’s typically something like this:

dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

which will run the benchmark in release on both .NET 9 and .NET 10 and show the compared results. The other common variation, used when the benchmark should only be run on .NET 10 (typically because it’s comparing two approaches rather than comparing one thing on two versions), is the following:

dotnet run -c Release -f net10.0 --filter "*"

Throughout the post, I’ve shown many benchmarks and the results I received from running them. Unless otherwise stated (e.g. because I’m demonstrating an OS-specific improvement), the results shown are from running them on Linux (Ubuntu 24.04.1) on an x64 processor.

BenchmarkDotNet v0.15.2, Linux Ubuntu 24.04.1 LTS (Noble Numbat)
11th Gen Intel Core i9-11950H 2.60GHz, 1 CPU, 16 logical and 8 physical cores
.NET SDK 10.0.100-rc.1.25451.107
  [Host]     : .NET 9.0.9 (9.0.925.41916), X64 RyuJIT AVX-512F+CD+BW+DQ+VL+VBMI

As always, a quick disclaimer: these are micro-benchmarks, timing operations so short you’d miss them by blinking (but when such operations run millions of times, the savings really add up). The exact numbers you get will depend on your hardware, your operating system, what else your machine is juggling at the moment, how much coffee you’ve had since breakfast, and perhaps whether Mercury is in retrograde. In other words, don’t expect your results to match mine exactly, but I’ve picked tests that should still be reasonably reproducible in the real world.

Now, let’s start at the bottom of the stack. Code generation.

JIT

Among all areas of .NET, the Just-In-Time (JIT) compiler stands out as one of the most impactful. Every .NET application, whether a small console tool or a large-scale enterprise service, ultimately relies on the JIT to turn intermediate language (IL) code into optimized machine code. Any enhancement to the JIT’s generated code quality has a ripple effect, improving performance across the entire ecosystem without requiring developers to change any of their own code or even recompile their C#. And with .NET 10, there’s no shortage of these improvements.

Deabstraction

As with many languages, .NET historically has had an “abstraction penalty,” those extra allocations and indirections that can occur when using high-level language features like interfaces, iterators, and delegates. Each year, the JIT gets better and better at optimizing away layers of abstraction, so that developers get to write simple code and still get great performance. .NET 10 continues this tradition. The result is that idiomatic C# (using interfaces, foreach loops, lambdas, etc.) runs even closer to the raw speed of meticulously crafted and hand-tuned code.

Object Stack Allocation

One of the most exciting areas of deabstraction progress in .NET 10 is the expanded use of escape analysis to enable stack allocation of objects. Escape analysis is a compiler technique to determine whether an object allocated in a method escapes that method, meaning whether that object is reachable after the method returns (for example, by being stored in a field or returned to the caller) or is used in some way that can’t be tracked within the method (like being passed to an unknown callee). If the compiler can prove an object doesn’t escape, then that object’s lifetime is bounded by the method, and it can be allocated on the stack instead of on the heap. Stack allocation is much cheaper (just pointer bumping for allocation and automatic freeing when the method exits) and reduces GC pressure because, well, the object doesn’t need to be tracked by the GC. .NET 9 had already introduced some limited escape analysis and stack allocation support; .NET 10 takes this significantly further.
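To make the distinction concrete, here’s a tiny, hypothetical illustration (not code from the runtime itself) of an object that escapes versus one that doesn’t:

object _cache;

void Escapes()
{
    // Stored into a field, so it's reachable after the method returns:
    // it escapes and must be allocated on the GC heap.
    _cache = new object();
}

int DoesNotEscape()
{
    // Used only within this method and never visible outside of it, so its
    // lifetime is bounded by the method and it's a candidate for stack allocation.
    int[] tmp = new int[4];
    tmp[0] = 42;
    return tmp[0];
}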

dotnet/runtime#115172 teaches the JIT how to perform escape analysis related to delegates, and in particular that a delegate’s Invoke method (which is implemented by the runtime) does not stash away the this reference. Then if escape analysis can prove that the delegate’s object reference is something that otherwise hasn’t escaped, the delegate can effectively evaporate. Consider this benchmark:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "y")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int Sum(int y)
    {
        Func<int, int> addY = x => x + y;
        return DoubleResult(addY, y);
    }

    private int DoubleResult(Func<int, int> func, int arg)
    {
        int result = func(arg);
        return result + result;
    }
}

If we just run this benchmark and compare .NET 9 and .NET 10, we can immediately tell something interesting is happening.

| Method | Runtime | Mean | Ratio | Code Size | Allocated | Alloc Ratio |
|--------|---------|------|-------|-----------|-----------|-------------|
| Sum | .NET 9.0 | 19.530 ns | 1.00 | 118 B | 88 B | 1.00 |
| Sum | .NET 10.0 | 6.685 ns | 0.34 | 32 B | 24 B | 0.27 |

The simple C# code for Sum belies the complicated code the C# compiler generates for it. It needs to create a Func<int, int>, which is “closing over” the y “local”. That means the compiler needs to “lift” y to no longer be an actual local, and instead live as a field on an object; the delegate can then point to a method on that object, giving it access to y. This is approximately what the IL generated by the C# compiler looks like when decompiled to C#:

public int Sum(int y)
{
    <>c__DisplayClass0_0 c = new();
    c.y = y;

    Func<int, int> func = new(c.<Sum>b__0);

    return DoubleResult(func, c.y);
}

private sealed class <>c__DisplayClass0_0
{
    public int y;

    internal int <Sum>b__0(int x) => x + y;
}

From that, we can see the closure is resulting in two allocations: an allocation for the “display class” (what the C# compiler calls these closure types) and an allocation for the delegate that points to the <Sum>b__0 method on that display class instance. That’s what’s accounting for the 88 bytes of allocation in the .NET 9 results: the display class is 24 bytes, and the delegate is 64 bytes. In the .NET 10 version, though, we only see a 24 byte allocation; that’s because the JIT has successfully elided the delegate allocation. Here is the resulting assembly code:

; .NET 9
; Tests.Sum(Int32)
       push      rbp
       push      r15
       push      rbx
       lea       rbp,[rsp+10]
       mov       ebx,esi
       mov       rdi,offset MT_Tests+<>c__DisplayClass0_0
       call      CORINFO_HELP_NEWSFAST
       mov       r15,rax
       mov       [r15+8],ebx
       mov       rdi,offset MT_System.Func<System.Int32, System.Int32>
       call      CORINFO_HELP_NEWSFAST
       mov       rbx,rax
       lea       rdi,[rbx+8]
       mov       rsi,r15
       call      CORINFO_HELP_ASSIGN_REF
       mov       rax,offset Tests+<>c__DisplayClass0_0.<Sum>b__0(Int32)
       mov       [rbx+18],rax
       mov       esi,[r15+8]
       cmp       [rbx+18],rax
       jne       short M00_L01
       mov       rax,[rbx+8]
       add       esi,[rax+8]
       mov       eax,esi
M00_L00:
       add       eax,eax
       pop       rbx
       pop       r15
       pop       rbp
       ret
M00_L01:
       mov       rdi,[rbx+8]
       call      qword ptr [rbx+18]
       jmp       short M00_L00
; Total bytes of code 112

; .NET 10
; Tests.Sum(Int32)
       push      rbx
       mov       ebx,esi
       mov       rdi,offset MT_Tests+<>c__DisplayClass0_0
       call      CORINFO_HELP_NEWSFAST
       mov       [rax+8],ebx
       mov       eax,[rax+8]
       mov       ecx,eax
       add       eax,ecx
       add       eax,eax
       pop       rbx
       ret
; Total bytes of code 32

In both .NET 9 and .NET 10, the JIT successfully inlined DoubleResult, such that the delegate doesn’t escape, but then in .NET 10, it’s able to stack allocate it. Woo hoo! There’s obviously still future opportunity here, as the JIT doesn’t elide the allocation of the closure object, but that should be addressable with some more effort, hopefully in the near future.
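In the meantime, a common way to avoid the closure allocation entirely is to pass the state explicitly and use a static lambda. Here’s a hypothetical variant of the benchmark above written that way (the extra state parameter on the helper is my own addition for illustration):

public int SumNoClosure(int y) => DoubleResultWithState(static (x, s) => x + s, y, y);

private int DoubleResultWithState(Func<int, int, int> func, int state, int arg)
{
    // The static lambda captures nothing, so the compiler can cache a single
    // delegate instance rather than allocating a new delegate (and a closure)
    // on every call.
    int result = func(arg, state);
    return result + result;
}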

dotnet/runtime#104906 from @hez2010 and dotnet/runtime#112250 extend this kind of analysis and stack allocation to arrays. How many times have you written code like this?

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public void Test()
    {
        Process(new string[] { "a", "b", "c" });

        static void Process(string[] inputs)
        {
            foreach (string input in inputs)
            {
                Use(input);
            }

            [MethodImpl(MethodImplOptions.NoInlining)]
            static void Use(string input) { }
        }
    }
}

Some method I want to call accepts an array of inputs and does something for each input. I need to allocate an array to pass my inputs in, either explicitly, or maybe implicitly due to using params or a collection expression. Ideally moving forward there would be an overload of such a Process method that accepted a ReadOnlySpan<string> instead of a string[], and I could then avoid the allocation by construction. But for all of these cases where I’m forced to create an array, .NET 10 comes to the rescue.

| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|--------|---------|------|-------|-----------|-------------|
| Test | .NET 9.0 | 11.580 ns | 1.00 | 48 B | 1.00 |
| Test | .NET 10.0 | 3.960 ns | 0.34 | – | 0.00 |

The JIT was able to inline Process, see that the array never leaves the frame, and stack allocate it.
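And when you control the API being called, you can sidestep the array entirely, as mentioned above. Here’s a sketch of what a span-accepting overload could look like (a hypothetical API shape; with C# 13 params collections it can even be marked params so existing call sites bind to it):

static void Process(params ReadOnlySpan<string> inputs)
{
    foreach (string input in inputs)
    {
        Use(input);
    }
}

static void Use(string input) { }

With such an overload in place, a call like Process("a", "b", "c") no longer requires a heap-allocated string[] at all; the compiler can place the arguments into a stack-based buffer that the span refers to.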

Of course, now that we’re able to stack allocate arrays, we also want to be able to deal with a common way those arrays are used: via spans. dotnet/runtime#113977 and dotnet/runtime#116124 teach escape analysis to be able to reason about the fields in structs, which includes Span<T>, as it’s “just” a struct that stores a ref T field and an int length field.
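Conceptually, Span<T> looks something like the following simplified sketch (not the actual source; the real type has bounds-checked indexers, slicing, and more), which is why an analysis that understands struct fields can follow where the underlying reference goes:

// requires: using System.Runtime.InteropServices;
public readonly ref struct SketchSpan<T>
{
    private readonly ref T _reference; // managed pointer to the first element
    private readonly int _length;      // number of elements

    public SketchSpan(T[] array)
    {
        // A span over an array is just a reference to its first element plus the
        // length; if the array doesn't escape, neither does the reference the
        // span carries.
        _reference = ref MemoryMarshal.GetArrayDataReference(array);
        _length = array.Length;
    }

    public int Length => _length;
}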

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _buffer = new byte[3];

    [Benchmark]
    public void Test() => Copy3Bytes(0x12345678, _buffer);

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void Copy3Bytes(int value, Span<byte> dest) =>
        BitConverter.GetBytes(value).AsSpan(0, 3).CopyTo(dest);
}

Here, we’re using BitConverter.GetBytes, which allocates a byte[] containing the bytes from the input (in this case, it’ll be a four-byte array for the int), then we slice off three of the four bytes, and we copy them to the destination span.

| Method | Runtime | Mean | Ratio | Allocated | Alloc Ratio |
|--------|---------|------|-------|-----------|-------------|
| Test | .NET 9.0 | 9.7717 ns | 1.04 | 32 B | 1.00 |
| Test | .NET 10.0 | 0.8718 ns | 0.09 | – | 0.00 |

In .NET 9, we get the 32-byte allocation we’d expect for the byte[] in GetBytes (every object on 64-bit is at least 24 bytes, which will include the four bytes for the array’s length, and then the four bytes for the data will be in slots 24-27, and the size will be padded up to the next word boundary, for 32). In .NET 10, with GetBytes and AsSpan inlined, the JIT can see that the array doesn’t escape, and a stack allocated version of it can be used to seed the span, just as if it were created from any other stack allocation (like stackalloc). (This case also needed a little help from dotnet/runtime#113093, which taught the JIT that certain span operations, like the Memmove used internally by CopyTo, are non-escaping.)
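For comparison, avoiding that allocation manually has historically meant writing the bytes yourself, something along these lines (a sketch; note that BitConverter.GetBytes uses the machine’s native byte order, so this particular rewrite assumes a little-endian platform):

// requires: using System.Buffers.Binary;
private static void Copy3BytesManual(int value, Span<byte> dest)
{
    // Write the int into a stack buffer instead of a heap array, then copy
    // the low three bytes to the destination.
    Span<byte> tmp = stackalloc byte[4];
    BinaryPrimitives.WriteInt32LittleEndian(tmp, value);
    tmp.Slice(0, 3).CopyTo(dest);
}

The appeal of the .NET 10 behavior is that the original, idiomatic BitConverter version now achieves the same zero-allocation result without any of that manual work.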

Devirtualization

Interfaces and virtual methods are a critical aspect of .NET and the abstractions it enables. Being able to unwind these abstractions and “devirtualize” is then an important job for the JIT, which has taken notable leaps in capabilities here in .NET 10.

While arrays are one of the most central features provided by C# and .NET, and while the JIT expends a lot of energy and does a great job optimizing many aspects of arrays, one area in particular has caused it pain: an array’s interface implementations. The runtime manufactures a bunch of interface implementations for T[], and because they’re implemented differently from literally every other interface implementation in .NET, the JIT hasn’t been able to apply the same devirtualization capabilities it’s applied elsewhere. And, for anyone who’s dived deep into micro-benchmarks, this can lead to some odd observations. Here’s a performance comparison between iterating over a ReadOnlyCollection<T> using a foreach loop (going through its enumerator) and using a for loop (indexing into each element).

// dotnet run -c Release -f net9.0 --filter "*"
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.ObjectModel;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private ReadOnlyCollection<int> _list = new(Enumerable.Range(1, 1000).ToArray());

    [Benchmark]
    public int SumEnumerable()
    {
        int sum = 0;
        foreach (var item in _list)
        {
            sum += item;
        }
        return sum;
    }

    [Benchmark]
    public int SumForLoop()
    {
        ReadOnlyCollection<int> list = _list;
        int sum = 0;
        int count = list.Count;
        for (int i = 0; i < count; i++)
        {
            sum += _list[i];
        }
        return sum;
    }
}

When asked “which of these will be faster”, the obvious answer is “SumForLoop“. After all, SumEnumerable is going to allocate an enumerator and has to make twice the number of interface calls (MoveNext+Current per iteration vs this[int] per iteration). As it turns out, the obvious answer is also wrong. Here are the timings on my machine for .NET 9:

| Method | Mean |
|--------|------|
| SumEnumerable | 949.5 ns |
| SumForLoop | 1,932.7 ns |

What the what?? If I change the ToArray to instead be ToList, however, the numbers are much more in line with our expectations.

| Method | Mean |
|--------|------|
| SumEnumerable | 1,542.0 ns |
| SumForLoop | 894.1 ns |

So what’s going on here? It’s super subtle. First, it’s necessary to know that ReadOnlyCollection<T> just wraps an arbitrary IList<T>: ReadOnlyCollection<T>‘s GetEnumerator() returns _list.GetEnumerator() (I’m ignoring for this discussion the special case where the list is empty), and ReadOnlyCollection<T>‘s indexer just delegates to the IList<T>‘s indexer. So far, presumably, this all sounds like what you’d expect. But where things get interesting is around what the JIT is able to devirtualize. In .NET 9, it struggles to devirtualize calls to the interface implementations specifically on T[], so it won’t devirtualize either the _list.GetEnumerator() call or the _list[index] call. However, the enumerator that’s returned is just a normal type that implements IEnumerator<T>, and the JIT has no problem devirtualizing its MoveNext and Current members. That means we’re actually paying a lot more going through the indexer, because for N elements we have to make N interface calls, whereas with the enumerator, we only need the one GetEnumerator interface call and then no more after that.
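To make that indirection concrete, here’s roughly the shape of ReadOnlyCollection<T> (a simplified sketch, not the actual BCL source, which has more members and the empty-list special case mentioned above):

public class SketchReadOnlyCollection<T>
{
    private readonly IList<T> _list; // in the benchmark above, this is an int[]

    public SketchReadOnlyCollection(IList<T> list) => _list = list;

    public int Count => _list.Count;                                 // interface call
    public T this[int index] => _list[index];                        // interface call per element
    public IEnumerator<T> GetEnumerator() => _list.GetEnumerator();  // one interface call up front
}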

Thankfully, this is now addressed in .NET 10. dotnet/runtime#108153, dotnet/runtime#109209, dotnet/runtime#109237, and dotnet/runtime#116771 all make it possible for the JIT to devirtualize array’s interface method implementations. Now when we run the same benchmark (reverted back to using ToArray), we get results much more in line with our expectations, with both benchmarks improving from .NET 9 to .NET 10, and with SumForLoop on .NET 10 being the fastest.

| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| SumEnumerable | .NET 9.0 | 968.5 ns | 1.00 |
| SumEnumerable | .NET 10.0 | 775.5 ns | 0.80 |
| SumForLoop | .NET 9.0 | 1,960.5 ns | 1.00 |
| SumForLoop | .NET 10.0 | 624.6 ns | 0.32 |

One of the really interesting things about this is how many libraries are implemented on the premise that it’s faster to iterate an IList<T> via its indexer than via its IEnumerable<T>, and that includes System.Linq. All these years, LINQ has had specialized code paths for working with IList<T> where possible; in many cases that’s been a welcome optimization, but in some cases (such as when the concrete type is a ReadOnlyCollection<T>), it’s actually been a deoptimization.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.ObjectModel;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private ReadOnlyCollection<int> _list = new(Enumerable.Range(1, 1000).ToArray());

    [Benchmark]
    public int SkipTakeSum() => _list.Skip(100).Take(800).Sum();
}

| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| SkipTakeSum | .NET 9.0 | 3.525 us | 1.00 |
| SkipTakeSum | .NET 10.0 | 1.773 us | 0.50 |

Fixing devirtualization for array’s interface implementation then also has this transitive effect on LINQ.
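The pattern in question looks roughly like this (illustrative only, not System.Linq’s actual implementation):

static int SumSketch(IEnumerable<int> source)
{
    int sum = 0;

    if (source is IList<int> list)
    {
        // "Fast path": index directly, which costs one interface call per element
        // when the indexer can't be devirtualized.
        for (int i = 0; i < list.Count; i++)
        {
            sum += list[i];
        }
    }
    else
    {
        // General path: one GetEnumerator interface call, after which MoveNext and
        // Current can often be devirtualized once the enumerator's type is known.
        foreach (int item in source)
        {
            sum += item;
        }
    }

    return sum;
}

With arrays’ interface implementations now devirtualizable, that IList<T> fast path no longer carries the hidden penalty it did when the underlying list was an array.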

Guarded Devirtualization (GDV) is also improved in .NET 10, such as from dotnet/runtime#116453 and dotnet/runtime#109256. With dynamic PGO, the JIT is able to instrument a method’s compilation and then use the resulting profiling data as part of emitting an optimized version of the method. One of the things it can profile is which types are used in a virtual dispatch. If one type dominates, it can special-case that type in the code gen and emit a customized implementation specific to that type. That then enables devirtualization in that dedicated path, which is “guarded” by the relevant type check, hence “GDV”. In some cases, however, such as if a virtual call was being made in a shared generic context, GDV would not kick in. Now it will.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public bool Test() => GenericEquals("abc", "abc");

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static bool GenericEquals<T>(T a, T b) => EqualityComparer<T>.Default.Equals(a, b);
}

| Method | Runtime | Mean | Ratio |
|--------|---------|------|-------|
| Test | .NET 9.0 | 2.816 ns | 1.00 |
| Test | .NET 10.0 | 1.511 ns | 0.54 |
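To picture what the JIT is doing with that profile data, here’s the idea of guarded devirtualization written out by hand in C# (illustrative pseudocode only; the JIT does this in its generated code, and the Animal/Dog types are hypothetical):

static string Speak(Animal animal)
{
    // Profiling showed `animal` is almost always a Dog, so guard on that exact type...
    if (animal.GetType() == typeof(Dog))
    {
        // ...and use the devirtualized (and here, fully inlined) implementation.
        return "Woof";
    }

    // Fallback for any other type: the original virtual dispatch.
    return animal.MakeSound();
}

abstract class Animal { public abstract string MakeSound(); }
sealed class Dog : Animal { public override string MakeSound() => "Woof"; }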

dotnet/runtime#110827 from @hez2010 also helps more methods to be inlined by doing another pass looking for opportunities after later phases of devirtualization. The JIT’s optimizations are split up into multiple phases; each phase can make improvements, and those improvements can expose additional opportunities. If those opportunities would only be capitalized on by a phase that already ran, they can be missed. But for phases that are relatively cheap to perform, such as doing a pass looking for additional inlining opportunities, those phases can be repeated once enough other optimization has happened that it’s likely productive to do so again.

Bounds Checking

C# is a memory-safe language, an important aspect of modern programming languages. A key component of this is the inability to walk off the beginning or end of an array, string, or span. The runtime ensures that any such invalid attempt produces an exception, rather than being allowed to perform the invalid memory access. We can see what this looks like with a small benchmark:

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _array = new int[3];

    [Benchmark]
    public int Read() => _array[2];
}

This is a valid access: the _array contains three elements, and the Read method is reading its last element. However, the JIT can’t be 100% certain that this access is in bounds (something could have changed what’s in the _array field to be a shorter array), and thus it needs to emit a check to ensure we’re not walking off the end of the array. Here’s what the generated assembly code for Read looks like:

; .NET 10
; Tests.Read()
       push      rax
       mov       rax,[rdi+8]
       cmp       dword ptr [rax+8],2
       jbe       short M00_L00
       mov       eax,[rax+18]
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 25

The this reference is passed into the Read instance method in the rdi register, and the _array field is at offset 8, so the mov rax,[rdi+8] instruction is loading the address of the array into the rax register. Then the cmp is loading the value at offset 8 from that address; it so happens that’s where the length of the array is stored in the array object. So, this cmp instruction is the bounds check; it’s comparing 2 against that length to ensure it’s in bounds. If the array were too short for this access, the next jbe instruction would branch to the M00_L00 label, which calls the CORINFO_HELP_RNGCHKFAIL helper function that throws an IndexOutOfRangeException. Any time you see this pair of call CORINFO_HELP_RNGCHKFAIL/int 3 at the end of a method, there was at least one bounds check emitted by the JIT in that method.

Of course, we not only want safety, we also want great performance, and it’d be terrible for performance if every single read from an array (or string or span) incurred such an additional check. As such, the JIT strives to avoid emitting these checks when they’d be redundant, when it can prove by construction that the accesses are safe. For example, let me tweak my benchmark slightly, moving the array from an instance field into a static readonly field:

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly int[] s_array = new int[3];

    [Benchmark]
    public int Read() => s_array[2];
}

We now get this assembly:

; .NET 10
; Tests.Read()
       mov       rax,705D5419FA20
       mov       eax,[rax+18]
       ret
; Total bytes of code 14

The static readonly field is immutable, arrays can’t be resized, and the JIT can guarantee that the field is initialized prior to generating the code for Read. Therefore, when generating the code for Read, it can know with certainty that the array is of length three, and we’re accessing the element at index two. Therefore, the specified array index is guaranteed to be within bounds, and there’s no need for a bounds check. We simply get two movs, the first mov to load the address of the array (which, thanks to improvements in previous releases, is allocated on a heap that doesn’t need to be compacted such that the array lives at a fixed address), and the second mov to read the int value at the location of index two (these are ints, so index two lives 2 * sizeof(int) = 8 bytes from the start of the array’s data, which itself on 64-bit is offset 16 bytes from the start of the array reference, for a total offset of 24 bytes, or in hex 0x18, hence the rax+18 in the disassembly).

Every release of .NET, more and more opportunities are found and implemented to eschew bounds checks that were previously being generated. .NET 10 continues this trend.

Our first example comes from dotnet/runtime#109900, which was inspired by the implementation of BitOperations.Log2. The operation has intrinsic hardware support on many architectures, and generally BitOperations.Log2 will use one of the hardware intrinsics available to it for a very efficient implementation (e.g. Lzcnt.LeadingZeroCount, ArmBase.LeadingZeroCount, or X86Base.BitScanReverse); however, as a fallback it uses a lookup table. The lookup table has 32 elements, and the operation involves computing a uint value and then shifting it down by 27 in order to get the top 5 bits. Any possible result is guaranteed to be a non-negative number less than 32, but indexing into the span with that result still produced a bounds check, and, as this is a critical path, “unsafe” code (meaning code that eschews the guardrails the runtime supplies by default) was then used to avoid the bounds check.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "value")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int Log2SoftwareFallback2(uint value)
    {
        ReadOnlySpan<byte> Log2DeBruijn =
        [
            00, 09, 01, 10, 13, 21, 02, 29,
            11, 14, 16, 18, 22, 25, 03, 30,
            08, 12, 20, 28, 15, 17, 24, 07,
            19, 27, 23, 06, 26, 05, 04, 31
        ];

        value |= value >> 01;
        value |= value >> 02;
        value |= value >> 04;
        value |= value >> 08;
        value |= value >> 16;

        return Log2DeBruijn[(int)((value * 0x07C4ACDDu) >> 27)];
    }
}

Now in .NET 10, the bounds check is gone (note the presence of the call CORINFO_HELP_RNGCHKFAIL in the .NET 9 assembly and the lack of it in the .NET 10 assembly).

; .NET 9
; Tests.Log2SoftwareFallback2(UInt32)
       push      rax
       mov       eax,esi
       shr       eax,1
       or        esi,eax
       mov       eax,esi
       shr       eax,2
       or        esi,eax
       mov       eax,esi
       shr       eax,4
       or        esi,eax
       mov       eax,esi
       shr       eax,8
       or        esi,eax
       mov       eax,esi
       shr       eax,10
       or        eax,esi
       imul      eax,7C4ACDD
       shr       eax,1B
       cmp       eax,20
       jae       short M00_L00
       mov       rcx,7913CA812E10
       movzx     eax,byte ptr [rax+rcx]
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 74

; .NET 10
; Tests.Log2SoftwareFallback2(UInt32)
       mov       eax,esi
       shr       eax,1
       or        esi,eax
       mov       eax,esi
       shr       eax,2
       or        esi,eax
       mov       eax,esi
       shr       eax,4
       or        esi,eax
       mov       eax,esi
       shr       eax,8
       or        esi,eax
       mov       eax,esi
       shr       eax,10
       or        eax,esi
       imul      eax,7C4ACDD
       shr       eax,1B
       mov       rcx,7CA298325E10
       movzx     eax,byte ptr [rcx+rax]
       ret
; Total bytes of code 58

This improvement then enabled dotnet/runtime#118560 to simplify the code in the real Log2SoftwareFallback, avoiding manual use of unsafe constructs.
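The shape of that kind of cleanup looks something like this (illustrative, not the exact dotnet/runtime change; both variants assume value has already had the preceding shift/or sequence applied):

// Before (sketch): manual unsafe indexing to sidestep the bounds check.
// Requires using System.Runtime.CompilerServices and System.Runtime.InteropServices.
static int Log2FallbackUnsafe(uint value, ReadOnlySpan<byte> log2DeBruijn) =>
    Unsafe.Add(ref MemoryMarshal.GetReference(log2DeBruijn), (int)((value * 0x07C4ACDDu) >> 27));

// After (sketch): a plain span index; the JIT now proves the index is in bounds,
// so the safe code produces the same check-free memory access.
static int Log2FallbackSafe(uint value, ReadOnlySpan<byte> log2DeBruijn) =>
    log2DeBruijn[(int)((value * 0x07C4ACDDu) >> 27)];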

dotnet/runtime#113790 implements a similar case, where the result of a mathematical operation is guaranteed to be in bounds. In this case, it’s the result of Log2. The change teaches the JIT to understand the maximum possible value that Log2 could produce, and if that maximum is in bounds, then any result is guaranteed to be in bounds as well.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "value")]
public partial class Tests
{
    [Benchmark]
    [Arguments(12345)]
    public nint CountDigits(ulong value)
    {
        ReadOnlySpan<byte> log2ToPow10 =
        [
            1,  1,  1,  2,  2,  2,  3,  3,  3,  4,  4,  4,  4,  5,  5,  5,
            6,  6,  6,  7,  7,  7,  7,  8,  8,  8,  9,  9,  9,  10, 10, 10,
            10, 11, 11, 11, 12, 12, 12, 13, 13, 13, 13, 14, 14, 14, 15, 15,
            15, 16, 16, 16, 16, 17, 17, 17, 18, 18, 18, 19, 19, 19, 19, 20
        ];

        return log2ToPow10[(int)ulong.Log2(value)];
    }
}

We can see the bounds check present in the .NET 9 output and absent in the .NET 10 output:

; .NET 9
; Tests.CountDigits(UInt64)
       push      rax
       or        rsi,1
       xor       eax,eax
       lzcnt     rax,rsi
       xor       eax,3F
       cmp       eax,40
       jae       short M00_L00
       mov       rcx,7C2D0A213DF8
       movzx     eax,byte ptr [rax+rcx]
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 45

; .NET 10
; Tests.CountDigits(UInt64)
       or        rsi,1
       xor       eax,eax
       lzcnt     rax,rsi
       xor       eax,3F
       mov       rcx,71EFA9400DF8
       movzx     eax,byte ptr [rcx+rax]
       ret
; Total bytes of code 29

My choice of benchmark in this case was not coincidental. This pattern shows up in the FormattingHelpers.CountDigits internal method that’s used by the core primitive types in their ToString and TryFormat implementations, in order to determine how much space will be needed to store rendered digits for a number. As with the previous example, this routine is considered core enough that it was using unsafe code to avoid the bounds check. With this fix, the code was able to be changed back to using a simple span access, and even with the simpler code, it’s now also faster.
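For context, that digit count is what lets the formatting code size its destination exactly before writing any characters. Here’s a hypothetical sketch of the usage pattern (not the actual formatting code), using a CountDigits helper like the one in the benchmark above:

static string FormatSketch(ulong value)
{
    // CountDigits: a helper like the benchmark's (assumed static here); 12345 => 5.
    int digitCount = (int)CountDigits(value);
    return string.Create(digitCount, value, static (span, v) =>
    {
        // Fill the buffer from the least significant digit backwards.
        for (int i = span.Length - 1; i >= 0; i--)
        {
            span[i] = (char)('0' + (int)(v % 10));
            v /= 10;
        }
    });
}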

Now, consider this code:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "ids")]
public partial class Tests
{
    public IEnumerable<int[]> Ids { get; } = [[1, 2, 3, 4, 5, 1]];

    [Benchmark]
    [ArgumentsSource(nameof(Ids))]
    public bool StartAndEndAreSame(int[] ids) => ids[0] == ids[^1];
}

I have a method that’s accepting an int[] and checking to see whether it starts and ends with the same value. The JIT has no way of knowing whether the int[] is empty or not, so it does need a bounds check; otherwise, accessing ids[0] could walk off the end of the array. However, this is what we see on .NET 9:

; .NET 9
; Tests.StartAndEndAreSame(Int32[])
       push      rax
       mov       eax,[rsi+8]
       test      eax,eax
       je        short M00_L00
       mov       ecx,[rsi+10]
       lea       edx,[rax-1]
       cmp       edx,eax
       jae       short M00_L00
       mov       eax,edx
       cmp       ecx,[rsi+rax*4+10]
       sete      al
       movzx     eax,al
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 41

Note there are two jumps to the M00_L00 label that handles failed bounds checks… that’s because there are two bounds checks here, one for the start access and one for the end access. But that shouldn’t be necessary. ids[^1] is the same as ids[ids.Length - 1]. If the code has successfully accessed ids[0], that means the array is at least one element in length, and if it’s at least one element in length, ids[ids.Length - 1] will always be in bounds. Thus, the second bounds check shouldn’t be needed. Indeed, thanks to dotnet/runtime#116105, this is what we now get on .NET 10 (one branch to M00_L00 instead of two):

; .NET 10
; Tests.StartAndEndAreSame(Int32[])
       push      rax
       mov       eax,[rsi+8]
       test      eax,eax
       je        short M00_L00
       mov       ecx,[rsi+10]
       dec       eax
       cmp       ecx,[rsi+rax*4+10]
       sete      al
       movzx     eax,al
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 34

What’s really interesting to me here is the knock-on effect of having removed the bounds check. It didn’t just eliminate the cmp/jae pair of instructions that’s typical of a bounds check. The .NET 9 version of the code had this:

lea edx,[rax-1]
cmp edx,eax
jae short M00_L00
mov eax,edx

At this point in the assembly, the rax register is storing the length of the array. It’s calculating ids.Length - 1 and storing the result into edx, and then checking to see whether ids.Length-1 is in bounds of ids.Length (the only way it wouldn’t be is if the array were empty such that ids.Length-1 wrapped around to uint.MaxValue); if it’s not, it jumps to the fail handler, and if it is, it stores the already computed ids.Length - 1 into eax. By removing the bounds check, we get rid of those two intervening instructions, leaving these:

lea edx,[rax-1]
mov eax,edx

which is a little silly, as this sequence is just computing a decrement, and as long as it’s ok that flags get modified, it could instead just be:

dec eax

which, as you can see in the .NET 10 output, is exactly what .NET 10 now does.

dotnet/runtime#115980 addresses another case. Let’s say I have this method:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "start", "text")]
public partial class Tests
{
    [Benchmark]
    [Arguments("abc", "abc.")]
    public bool IsFollowedByPeriod(string start, string text) =>
        start.Length < text.Length && text[start.Length] == '.';
}

We’re validating that one input’s length is less than the other, and then checking to see what comes immediately after it in the other. We know that string.Length is immutable, so a bounds check here is redundant, but until .NET 10, the JIT couldn’t see that.

; .NET 9
; Tests.IsFollowedByPeriod(System.String, System.String)
       push      rbp
       mov       rbp,rsp
       mov       eax,[rsi+8]
       mov       ecx,[rdx+8]
       cmp       eax,ecx
       jge       short M00_L00
       cmp       eax,ecx
       jae       short M00_L01
       cmp       word ptr [rdx+rax*2+0C],2E
       sete      al
       movzx     eax,al
       pop       rbp
       ret
M00_L00:
       xor       eax,eax
       pop       rbp
       ret
M00_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 42

; .NET 10
; Tests.IsFollowedByPeriod(System.String, System.String)
       mov       eax,[rsi+8]
       mov       ecx,[rdx+8]
       cmp       eax,ecx
       jge       short M00_L00
       cmp       word ptr [rdx+rax*2+0C],2E
       sete      al
       movzx     eax,al
       ret
M00_L00:
       xor       eax,eax
       ret
; Total bytes of code 26

The removal of the bounds check almost halves the size of the function. If we don’t need to do a bounds check, we get to elide the cmp/jae. Without that branch, nothing is targeting M00_L01, and we can remove the call/int pair that were only necessary to support a bounds check. Then without the call in M00_L01, which was the only call in the whole method, the prologue and epilogue can be elided, meaning we also don’t need the opening and closing push and pop instructions.

dotnet/runtime#113233 improved the handling of “assertions” (facts the JIT establishes and then uses to drive optimizations) to be less order-dependent. In .NET 9, this code:

static bool Test(ReadOnlySpan<char> span, int pos) =>
    pos > 0 &&
    pos <= span.Length - 42 &&
    span[pos - 1] != '\n';

was successfully removing the bounds check on the span access, but the following variant, which just switches the order of the first two conditions, was still incurring the bounds check.

static bool Test(ReadOnlySpan<char> span, int pos) =>
    pos <= span.Length - 42 &&
    pos > 0 &&
    span[pos - 1] != '\n';

Note that each condition contributes an assertion (fact), and those assertions need to be merged in order to know the bounds check can be avoided. Now in .NET 10, the bounds check is elided, regardless of the order.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _s = new string('s', 100);
    private int _pos = 10;

    [Benchmark]
    public bool Test()
    {
        string s = _s;
        int pos = _pos;
        return
            pos <= s.Length - 42 &&
            pos > 0 &&
            s[pos - 1] != '\n';
    }
}
; .NET 9
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rdi+10]
       mov       edx,[rax+8]
       lea       edi,[rdx-2A]
       cmp       edi,ecx
       jl        short M00_L00
       test      ecx,ecx
       jle       short M00_L00
       dec       ecx
       cmp       ecx,edx
       jae       short M00_L01
       cmp       word ptr [rax+rcx*2+0C],0A
       setne     al
       movzx     eax,al
       pop       rbp
       ret
M00_L00:
       xor       eax,eax
       pop       rbp
       ret
M00_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 55

; .NET 10
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rdi+10]
       mov       edx,[rax+8]
       add       edx,0FFFFFFD6
       cmp       edx,ecx
       jl        short M00_L00
       test      ecx,ecx
       jle       short M00_L00
       dec       ecx
       cmp       word ptr [rax+rcx*2+0C],0A
       setne     al
       movzx     eax,al
       pop       rbp
       ret
M00_L00:
       xor       eax,eax
       pop       rbp
       ret
; Total bytes of code 45

dotnet/runtime#113862 addresses a similar case where assertions weren’t being handled as precisely as they could have been. Consider this code:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _arr = Enumerable.Range(0, 10).ToArray();

    [Benchmark]
    public int Sum()
    {
        int[] arr = _arr;
        int sum = 0;

        int i;
        for (i = 0; i < arr.Length - 3; i += 4)
        {
            sum += arr[i + 0];
            sum += arr[i + 1];
            sum += arr[i + 2];
            sum += arr[i + 3];
        }

        for (; i < arr.Length; i++)
        {
            sum += arr[i];
        }

        return sum;
    }
}

The Sum method is trying to do manual loop unrolling. Rather than incurring a branch on each element, it’s handling four elements per iteration. Then, for the case where the length of the input isn’t evenly divisible by four, it’s handling the remaining elements in a separate loop. In .NET 9, the JIT successfully elides the bounds checks in the main unrolled loop:

; .NET 9
; Tests.Sum()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       xor       ecx,ecx
       xor       edx,edx
       mov       edi,[rax+8]
       lea       esi,[rdi-3]
       test      esi,esi
       jle       short M00_L02
M00_L00:
       mov       r8d,edx
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+1]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+2]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+3]
       add       ecx,[rax+r8*4+10]
       add       edx,4
       cmp       esi,edx
       jg        short M00_L00
       jmp       short M00_L02
M00_L01:
       cmp       edx,edi
       jae       short M00_L03
       mov       esi,edx
       add       ecx,[rax+rsi*4+10]
       inc       edx
M00_L02:
       cmp       edi,edx
       jg        short M00_L01
       mov       eax,ecx
       pop       rbp
       ret
M00_L03:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 92

You can see this in the M00_L00 section, which has the five add instructions (four for the summed elements, and one for the index). However, we still see the CORINFO_HELP_RNGCHKFAIL at the end, indicating this method has a bounds check. That’s coming from the final loop, due to the JIT losing track of the fact that i is guaranteed to be non-negative. Now in .NET 10, that bounds check is removed as well (again, just look for the lack of the CORINFO_HELP_RNGCHKFAIL call).

; .NET 10
; Tests.Sum()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       xor       ecx,ecx
       xor       edx,edx
       mov       edi,[rax+8]
       lea       esi,[rdi-3]
       test      esi,esi
       jle       short M00_L01
M00_L00:
       mov       r8d,edx
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+1]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+2]
       add       ecx,[rax+r8*4+10]
       lea       r8d,[rdx+3]
       add       ecx,[rax+r8*4+10]
       add       edx,4
       cmp       esi,edx
       jg        short M00_L00
M00_L01:
       cmp       edi,edx
       jle       short M00_L03
       test      edx,edx
       jl        short M00_L04
M00_L02:
       mov       esi,edx
       add       ecx,[rax+rsi*4+10]
       inc       edx
       cmp       edi,edx
       jg        short M00_L02
M00_L03:
       mov       eax,ecx
       pop       rbp
       ret
M00_L04:
       mov       esi,edx
       add       ecx,[rax+rsi*4+10]
       inc       edx
       cmp       edi,edx
       jg        short M00_L04
       jmp       short M00_L03
; Total bytes of code 102

Another nice improvement comes from dotnet/runtime#112824, which teaches the JIT to turn facts it already learned from earlier checks into concrete numeric ranges, and then use those ranges to fold away later relational tests and bounds checks. Consider this example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _array = new int[10];

    [Benchmark]
    public void Test() => SetAndSlice(_array);

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static Span<int> SetAndSlice(Span<int> src)
    {
        src[5] = 42;
        return src.Slice(4);
    }
}

We have to incur a bounds check for the src[5], as the JIT has no evidence that src is at least six elements long. However, by the time we get to the Slice call, we know the span has a length of at least six, or else writing into src[5] would have failed. We can use that knowledge to remove the length check from within the Slice call (note the removal of the call qword ptr [7F8DDB3A7810]/int 3 sequence, which is the manual length check and call to a throw helper method in Slice).

; .NET 9
; Tests.SetAndSlice(System.Span`1<Int32>)
       push      rbp
       mov       rbp,rsp
       cmp       esi,5
       jbe       short M01_L01
       mov       dword ptr [rdi+14],2A
       cmp       esi,4
       jb        short M01_L00
       add       rdi,10
       mov       rax,rdi
       add       esi,0FFFFFFFC
       mov       edx,esi
       pop       rbp
       ret
M01_L00:
       call      qword ptr [7F8DDB3A7810]
       int       3
M01_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 48

; .NET 10
; Tests.SetAndSlice(System.Span`1<Int32>)
       push      rax
       cmp       esi,5
       jbe       short M01_L00
       mov       dword ptr [rdi+14],2A
       lea       rax,[rdi+10]
       lea       edx,[rsi-4]
       add       rsp,8
       ret
M01_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 31

Let’s look at one more, which has a very nice impact on bounds checking, even though technically the optimization is broader than just that. dotnet/runtime#113998 creates assertions from switch targets. This means that the body of a switch case inherits facts about the value being switched over based on the case itself, e.g. in a case 3 for switch (x), the body of that case will now “know” that x is three. This is great for very popular patterns with arrays, strings, and spans, where developers switch over the length and then, in each branch, index into the elements known to be in range. Consider this:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _array = [1, 2];

    [Benchmark]
    public int SumArray() => Sum(_array);

    [MethodImpl(MethodImplOptions.NoInlining)]
    public int Sum(ReadOnlySpan<int> span)
    {
        switch (span.Length)
        {
            case 0: return 0;
            case 1: return span[0];
            case 2: return span[0] + span[1];
            case 3: return span[0] + span[1] + span[2];
            default: return -1;
        }
    }
}

On .NET 9, each of those six span dereferences ends up with a bounds check:

; .NET 9
; Tests.Sum(System.ReadOnlySpan`1<Int32>)
       push      rbp
       mov       rbp,rsp
M01_L00:
       cmp       edx,2
       jne       short M01_L02
       test      edx,edx
       je        short M01_L04
       mov       eax,[rsi]
       cmp       edx,1
       jbe       short M01_L04
       add       eax,[rsi+4]
M01_L01:
       pop       rbp
       ret
M01_L02:
       cmp       edx,3
       ja        short M01_L03
       mov       eax,edx
       lea       rcx,[783DA42091B8]
       mov       ecx,[rcx+rax*4]
       lea       rdi,[M01_L00]
       add       rcx,rdi
       jmp       rcx
M01_L03:
       mov       eax,0FFFFFFFF
       pop       rbp
       ret
       test      edx,edx
       je        short M01_L04
       mov       eax,[rsi]
       cmp       edx,1
       jbe       short M01_L04
       add       eax,[rsi+4]
       cmp       edx,2
       jbe       short M01_L04
       add       eax,[rsi+8]
       jmp       short M01_L01
       test      edx,edx
       je        short M01_L04
       mov       eax,[rsi]
       jmp       short M01_L01
       xor       eax,eax
       pop       rbp
       ret
M01_L04:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 103

You can see the tell-tale bounds check sign (CORINFO_HELP_RNGCHKFAIL) under M01_L04, and no fewer than six jumps targeting that label, one for each span[...] access. But on .NET 10, we get this:

; .NET 10
; Tests.Sum(System.ReadOnlySpan`1<Int32>)
       push      rbp
       mov       rbp,rsp
M01_L00:
       cmp       edx,2
       jne       short M01_L02
       mov       eax,[rsi]
       add       eax,[rsi+4]
M01_L01:
       pop       rbp
       ret
M01_L02:
       cmp       edx,3
       ja        short M01_L03
       mov       eax,edx
       lea       rcx,[72C15C0F8FD8]
       mov       ecx,[rcx+rax*4]
       lea       rdx,[M01_L00]
       add       rcx,rdx
       jmp       rcx
M01_L03:
       mov       eax,0FFFFFFFF
       pop       rbp
       ret
       xor       eax,eax
       pop       rbp
       ret
       mov       eax,[rsi]
       jmp       short M01_L01
       mov       eax,[rsi]
       add       eax,[rsi+4]
       add       eax,[rsi+8]
       jmp       short M01_L01
; Total bytes of code 70

The CORINFO_HELP_RNGCHKFAIL and all the jumps to it have evaporated.

Cloning

There are other ways the JIT can remove bounds checking even when it can’t prove statically that every individual access is safe. Consider this method:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _arr = new int[16];

    [Benchmark]
    public void Test()
    {
        int[] arr = _arr;
        arr[0] = 2;
        arr[1] = 3;
        arr[2] = 5;
        arr[3] = 8;
        arr[4] = 13;
        arr[5] = 21;
        arr[6] = 34;
        arr[7] = 55;
    }
}

Here’s the assembly code generated on .NET 9:

; .NET 9
; Tests.Test()
       push      rax
       mov       rax,[rdi+8]
       mov       ecx,[rax+8]
       test      ecx,ecx
       je        short M00_L00
       mov       dword ptr [rax+10],2
       cmp       ecx,1
       jbe       short M00_L00
       mov       dword ptr [rax+14],3
       cmp       ecx,2
       jbe       short M00_L00
       mov       dword ptr [rax+18],5
       cmp       ecx,3
       jbe       short M00_L00
       mov       dword ptr [rax+1C],8
       cmp       ecx,4
       jbe       short M00_L00
       mov       dword ptr [rax+20],0D
       cmp       ecx,5
       jbe       short M00_L00
       mov       dword ptr [rax+24],15
       cmp       ecx,6
       jbe       short M00_L00
       mov       dword ptr [rax+28],22
       cmp       ecx,7
       jbe       short M00_L00
       mov       dword ptr [rax+2C],37
       add       rsp,8
       ret
M00_L00:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 114

Even if you’re not proficient at reading assembly, the pattern should still be obvious. In the C# code, we have eight writes into the array, and in the assembly code, we have eight repetitions of the same pattern: cmp ecx,LENGTH to compare the length of the array against the required LENGTH, jbe short M00_L00 to jump to the CORINFO_HELP_RNGCHKFAIL helper if the bounds check fails, and mov dword ptr [rax+OFFSET],VALUE to store VALUE into the array at byte offset OFFSET. Inside the Test method, the JIT can’t know how long _arr is, so it must include bounds checks. Moreover, it must include all of the bounds checks, rather than coalescing them, because it is forbidden from introducing behavioral changes as part of optimizations. Imagine instead if it chose to coalesce all of the bounds checks into a single check, and emitted this method as if it were the equivalent of the following:

if (arr.Length >= 8)
{
    arr[0] = 2;
    arr[1] = 3;
    arr[2] = 5;
    arr[3] = 8;
    arr[4] = 13;
    arr[5] = 21;
    arr[6] = 34;
    arr[7] = 55;
}
else
{
    throw new IndexOutOfRangeException();
}

Now, let’s say the array was actually of length four. The original program would have filled the array with values [2, 3, 5, 8] before throwing an exception, but this transformed code wouldn’t (there wouldn’t be any writes to the array). That’s an observable behavioral change. An enterprising developer could of course choose to rewrite their code to avoid some of these checks, e.g.

arr[7] = 55;
arr[0] = 2;
arr[1] = 3;
arr[2] = 5;
arr[3] = 8;
arr[4] = 13;
arr[5] = 21;
arr[6] = 34;

By moving the last store to the beginning, the developer has given the JIT extra knowledge. The JIT can now see that if the first store succeeds, the rest are guaranteed to succeed as well, and the JIT will emit a single bounds check. But, again, that’s the developer choosing to change their program in a way the JIT must not. However, there are other things the JIT can do. Imagine the JIT chose to rewrite the method like this instead:

if (arr.Length >= 8)
{
    arr[0] = 2;
    arr[1] = 3;
    arr[2] = 5;
    arr[3] = 8;
    arr[4] = 13;
    arr[5] = 21;
    arr[6] = 34;
    arr[7] = 55;
}
else
{
    arr[0] = 2;
    arr[1] = 3;
    arr[2] = 5;
    arr[3] = 8;
    arr[4] = 13;
    arr[5] = 21;
    arr[6] = 34;
    arr[7] = 55;
}

To our C# sensibilities, that looks unnecessarily complicated; the if and the else block contain exactly the same C# code. But, knowing what we now know about how the JIT can use known length information to elide bounds checks, it starts to make a bit more sense. Here’s what the JIT emits for this variant on .NET 9:

; .NET 9
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rax+8]
       cmp       ecx,8
       jl        short M00_L00
       mov       rcx,300000002
       mov       [rax+10],rcx
       mov       rcx,800000005
       mov       [rax+18],rcx
       mov       rcx,150000000D
       mov       [rax+20],rcx
       mov       rcx,3700000022
       mov       [rax+28],rcx
       pop       rbp
       ret
M00_L00:
       test      ecx,ecx
       je        short M00_L01
       mov       dword ptr [rax+10],2
       cmp       ecx,1
       jbe       short M00_L01
       mov       dword ptr [rax+14],3
       cmp       ecx,2
       jbe       short M00_L01
       mov       dword ptr [rax+18],5
       cmp       ecx,3
       jbe       short M00_L01
       mov       dword ptr [rax+1C],8
       cmp       ecx,4
       jbe       short M00_L01
       mov       dword ptr [rax+20],0D
       cmp       ecx,5
       jbe       short M00_L01
       mov       dword ptr [rax+24],15
       cmp       ecx,6
       jbe       short M00_L01
       mov       dword ptr [rax+28],22
       cmp       ecx,7
       jbe       short M00_L01
       mov       dword ptr [rax+2C],37
       pop       rbp
       ret
M00_L01:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 177

The else block is compiled to the M00_L00 label, which contains those same eight repeated blocks we saw earlier. But the if block (above the M00_L00 label) is interesting. The only branch there is the initial array.Length >= 8 check I wrote in the C# code, emitted as the cmp ecx,8/jl short M00_L00 pair of instructions. The rest of the block is just mov instructions (and you can see there are only four writes into the array rather than eight… the JIT has optimized the eight four-byte writes into four eight-byte writes). In our rewrite, we’ve manually cloned the code, so that in what we expect to be the vast, vast, vast majority case (presumably we wouldn’t have written the array writes in the first place if we thought they’d fail), we only incur the single length check, and then we have our “hopefully this is never needed” fallback case for the rare situation where it is. Of course, you shouldn’t (and shouldn’t need to) do such manual cloning. But, the JIT can do such cloning for you, and does.

“Cloning” is an optimization long employed by the JIT, where it will do this kind of code duplication, typically of loops, when it believes that in doing so, it can heavily optimize a common case. Now in .NET 10, thanks to dotnet/runtime#112595, it can employ this same technique for these kinds of sequences of writes. Going back to our original benchmark, here’s what we now get on .NET 10:

; .NET 10
; Tests.Test()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rax+8]
       mov       edx,ecx
       cmp       edx,7
       jle       short M00_L01
       mov       rdx,300000002
       mov       [rax+10],rdx
       mov       rcx,800000005
       mov       [rax+18],rcx
       mov       rcx,150000000D
       mov       [rax+20],rcx
       mov       rcx,3700000022
       mov       [rax+28],rcx
M00_L00:
       pop       rbp
       ret
M00_L01:
       test      edx,edx
       je        short M00_L02
       mov       dword ptr [rax+10],2
       cmp       ecx,1
       jbe       short M00_L02
       mov       dword ptr [rax+14],3
       cmp       ecx,2
       jbe       short M00_L02
       mov       dword ptr [rax+18],5
       cmp       ecx,3
       jbe       short M00_L02
       mov       dword ptr [rax+1C],8
       cmp       ecx,4
       jbe       short M00_L02
       mov       dword ptr [rax+20],0D
       cmp       ecx,5
       jbe       short M00_L02
       mov       dword ptr [rax+24],15
       cmp       ecx,6
       jbe       short M00_L02
       mov       dword ptr [rax+28],22
       cmp       ecx,7
       jbe       short M00_L02
       mov       dword ptr [rax+2C],37
       jmp       short M00_L00
M00_L02:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 179

This structure looks almost identical to what we got when we manually cloned: the JIT has emitted the same code twice, except in one case, there are no bounds checks, and in the other case, there are all the bounds checks, and a single length check determines which path to follow. Pretty neat.

As noted, the JIT has been doing cloning for years, in particular for loops over arrays. However, more and more code is being written against spans instead of arrays, and unfortunately this valuable optimization didn’t apply to spans. Now with dotnet/runtime#113575, it does! We can see this with a basic looping example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _arr = new int[16];
    private int _count = 8;

    [Benchmark]
    public void WithSpan()
    {
        Span<int> span = _arr;
        int count = _count;

        for (int i = 0; i < count; i++)
        {
            span[i] = i;
        }
    }

    [Benchmark]
    public void WithArray()
    {
        int[] arr = _arr;
        int count = _count;

        for (int i = 0; i < count; i++)
        {
            arr[i] = i;
        }
    }
}

In both WithArray and WithSpan, we have the same loop, iterating from 0 to _count, which has an unknown relationship to the length of _arr, so there has to be some kind of bounds checking emitted. Here’s what we get on .NET 9 for WithSpan:

; .NET 9
; Tests.WithSpan()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       test      rax,rax
       je        short M00_L03
       lea       rcx,[rax+10]
       mov       eax,[rax+8]
M00_L00:
       mov       edi,[rdi+10]
       xor       edx,edx
       test      edi,edi
       jle       short M00_L02
       nop       dword ptr [rax]
M00_L01:
       cmp       edx,eax
       jae       short M00_L04
       mov       [rcx+rdx*4],edx
       inc       edx
       cmp       edx,edi
       jl        short M00_L01
M00_L02:
       pop       rbp
       ret
M00_L03:
       xor       ecx,ecx
       xor       eax,eax
       jmp       short M00_L00
M00_L04:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 59

There’s some upfront assembly here associated with loading _arr into a span, loading _count, and checking to see whether the count is 0 (in which case the whole loop can be skipped). Then the core of the loop is at M00_L01, which is repeatedly checking edx (which contains i) against the length of the span (in eax), jumping to CORINFO_HELP_RNGCHKFAIL if it’s an out-of-bounds access, writing edx (i) into the span at the next position, bumping up i, and then jumping back to M00_L01 to keep iterating if i is still less than count (stored in edi). In other words, we have two checks per iteration: is i still within the bounds of the span, and is i still less than count. Now here’s what we get on .NET 9 for WithArray:

; .NET 9
; Tests.WithArray()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       mov       ecx,[rdi+10]
       xor       edx,edx
       test      ecx,ecx
       jle       short M00_L01
       test      rax,rax
       je        short M00_L02
       cmp       [rax+8],ecx
       jl        short M00_L02
       nop       dword ptr [rax+rax]
M00_L00:
       mov       edi,edx
       mov       [rax+rdi*4+10],edx
       inc       edx
       cmp       edx,ecx
       jl        short M00_L00
M00_L01:
       pop       rbp
       ret
M00_L02:
       cmp       edx,[rax+8]
       jae       short M00_L03
       mov       edi,edx
       mov       [rax+rdi*4+10],edx
       inc       edx
       cmp       edx,ecx
       jl        short M00_L02
       jmp       short M00_L01
M00_L03:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 71

Here, label M00_L02 looks very similar to the loop we just saw in WithSpan, incurring both the check against count and the bounds check on every iteration. But note section M00_L00: it’s a clone of the same loop, still with the cmp edx,ecx that checks i against count on each iteration, but no additional bounds checking in sight. The JIT has cloned the loop, specializing one to not have bounds checks, and then in the upfront section, it determines which path to follow based on a single check against the array’s length (cmp [rax+8],ecx/jl short M00_L02). Now in .NET 10, here’s what we get for WithSpan:

; .NET 10
; Tests.WithSpan()
       push      rbp
       mov       rbp,rsp
       mov       rax,[rdi+8]
       test      rax,rax
       je        short M00_L04
       lea       rcx,[rax+10]
       mov       eax,[rax+8]
M00_L00:
       mov       edx,[rdi+10]
       xor       edi,edi
       test      edx,edx
       jle       short M00_L02
       cmp       edx,eax
       jg        short M00_L03
M00_L01:
       mov       eax,edi
       mov       [rcx+rax*4],edi
       inc       edi
       cmp       edi,edx
       jl        short M00_L01
M00_L02:
       pop       rbp
       ret
M00_L03:
       cmp       edi,eax
       jae       short M00_L05
       mov       esi,edi
       mov       [rcx+rsi*4],edi
       inc       edi
       cmp       edi,edx
       jl        short M00_L03
       jmp       short M00_L02
M00_L04:
       xor       ecx,ecx
       xor       eax,eax
       jmp       short M00_L00
M00_L05:
       call      CORINFO_HELP_RNGCHKFAIL
       int       3
; Total bytes of code 75

As with WithArray in .NET 9, WithSpan for .NET 10 has the loop cloned, with the M00_L03 block containing the bounds check on each iteration, and the M00_L01 block eliding the bounds check on each iteration.

The JIT gains more cloning abilities in .NET 10, as well. dotnet/runtime#110020, dotnet/runtime#108604, and dotnet/runtime#110483 make it possible for the JIT to clone try/finally blocks, whereas previously it would immediately bail out of cloning any regions containing such constructs. This might seem niche, but it’s actually quite valuable when you consider that foreach’ing over an enumerable typically involves a hidden try/finally, with the finally calling the enumerator’s Dispose.
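
To make the foreach connection concrete, here's a rough sketch (my own illustration, not runtime code) of how the compiler lowers a foreach over an IEnumerable<int>; the hidden try/finally it produces is exactly the kind of region that cloning previously had to give up on:

using System.Collections.Generic;

static class ForeachLoweringSketch
{
    // Roughly what "foreach (int value in values) sum += value;" becomes after
    // compiler lowering: the loop body runs inside a try, and the enumerator's
    // Dispose is called from a finally.
    public static int Sum(IEnumerable<int> values)
    {
        int sum = 0;
        IEnumerator<int> e = values.GetEnumerator();
        try
        {
            while (e.MoveNext())
            {
                sum += e.Current;
            }
        }
        finally
        {
            e.Dispose();
        }
        return sum;
    }
}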

Many of these different optimizations interact with each other. Dynamic PGO triggers a form of cloning, as part of the guarded devirtualization (GDV) mentioned earlier: if the instrumentation data reveals that a particular virtual call is generally performed on an instance of a specific type, the JIT can clone the resulting code into one path specific to that type and another path that handles any type. That then enables the specific-type code path to devirtualize the call and possibly inline it. And if it inlines it, that then provides more opportunities for the JIT to see that an object doesn’t escape, and potentially stack allocate it. dotnet/runtime#111473, dotnet/runtime#116978, dotnet/runtime#116992, dotnet/runtime#117222, and dotnet/runtime#117295 enable that, enhancing escape analysis to determine if an object only escapes when such a generated type test fails (when the target object isn’t of the expected common type).
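
As a rough mental model (my own illustration rather than actual JIT output, with made-up Shape/Circle types), guarded devirtualization effectively turns a virtual call site into a guarded pair of paths like this, assuming profiling showed the receiver is almost always a Circle:

using System;

abstract class Shape
{
    public abstract double Area();
}

sealed class Circle : Shape
{
    public double Radius;
    public override double Area() => Math.PI * Radius * Radius;
}

static class GdvSketch
{
    // The guard is the generated type test; when it succeeds, the call is
    // devirtualized and can be inlined, which in turn feeds escape analysis.
    public static double Area(Shape shape)
    {
        if (shape.GetType() == typeof(Circle))
        {
            return ((Circle)shape).Area(); // fast path: direct, inlinable call
        }

        return shape.Area(); // fallback path: normal virtual dispatch
    }
}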

I want to pause for a moment, because my words thus far aren’t nearly enthusiastic enough to highlight the magnitude of what this enables. The dotnet/runtime repo uses an automated performance analysis system which flags when benchmarks significantly improve or regress and ties those changes back to the responsible PR. This is what it looked like for this PR: [image: “Conditional Escape Analysis Triggering Many Benchmark Improvements”]. We can see why this is so good from a simple example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _values = Enumerable.Range(1, 100).ToArray();

    [Benchmark]
    public int Sum() => Sum(_values);

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static int Sum(IEnumerable<int> values)
    {
        int sum = 0;
        foreach (int value in values)
        {
            sum += value;
        }
        return sum;
    }
}

With dynamic PGO, the instrumented code for Sum will see that values is generally an int[], and it’ll be able to emit a specialized code path in the optimized Sum implementation for when it is. And then with this ability to do conditional escape analysis, for the common path the JIT can see that the resulting GetEnumerator produces an IEnumerator<int> that never escapes, such that along with all of the relevant methods being devirtualized and inlined, the enumerator can be stack allocated.

Method | Runtime   | Mean      | Ratio | Allocated | Alloc Ratio
------ | --------- | --------- | ----- | --------- | -----------
Sum    | .NET 9.0  | 109.86 ns | 1.00  | 32 B      | 1.00
Sum    | .NET 10.0 |  35.45 ns | 0.32  | –         | 0.00

Just think about how many places in your apps and services you enumerate collections like this, and you can see why it’s such an exciting improvement. Note that these cases don’t always even require PGO. Consider a case like this:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly IEnumerable<int> s_values = new int[] { 1, 2, 3, 4, 5 };

    [Benchmark]
    public int Sum()
    {
        int sum = 0;
        foreach (int value in s_values)
        {
            sum += value;
        }
        return sum;
    }
}

Here, the JIT can see that even though s_values is typed as IEnumerable<int>, it’s always actually an int[]. In that case, dotnet/runtime#111948 enables the field to be retyped in the JIT as int[], and the enumerator can be stack allocated.

Method | Runtime   | Mean      | Ratio | Allocated | Alloc Ratio
------ | --------- | --------- | ----- | --------- | -----------
Sum    | .NET 9.0  | 16.341 ns | 1.00  | 32 B      | 1.00
Sum    | .NET 10.0 |  2.059 ns | 0.13  | –         | 0.00

Of course, too much cloning can be a bad thing, in particular as it increases code size. dotnet/runtime#108771 employs a heuristic to determine whether loops that can be cloned should be cloned; the larger the loop, the less likely it is to be cloned.

Inlining

“Inlining”, which replaces a call to a function with a copy of that function’s implementation, has always been a critically important optimization. It’s easy to think about the benefits of inlining as just being about avoiding the overhead of a call, and while that can be meaningful (especially when considering security mechanisms like Intel’s Control-Flow Enforcement Technology, which slightly increases the cost of calls), generally the most benefit from inlining comes from knock-on benefits. Just as a simple example, if you have code like:

int i = Divide(10, 5);

static int Divide(int n, int d) => n / d;

if Divide doesn’t get inlined, then when Divide is called, it’ll need to perform the actual idiv, which is a relatively expensive operation. In contrast, if Divide is inlined, then the call site becomes:

int i = 10 / 5;

which can be evaluated at compile time and becomes just:

int i = 2;

More compelling examples were already seen throughout the discussion of escape analysis and stack allocation, which depend heavily on the ability to inline methods. Given the increased importance of inlining, it’s gotten even more focus in .NET 10.

Some of the .NET work related to inlining is about enabling more kinds of things to be inlined. Historically, a variety of constructs present in a method would prevent that method from even being considered for inlining. Arguably the most well known of these is exception handling: methods with exception handling clauses, e.g. try/catch or try/finally, would not be inlined. Even a simple method like M in this example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private readonly object _o = new();

    [Benchmark]
    public int Test()
    {
        M(_o);
        return 42;
    }

    private static void M(object o)
    {
        Monitor.Enter(o);
        try
        {
        }
        finally
        {
            Monitor.Exit(o);
        }
    }
}

does not get inlined on .NET 9:

; .NET 9
; Tests.Test()
       push      rax
       mov       rdi,[rdi+8]
       call      qword ptr [78F199864EE8]; Tests.M(System.Object)
       mov       eax,2A
       add       rsp,8
       ret
; Total bytes of code 21

But with a plethora of PRs, in particular dotnet/runtime#112968, dotnet/runtime#113023, dotnet/runtime#113497, and dotnet/runtime#112998, methods containing try/finally are no longer blocked from inlining (try/catch regions are still a challenge). For the same benchmark on .NET 10, we now get this assembly:

; .NET 10
; Tests.Test()
       push      rbp
       push      rbx
       push      rax
       lea       rbp,[rsp+10]
       mov       rbx,[rdi+8]
       test      rbx,rbx
       je        short M00_L03
       mov       rdi,rbx
       call      00007920A0EE65E0
       test      eax,eax
       je        short M00_L02
M00_L00:
       mov       rdi,rbx
       call      00007920A0EE6D50
       test      eax,eax
       jne       short M00_L04
M00_L01:
       mov       eax,2A
       add       rsp,8
       pop       rbx
       pop       rbp
       ret
M00_L02:
       mov       rdi,rbx
       call      qword ptr [79202393C1F8]
       jmp       short M00_L00
M00_L03:
       xor       edi,edi
       call      qword ptr [79202393C1C8]
       int       3
M00_L04:
       mov       edi,eax
       mov       rsi,rbx
       call      qword ptr [79202393C1E0]
       jmp       short M00_L01
; Total bytes of code 86

The details of the assembly don’t matter, other than it’s a whole lot more than was there before, because we’re now looking in large part at the implementation of M. In addition to methods with try/finally now being inlineable, other improvements have also been made around exception handling. For example, dotnet/runtime#110273 and dotnet/runtime#110464 enable the removal of try/catch and try/fault blocks if it can prove the try block can’t possibly throw. Consider this:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int Test(int i)
    {
        try
        {
            i++;
        }
        catch
        {
            Console.WriteLine("Exception caught");
        }

        return i;
    }
}

There’s nothing the try block here can do that will result in an exception being thrown (assuming the developer hasn’t enabled checked arithmetic, in which case it could possibly throw an OverflowException), yet on .NET 9 we get this assembly:

; .NET 9
; Tests.Test(Int32)
       push      rbp
       sub       rsp,10
       lea       rbp,[rsp+10]
       mov       [rbp-10],rsp
       mov       [rbp-4],esi
       mov       eax,[rbp-4]
       inc       eax
       mov       [rbp-4],eax
M00_L00:
       mov       eax,[rbp-4]
       add       rsp,10
       pop       rbp
       ret
       push      rbp
       sub       rsp,10
       mov       rbp,[rdi]
       mov       [rsp],rbp
       lea       rbp,[rbp+10]
       mov       rdi,784B08950018
       call      qword ptr [784B0DE44EE8]
       lea       rax,[M00_L00]
       add       rsp,10
       pop       rbp
       ret
; Total bytes of code 79

Now on .NET 10, the JIT is able to elide the catch and remove all ceremony related to the try because it can see that ceremony is pointless overhead.

; .NET 10
; Tests.Test(Int32)
       lea       eax,[rsi+1]
       ret
; Total bytes of code 4

That’s true even when the contents of the try call into other methods that are then inlined, exposing their contents to the JIT’s analysis.
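
For example, a hypothetical variant of the earlier benchmark like the following (the member names here are mine, added to the same Tests class) should still collapse down, assuming AddOne gets inlined, because once its body is visible the JIT can see the try still can't throw:

[Benchmark]
[Arguments(42)]
public int TestWithCall(int i)
{
    try
    {
        i = AddOne(i); // once inlined, the JIT can see this can't throw
    }
    catch
    {
        Console.WriteLine("Exception caught");
    }

    return i;
}

private static int AddOne(int x) => x + 1;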

(As an aside, the JIT was already able to remove try/finally when the finally was empty, but dotnet/runtime#108003 catches even more cases by checking again for empty finallys after most other optimizations have run, in case those optimizations revealed additional empty blocks.)
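
As an illustration of the kind of case that catches (my own example, not one from the PR), consider a using over a disposable whose Dispose is a no-op; only after Dispose has been inlined does the finally turn out to be empty:

using System;

struct NoOpDisposable : IDisposable
{
    public void Dispose() { } // inlines to nothing
}

static class EmptyFinallySketch
{
    public static void Use()
    {
        // The using statement produces a try/finally that calls Dispose. Once
        // Dispose has been inlined away, the finally is empty and the whole
        // try/finally region can be removed.
        using (new NoOpDisposable())
        {
            DoWork();
        }
    }

    static void DoWork() { }
}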

Another example is “GVM”. Previously, any method that called a GVM, or generic virtual method (a virtual method with a generic type parameter), would be blocked from being inlined.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Base _base = new();

    [Benchmark]
    public int Test()
    {
        M();
        return 42;
    }

    private void M() => _base.M<object>();
}

class Base
{
    public virtual void M<T>() { }
}

On .NET 9, the above results in this assembly:

; .NET 9
; Tests.Test()
       push      rax
       call      qword ptr [728ED5664FD8]; Tests.M()
       mov       eax,2A
       add       rsp,8
       ret
; Total bytes of code 17

Now on .NET 10, with dotnet/runtime#116773, M can now be inlined.

; .NET 10
; Tests.Test()
       push      rbp
       push      rbx
       push      rax
       lea       rbp,[rsp+10]
       mov       rbx,[rdi+8]
       mov       rdi,rbx
       mov       rsi,offset MT_Base
       mov       rdx,78034C95D2A0
       call      System.Runtime.CompilerServices.VirtualDispatchHelpers.VirtualFunctionPointer(System.Object, IntPtr, IntPtr)
       mov       rdi,rbx
       call      rax
       mov       eax,2A
       add       rsp,8
       pop       rbx
       pop       rbp
       ret
; Total bytes of code 57

Another area of investment with inlining is to do with the heuristics around when methods should be inlined. Just inlining everything would be bad; inlining copies code, which results in more code, which can have significant negative repercussions. For example, inlining’s increased code size puts more pressure on caches. Processors have an instruction cache, a small amount of super fast memory in a CPU that stores recently used instructions, making them really fast to access again the next time they’re needed (such as the next iteration through a loop, or the next time that same function is called). Consider a method M, and 100 call sites to M that are all being accessed. If all of those share the same instructions for M, because the 100 call sites are all actually calling M, the instruction cache will only need to load M‘s instructions once. If all of those 100 call sites each have their own copy of M‘s instructions, then all 100 copies will separately be loaded through the cache, fighting with each other and other instructions for residence. The less likely it is that instructions are in the cache, the more likely it is that the CPU will stall waiting for the instructions to be loaded from memory.

For this reason, the JIT needs to be careful what it inlines. It tries hard to avoid inlining anything that won’t benefit (e.g. a larger method whose instructions won’t be materially influenced by the caller’s context) while also trying hard to inline anything that will materially benefit (e.g. small functions where the code required to call the function is similar in size to the contents of the function, functions with instructions that could be materially impacted by information from the call site, etc.). As part of these heuristics, the JIT has the notion of “boosts,” where observations it makes about things methods do boost the chances of that method being inlined. dotnet/runtime#114806 gives a boost to methods that appear to be returning new arrays of a small, fixed length; if those arrays can instead be allocated in the caller’s frame, the JIT might then be able to discover they don’t escape and enable them to be stack allocated. dotnet/runtime#110596 similarly looks for boxing, as the caller could possibly instead avoid the box entirely.
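
As a sketch of the array-returning pattern being boosted (my own illustration; the names are made up), think of a small helper like this, where inlining GetDefaults into its caller lets the JIT see the array never leaves the caller's frame:

static class InlineBoostSketch
{
    // A callee returning a new, small, fixed-length array: the kind of method
    // the boost makes more likely to be inlined.
    static int[] GetDefaults() => new int[] { 1, 2, 3 };

    public static int SumDefaults()
    {
        int sum = 0;

        // With GetDefaults inlined, the array is allocated in this frame, never
        // escapes, and becomes a candidate for stack allocation.
        foreach (int value in GetDefaults())
        {
            sum += value;
        }

        return sum;
    }
}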

For the same purpose (and also just to minimize time spent performing compilation), the JIT also maintains a budget for how much it allows to be inlined into a method compilation… once it hits that budget, it might stop inlining anything. The budgeting scheme works well overall; however, in certain circumstances it can run out of budget at very inopportune times, for example after doing a lot of inlining at top-level call sites and then having nothing left by the time it gets to small methods that are critically important to inline for good performance. To help mitigate these scenarios, dotnet/runtime#114191 and dotnet/runtime#118641 more than double the JIT’s default inlining budget.

The JIT also pays a lot of attention to the number of local variables (e.g. parameters/locals explicitly in the IL, JIT-created temporary locals, promoted struct fields, etc.) it tracks. To avoid creating too many, the JIT would stop inlining once it was already tracking 512. But as other changes have made inlining more aggressive, this (strangely hardcoded) limit gets hit more often, leaving very valuable inlinees out in the cold. dotnet/runtime#118515 removed this fixed limit and instead ties it to a large percentage of the number of locals the JIT is allowed to track (by default, this ends up almost doubling the limit used by the inliner).

Constant Folding

Constant folding is a compiler’s ability to perform operations, typically math, at compile-time rather than at run-time: given multiple constants and an expressed relationship between them, the compiler can “fold” those constants together into a new constant. So, if you have the C# code int M(int i) => i + 2 * 3;, the C# compiler does constant folding and emits that into your compilation as if you’d written int M(int i) => i + 6;. The JIT can and does also do constant folding, which is valuable especially when it’s based on information not available to the C# compiler. For example, the JIT can treat static readonly fields or IntPtr.Size or Vector128<T>.Count as constants. And the JIT can do folding across inlines. For example, if you have:

int M1(int i) => i + M2(2 * 3);
int M2(int j) => j * Environment.ProcessorCount;

the C# compiler will only be able to fold the 2 * 3, and will emit the equivalent of:

int M1(int i) => i + M2(6);
int M2(int j) => j * Environment.ProcessorCount;

but when compiling M1, the JIT can inline M2 and treat ProcessorCount as a constant (on my machine it’s 16), and produce the following assembly code for M1:

// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
public partial class Tests
{
    [Benchmark]
    [Arguments(42)]
    public int M1(int i) => i + M2(6);

    private int M2(int j) => j * Environment.ProcessorCount;
}

; .NET 9
; Tests.M1(Int32)
       lea       eax,[rsi+60]
       ret
; Total bytes of code 4

That’s as if the code for M1 had been public int M1(int i) => i + 96; (the displayed assembly renders hexadecimal, so the 60 is hexadecimal 0x60 and thus decimal 96).

Or consider:

string M() => GetString() ?? throw new Exception();

static string GetString() => "test";

The JIT will be able to inline GetString, at which point it can see that the result is non-null and can fold away the check for the null constant, at which point it can also dead-code eliminate the throw. Constant folding is useful on its own in avoiding unnecessary work, but it also often unlocks other optimizations, like dead-code elimination and bounds-check elimination. The JIT is already quite good at finding constant folding opportunities, and gets better in .NET 10. Consider this benchmark:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "s")]
public partial class Tests
{
    [Benchmark]
    [Arguments("test")]
    public ReadOnlySpan<char> Test(string s)
    {
        s ??= "";
        return s.AsSpan();
    }
}

Here’s the assembly that gets produced for .NET 9:

; .NET 9
; Tests.Test(System.String)
       push      rbp
       mov       rbp,rsp
       mov       rax,75B5D6200008
       test      rsi,rsi
       cmove     rsi,rax
       test      rsi,rsi
       jne       short M00_L01
       xor       eax,eax
       xor       edx,edx
M00_L00:
       pop       rbp
       ret
M00_L01:
       lea       rax,[rsi+0C]
       mov       edx,[rsi+8]
       jmp       short M00_L00
; Total bytes of code 41

Of particular note are those two test rsi,rsi instructions, which are null checks. The assembly starts by loading a value into rax; that value is the address of the "" string literal. It then uses test rsi,rsi to check whether the s parameter, which was passed into this instance method in the rsi register, is null. If it is null, the cmove rsi,rax instruction sets it to the address of the "" literal. And then… it does test rsi,rsi again? That second test is the null check at the beginning of AsSpan, which looks like this:

public static ReadOnlySpan<char> AsSpan(this string? text)
{
    if (text is null) return default;
    return new ReadOnlySpan<char>(ref text.GetRawStringData(), text.Length);
}

Now with dotnet/runtime#111985, that second null check, along with others, can be folded, resulting in this:

; .NET 10
; Tests.Test(System.String)
       mov       rax,7C01C4600008
       test      rsi,rsi
       cmove     rsi,rax
       lea       rax,[rsi+0C]
       mov       edx,[rsi+8]
       ret
; Total bytes of code 25

Similar impact comes from dotnet/runtime#108420, which is also able to fold a different class of null checks.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "condition")]
public partial class Tests
{
    [Benchmark]
    [Arguments(true)]
    public bool Test(bool condition)
    {
        string tmp = condition ? GetString1() : GetString2();
        return tmp is not null;
    }

    private static string GetString1() => "Hello";
    private static string GetString2() => "World";
}

In this benchmark, we can see that neither GetString1 nor GetString2 return null, and thus the is not null check shouldn’t be necessary. The JIT in .NET 9 couldn’t see that, but its improved .NET 10 self can.

; .NET 9
; Tests.Test(Boolean)
       mov       rax,7407F000A018
       mov       rcx,7407F000A050
       test      sil,sil
       cmove     rax,rcx
       test      rax,rax
       setne     al
       movzx     eax,al
       ret
; Total bytes of code 37

; .NET 10
; Tests.Test(Boolean)
       mov       eax,1
       ret
; Total bytes of code 6

Constant folding also applies to SIMD (Single Instruction, Multiple Data) instructions, which enable processing multiple pieces of data at once rather than only one element at a time. dotnet/runtime#117099 and dotnet/runtime#117572 both enable more SIMD comparison operations to participate in folding.
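
As a sketch of the kind of expression this targets (my own example, not taken from those PRs), a vector comparison whose operands are both constants can in principle be evaluated entirely at JIT time:

using System.Runtime.Intrinsics;

static class SimdFoldingSketch
{
    public static bool AllPositive()
    {
        Vector128<int> values = Vector128.Create(1, 2, 3, 4); // constant vector
        Vector128<int> zero = Vector128<int>.Zero;            // constant vector

        // With SIMD comparisons participating in constant folding, a JIT that
        // sees both operands as constants can reduce this to "return true".
        return Vector128.GreaterThanAll(values, zero);
    }
}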

Code Layout

When the JIT compiler generates assembly from the IL emitted by the C# compiler, it organizes that code into “basic blocks,” a sequence of instructions with one entry point and one exit point, no jumps inside, no branches out except at the end. These blocks can then be moved around as a unit, and the order in which these blocks are placed in memory is referred to as “code layout” or “basic block layout.” This ordering can have a significant performance impact because modern CPUs rely heavily on an instruction cache and on branch prediction to keep things moving fast. If frequently executed (“hot”) blocks are close together and follow a common execution path, the CPU can execute them with fewer cache misses and fewer mispredicted jumps. If the layout is poor, where the hot code is split into pieces far apart from each other, or where rarely executed (“cold”) code sits in between, the CPU can spend more time jumping around and refilling caches than doing actual work. Consider a tight loop executed millions of times. A good layout keeps the loop entry, body, and backward edge (the jump back to the beginning of the body to do the next iteration) right next to each other, letting the CPU fetch them straight from the cache. In a bad layout, that loop might be interwoven with unrelated cold blocks (say, a catch block for a try in the loop), forcing the CPU to load instructions from different places and disrupting the flow. Similarly, for an if block, the likely path should generally be the next block so no jump is required, with the unlikely branch a short jump away, as that better aligns with the sensibilities of branch predictors. Code layout heuristics control how that happens, and as a result, how efficiently the resulting code executes.
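
To put that in source terms (my own illustration), consider a loop whose error path is cold: good layout keeps the hot body contiguous so it falls straight through without jumps, while the rarely-taken throw path sits out of the way:

using System;

static class LayoutSketch
{
    public static long SumPositive(int[] values)
    {
        long sum = 0;

        for (int i = 0; i < values.Length; i++)
        {
            int value = values[i];

            if (value < 0)
            {
                ThrowNegative(i); // rarely taken: a "cold" block
            }

            sum += value; // the "hot" path ideally falls straight through
        }

        return sum;
    }

    static void ThrowNegative(int index) =>
        throw new ArgumentOutOfRangeException(nameof(index), $"values[{index}] is negative");
}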

When determining the starting layout of the blocks (before additional optimizations are done for the layout), dotnet/runtime#108903 employs a “loop-aware reverse post-order” traversal. A reverse post-order traversal is an algorithm for visiting the nodes in a control flow graph such that each block appears after its predecessors. The “loop aware” part means the traversal recognizes loops as units, effectively creating a block around the whole loop, and tries to keep the whole loop together as the layout algorithm moves things around. The intent here is to start the larger layout optimizations from a more sensible place, reducing the amount of later reshuffling and situations where loop bodies get broken up.

In the extreme, layout is essentially the traveling salesman problem. The JIT must decide the order of basic blocks so that control transfers follow short, predictable paths and make efficient use of instruction cache and branch prediction. Just like the salesman visiting cities with minimal total travel distance, the compiler is trying to arrange blocks so that the “distance” between blocks, which might be measured in bytes or instruction fetch cost or something similar, is minimized. For any meaningfully-sized set of blocks, this is prohibitively expensive to compute optimally, as the number of possible orderings grows factorially with the number of blocks. Thus, the JIT has to rely on approximations rather than attempting an exact solution. One such approximation it employs now as of dotnet/runtime#103450 (and then tweaked further in dotnet/runtime#109741 and dotnet/runtime#109835) is a “3-opt,” which really just means that rather than considering all blocks together, it looks at only three and tries to produce an optimal ordering amongst those (there are only eight possible orderings to be checked). The JIT can choose to iterate through sets of three blocks until either it doesn’t see any more improvements or hits a self-imposed limit. Specifically when handling backward jumps, with dotnet/runtime#110277, it expands this “3-opt” to “4-opt” (four blocks).

.NET 10 also does a better job of factoring PGO data into layout. With dynamic PGO, the JIT is able to gather instrumentation data from an initial compilation and then use the results of that profiling to impact an optimized re-compilation. That data can lead to conclusions about what blocks are hot or cold, and which direction branches take, all information that’s valuable for layout optimization. However, data can sometimes be missing from these profiles, so the JIT has a “profile synthesis” algorithm that makes realistic guesses for these gaps in order to fill them in (if you’ve read or seen “Jurassic Park,” this is the JIT-equivalent to filling in gaps in the dinosaur DNA sequences with that from present-day frogs.) With dotnet/runtime#111915, that repairing of the profile data is now performed just before layout, so that layout has a more complete picture.

Let’s take a concrete example of all this. Here I’ve extracted the core function from MemoryExtensions.BinarySearch:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private int[] _values = Enumerable.Range(0, 512).ToArray();

    [Benchmark]
    public int BinarySearch()
    {
        int[] values = _values;
        return BinarySearch(ref values[0], values.Length, 256);
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static int BinarySearch<T, TComparable>(
        ref T spanStart, int length, TComparable comparable)
        where TComparable : IComparable<T>, allows ref struct
    {
        int lo = 0;
        int hi = length - 1;
        while (lo <= hi)
        {
            int i = (int)(((uint)hi + (uint)lo) >> 1);

            int c = comparable.CompareTo(Unsafe.Add(ref spanStart, i));
            if (c == 0)
            {
                return i;
            }
            else if (c > 0)
            {
                lo = i + 1;
            }
            else
            {
                hi = i - 1;
            }
        }

        return ~lo;
    }
}

And here’s the assembly we get for .NET 9 and .NET 10, diff’d from the former to the latter:

; Tests.BinarySearch[[System.Int32, System.Private.CoreLib],[System.Int32, System.Private.CoreLib]](Int32 ByRef, Int32, Int32)
       push      rbp
       mov       rbp,rsp
       xor       ecx,ecx
       dec       esi
       js        short M01_L07
+      jmp       short M01_L03
M01_L00:
-      lea       eax,[rsi+rcx]
-      shr       eax,1
-      movsxd    r8,eax
-      mov       r8d,[rdi+r8*4]
-      cmp       edx,r8d
-      jge       short M01_L03
       mov       r9d,0FFFFFFFF
M01_L01:
       test      r9d,r9d
       je        short M01_L06
       test      r9d,r9d
       jg        short M01_L05
       lea       esi,[rax-1]
M01_L02:
       cmp       ecx,esi
-      jle       short M01_L00
-      jmp       short M01_L07
+      jg        short M01_L07
M01_L03:
+      lea       eax,[rsi+rcx]
+      shr       eax,1
+      movsxd    r8,eax
+      mov       r8d,[rdi+r8*4]
       cmp       edx,r8d
-      jg        short M01_L04
-      xor       r9d,r9d
+      jl        short M01_L00
+      cmp       edx,r8d
+      jle       short M01_L04
+      mov       r9d,1
       jmp       short M01_L01
M01_L04:
-      mov       r9d,1
+      xor       r9d,r9d
       jmp       short M01_L01
M01_L05:
       lea       ecx,[rax+1]
       jmp       short M01_L02
M01_L06:
       pop       rbp
       ret
M01_L07:
       mov       eax,ecx
       not       eax
       pop       rbp
       ret
; Total bytes of code 83

We can see that the main change here is a block that’s moved (the bulk of M01_L00 moving down to M01_L03). In .NET 9, the lo <= hi “stay in the loop check” (cmp ecx,esi) is a backward conditional branch (jle short M01_L00), where every iteration of the loop except for the last jumps back to the top (M01_L00). In .NET 10, it instead does a forward branch to exit the loop only in the rarer case, otherwise falling through to the body of the loop in the common case, and then unconditionally branching back.

GC Write Barriers

The .NET garbage collector (GC) works on a generational model, organizing the managed heap according to how long objects have been alive. The newest allocations land in “generation 0” (gen0), objects that have survived at least one collection are promoted to “generation 1” (gen1), and those that have been around for longer end up in “generation 2” (gen2). This is based on the premise that most objects are temporary, and that once an object has been around for a while, it’s likely to stick around for a while longer. Splitting up the heap into generations makes it possible to collect gen0 quickly, by scanning only the gen0 heap for remaining references to those objects. The expectation is that all, or at least the vast majority, of references to a gen0 object are also in gen0. Of course, if a reference to a gen0 object snuck into gen1 or gen2, not scanning gen1 or gen2 during a gen0 collection could be, well, bad. To avoid that case, the JIT collaborates with the GC to track references from older to younger generations. Whenever there’s a reference write that could cross a generation, the JIT emits a call to a helper that tracks the information in a “card table,” and when the GC runs, it consults this table to see if it needs to scan a portion of the higher generations. That helper is referred to as a “GC write barrier.” Since a write barrier is potentially employed on every reference write, it must be super fast, and in fact the runtime has several different variations of write barriers so that the JIT can pick one optimized for the given situation. Of course, the fastest write barrier is one that doesn’t need to exist at all, so as with bounds checks, the JIT also exerts energy to try to prove when write barriers aren’t needed, eliding them when it can. And it can do so even more in .NET 10.
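
Conceptually (this is just a sketch of the idea, not the runtime's actual barrier code, and the helper names are made up), a checked write barrier amounts to something like this on every reference store it guards:

static class WriteBarrierSketch
{
    public static void Assign(ref object destination, object value)
    {
        destination = value; // the reference write itself

        // If the destination slot may live in an older generation while the value
        // is a young (ephemeral) object, record the slot's region in the card
        // table so a gen0/gen1 collection knows to scan that part of the older heap.
        if (MayBeCrossGenerational(ref destination, value))
        {
            MarkCard(ref destination);
        }
    }

    // Placeholder checks: the real barrier compares raw addresses against the
    // ephemeral generation bounds and sets a byte in the card table.
    static bool MayBeCrossGenerational(ref object destination, object value) => true;
    static void MarkCard(ref object destination) { /* set the card table entry */ }
}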

ref structs, referred to in runtime vernacular as “byref-like types,” can never live on the heap, which means any reference fields in them will similarly never live on the heap. As such, if the JIT can prove that a reference write is targeting a field of a ref struct, it can elide the write barrier. Consider this example:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private object _object = new();

    [Benchmark]
    public MyRefStruct Test() => new MyRefStruct() { Obj1 = _object, Obj2 = _object, Obj3 = _object };

    public ref struct MyRefStruct
    {
        public object Obj1;
        public object Obj2;
        public object Obj3;
    }
}

In the .NET 9 assembly, we can see three write barriers (CORINFO_HELP_CHECKED_ASSIGN_REF) corresponding to the three fields in MyRefStruct in the benchmark:

; .NET 9
; Tests.Test()
       push      r15
       push      r14
       push      rbx
       mov       rbx,rsi
       mov       r15,[rdi+8]
       mov       rsi,r15
       mov       r14,r15
       mov       rdi,rbx
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+8]
       mov       rsi,r14
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+10]
       mov       rsi,r15
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       mov       rax,rbx
       pop       rbx
       pop       r14
       pop       r15
       ret
; Total bytes of code 59

With dotnet/runtime#111576 and dotnet/runtime#111733 in .NET 10, all of those write barriers are elided:

; .NET 10
; Tests.Test()
       mov       rax,[rdi+8]
       mov       rcx,rax
       mov       rdx,rax
       mov       [rsi],rcx
       mov       [rsi+8],rdx
       mov       [rsi+10],rax
       mov       rax,rsi
       ret
; Total bytes of code 25

Much more impactful, however, are dotnet/runtime#112060 and dotnet/runtime#112227, which have to do with “return buffers.” When a .NET method is typed to return a value, the runtime has to decide how that value gets from the callee back to the caller. For small types, like integers, floating-point numbers, pointers, or object references, the answer is simple: the value can be passed back via one or more CPU registers reserved for return values, making the operation essentially free. But not all values fit neatly into registers. Larger value types, such as structs with multiple fields, require a different strategy. In these cases, the caller allocates a “return buffer,” a block of memory, typically in the caller’s stack frame, and the caller passes a pointer to that buffer as a hidden argument to the method. The method then writes the return value directly into that buffer in order to provide the caller with the data. When it comes to write barriers, the challenge here is that there historically hasn’t been a requirement that the return buffer be on the stack; it’s technically feasible it could have been allocated on the heap, even if it rarely or never is. And since the callee doesn’t know where the buffer lives, any object reference writes needed to be tracked with GC write barriers. We can see that with a simple benchmark:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _firstName = "Jane", _lastName = "Smith", _address = "123 Main St", _city = "Anytown";

    [Benchmark]
    public Person GetPerson() => new(_firstName, _lastName, _address, _city);

    public record struct Person(string FirstName, string LastName, string Address, string City);
}

On .NET 9, each field of the returned value type is incurring a CORINFO_HELP_CHECKED_ASSIGN_REF write barrier:

; .NET 9
; Tests.GetPerson()
       push      r15
       push      r14
       push      r13
       push      rbx
       mov       rbx,rsi
       mov       rsi,[rdi+8]
       mov       r15,[rdi+10]
       mov       r14,[rdi+18]
       mov       r13,[rdi+20]
       mov       rdi,rbx
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+8]
       mov       rsi,r15
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+10]
       mov       rsi,r14
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       lea       rdi,[rbx+18]
       mov       rsi,r13
       call      CORINFO_HELP_CHECKED_ASSIGN_REF
       mov       rax,rbx
       pop       rbx
       pop       r13
       pop       r14
       pop       r15
       ret
; Total bytes of code 81

Now in .NET 10, the calling convention has been updated to require that the return buffer live on the stack (if the caller wants the data somewhere else, it’s responsible for subsequently doing that copy). And because the return buffer is now guaranteed to be on the stack, the JIT can elide all GC write barriers as part of returning values.

; .NET 10
; Tests.GetPerson()
       mov       rax,[rdi+8]
       mov       rcx,[rdi+10]
       mov       rdx,[rdi+18]
       mov       rdi,[rdi+20]
       mov       [rsi],rax
       mov       [rsi+8],rcx
       mov       [rsi+10],rdx
       mov       [rsi+18],rdi
       mov       rax,rsi
       ret
; Total bytes of code 35

dotnet/runtime#111636 from @a74nh is also interesting from a performance perspective because, as is common in optimization, it trades off one thing for another. Prior to this change, Arm64 had one universal write barrier helper for all GC modes. This change brings Arm64 in line with x64 by routing through a WriteBarrierManager that selects among multiple JIT_WriteBarrier variants based on runtime configuration. In doing so, it makes each Arm64 write barrier a bit more expensive, by adding region checks and moving to a region-aware card marking scheme, but in exchange it lets the GC do less work: fewer cards in the card table get marked, and the GC can scan more precisely. dotnet/runtime#106191 also helps reduce the cost of write barriers on Arm64 by tightening the hot-path comparisons and eliminating some avoidable saves and restores.

Instruction Sets

.NET continues to see meaningful optimizations and improvements across all supported architectures, along with various architecture-specific improvements. Here are a handful of examples.

Arm SVE

APIs for Arm SVE were introduced in .NET 9. As noted in the Arm SVE section of last year’s post, enabling SVE is a multi-year effort, and in .NET 10, support is still considered experimental. However, the support has continued to be improved and extended, with PRs like dotnet/runtime#115775 from @snickolls-arm adding BitwiseSelect methods, dotnet/runtime#117711 from @jacob-crawley adding MaxPairwise and MinPairwise methods, and dotnet/runtime#117051 from @jonathandavies-arm adding VectorTableLookup methods.

Arm64

dotnet/runtime#111893 from @jonathandavies-arm, dotnet/runtime#111904 from @jonathandavies-arm, dotnet/runtime#111452 from @jonathandavies-arm, dotnet/runtime#112235 from @jonathandavies-arm, and dotnet/runtime#111797 from @snickolls-arm all improved .NET’s support for utilizing Arm64’s multi-operation compound instructions. For example, when implementing a compare and branch, rather than emitting a cmp against 0 followed by a beq instruction, the JIT may now emit a cbz (“Compare and Branch on Zero”) instruction.

APX

Intel’s Advanced Performance Extensions (APX) was announced in 2023 as an extension of the x86/x64 instruction set. It expands the number of general-purpose registers from 16 to 32 and adds new instructions such as conditional operations designed to reduce memory traffic, improve performance, and lower power consumption. dotnet/runtime#106557 from @Ruihan-Yin, dotnet/runtime#108796 from @Ruihan-Yin, and dotnet/runtime#113237 from @Ruihan-Yin essentially teach the JIT how to speak the new dialect of assembly code (the REX2 and expanded EVEX encodings), and dotnet/runtime#108799 from @Ruihan-Yin updates the JIT to be able to use the expanded set of registers. The most impactful new instructions in APX are around conditional compares (ccmp), a concept the JIT already supports from targeting other instruction sets, and dotnet/runtime#111072 from @anthonycanino, dotnet/runtime#112153 from @anthonycanino, and dotnet/runtime#116445 from @khushal1996 all teach the JIT how to make good use of these new instructions with APX.

AVX512

.NET 8 added broad support for AVX512, and .NET 9 significantly improved its handling and adoption throughout the core libraries. .NET 10 includes a plethora of additional related optimizations:

  • dotnet/runtime#109258 from @saucecontrol and dotnet/runtime#109267 from @saucecontrol expand the number of places the JIT is able to use EVEX embedded broadcasts, a feature that lets vector instructions read a single scalar element from memory and implicitly replicate it to all the lanes of the vector, without needing a separate broadcast or shuffle operation.
  • dotnet/runtime#108824 removes a redundant sign extension from broadcasts.
  • dotnet/runtime#116117 from @alexcovington improves the code generated for Vector.Max and Vector.Min when AVX512 is supported.
  • dotnet/runtime#109474 from @saucecontrol improves “containment” (where an instruction can be eliminated by having its behaviors fully encapsulated by another instruction) for AVX512 widening intrinsics (similar containment-related improvements were made in dotnet/runtime#110736 from @saucecontrol and dotnet/runtime#111778 from @saucecontrol).
  • dotnet/runtime#111853 from @saucecontrol improves Vector128/256/512.Dot to be better accelerated with AVX512.
  • dotnet/runtime#110195, dotnet/runtime#110307, and dotnet/runtime#117118 all improve how vector masks are handled. In AVX512, masks are special registers that can be included as part of various instructions to control which subset of vector elements should be utilized (each bit in a mask corresponds to one element in the vector). This enables operating on only part of a vector without needing extra branching or shuffling.
  • dotnet/runtime#115981 improves zeroing (where the JIT emits instructions to zero out memory, often as part of initializing a stack frame) on AVX512. After zeroing as much as it can with 64-byte instructions, it was falling back to using 16-byte instructions, when it could have used 32-byte instructions.
  • dotnet/runtime#110662 improves the code generated for ExtractMostSignificantBits (which is used by many of the searching algorithms in the core libraries) when working with short and ushort (and char, as most of those core library implementations reinterpret cast char as one of the others) by using EVEX mask support.
  • dotnet/runtime#113864 from @saucecontrol improves the code generated for ConditionalSelect when not used with mask registers.

AVX10.2

.NET 9 added support and intrinsics for the AVX10.1 instruction set. With dotnet/runtime#111209 from @khushal1996, .NET 10 adds support and intrinsics for the AVX10.2 instruction set. dotnet/runtime#112535 from @khushal1996 optimizes floating-point min/max operations with AVX10.2 instructions, while dotnet/runtime#111775 from @khushal1996 enables floating-point conversions to utilize AVX10.2.

GFNI

dotnet/runtime#109537 from @saucecontrol adds intrinsics for the GFNI (Galois Field New Instructions) instruction set, which can be used for accelerating operations over Galois fields GF(2^8). These are common in cryptography, error correction, and data encoding.

VPCLMULQDQ

VPCLMULQDQ is an x86 instruction set extension that adds vector support to the older PCLMULQDQ instruction, which performs carry-less multiplication over 64-bit integers. dotnet/runtime#109137 from @saucecontrol adds new intrinsic APIs for VPCLMULQDQ.

Miscellaneous

Many more PRs than the ones I’ve already called out have gone into the JIT this release. Here are a few more:

  • Eliminating some covariance checks. Writing into arrays of reference types can require “covariance checks.” Imagine you have a class Base and two derived types Derived1 : Base and Derived2 : Base. Since arrays in .NET are covariant, I can have a Derived1[] and cast it successfully to a Base[], but under the covers that’s still a Derived1[]. That means, for example, that any attempt to store a Derived2 into that array should fail at runtime, even if it compiles. To achieve that, the JIT needs to insert such covariance checks when writing into arrays, but just like with bounds checking and write barriers, the JIT can elide those checks when it can prove statically that they’re not necessary. Such an example is with sealed types. If the JIT sees an array of type T[] and T is known to be sealed, T[] must exactly be a T[] and not some DerivedT[], because there can’t be a DerivedT. So with a benchmark like this:
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private List<string> _list = new() { "hello" };
    
        [Benchmark]
        public void Set() => _list[0] = "world";
    }

    as long as the JIT can see that the array underlying the List<string> is a string[] (string is sealed), it shouldn’t need a covariance check. In .NET 9, we get this:

    ; .NET 9
    ; Tests.Set()
           push      rbx
           mov       rbx,[rdi+8]
           cmp       dword ptr [rbx+10],0
           je        short M00_L00
           mov       rdi,[rbx+8]
           xor       esi,esi
           mov       rdx,78914920A038
           call      System.Runtime.CompilerServices.CastHelpers.StelemRef(System.Object[], IntPtr, System.Object)
           inc       dword ptr [rbx+14]
           pop       rbx
           ret
    M00_L00:
           call      qword ptr [78D1F80558A8]
           int       3
    ; Total bytes of code 44

    Note that CastHelpers.StelemRef call… that’s the helper that performs the write with the covariance check. But now in .NET 10, thanks to dotnet/runtime#107116 (which teaches the JIT how to resolve the exact type for the field of the closed generic), we get this:

    ; .NET 10
    ; Tests.Set()
           push      rbp
           mov       rbp,rsp
           mov       rax,[rdi+8]
           cmp       dword ptr [rax+10],0
           je        short M00_L00
           mov       rcx,[rax+8]
           mov       edx,[rcx+8]
           test      rdx,rdx
           je        short M00_L01
           mov       rdx,75E2B9009A40
           mov       [rcx+10],rdx
           inc       dword ptr [rax+14]
           pop       rbp
           ret
    M00_L00:
           call      qword ptr [762368116760]
           int       3
    M00_L01:
           call      CORINFO_HELP_RNGCHKFAIL
           int       3
    ; Total bytes of code 58

    No covariance check, thank you very much.

  • More strength reduction. “Strength reduction” is a classic compiler optimization that replaces more expensive operations, like multiplications, with cheaper ones, like additions. In .NET 9, this was used to transform indexed loops that used multiplied offsets (e.g. index * elementSize) into loops that simply incremented a pointer-like offset (e.g. offset += elementSize), cutting down on arithmetic overhead and improving performance. In .NET 10, strength reduction has been extended, in particular with dotnet/runtime#110222. This enables the JIT to detect multiple loop induction variables with different step sizes and strength-reduce them by leveraging their greatest common divisor (GCD). Essentially, it creates a single primary induction variable based on the GCD of the varying step sizes, and then recovers each original induction variable by appropriately scaling. Consider this example:
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "numbers")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments("128514801826028643102849196099776734920914944609068831724328541639470403818631040")]
        public int[] Parse(string numbers)
        {
            int[] results = new int[numbers.Length];
            for (int i = 0; i < numbers.Length; i++)
            {
                results[i] = numbers[i] - '0';
            }
    
            return results;
        }
    }

    In this benchmark, we’re iterating through an input string, which is a collection of 2-byte char elements, and we’re storing the results into an array of 4-byte int elements. The core loop in the .NET 9 assembly looks like this:

    ; .NET 9
    M00_L00:
           mov       edx,ecx
           movzx     edi,word ptr [rbx+rdx*2+0C]
           add       edi,0FFFFFFD0
           mov       [rax+rdx*4+10],edi
           inc       ecx
           cmp       r15d,ecx
           jg        short M00_L00

    The movzx edi,word ptr [rbx+rdx*2+0C] is the read of numbers[i], and the mov [rax+rdx*4+10],edi is the assignment to results[i]. rdx here is i, so each iteration effectively has to do i*2 to compute the byte offset of the char at index i, and similarly i*4 to compute the byte offset of the int at index i. Now here’s the .NET 10 assembly:

    ; .NET 10
    M00_L00:
           movzx     edx,word ptr [rbx+rcx+0C]
           add       edx,0FFFFFFD0
           mov       [rax+rcx*2+10],edx
           add       rcx,2
           dec       r15d
           jne       short M00_L00

    The multiplication in the numbers[i] read is gone. Instead, it can just increment rcx by 2 on each iteration, treating that as the byte offset of the ith char, and then instead of multiplying by 4 to compute the int offset, it just multiplies by 2.

  • CSE integration with SSA. As with most compilers, the JIT employs common subexpression elimination (CSE) to find identical computations and avoid doing them repeatedly. dotnet/runtime#106637 teaches the JIT how to do so in a more consistent manner by more fully integrating CSE with its Static Single Assignment (SSA) representation. This in turn allows for more optimizations to kick in, e.g. some of the strength reduction done around loop induction variables in .NET 9 wasn’t applying as much as it should have, and now it will.
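
    As a trivial illustration of the concept itself (not this PR specifically), in a method like the following the JIT’s CSE can evaluate x * y once and reuse the result rather than multiplying twice:

    static class CseExample
    {
        // Illustrative only: a textbook common subexpression. The multiplication
        // appears twice in the source, but CSE lets the JIT compute it once.
        public static int Sum(int x, int y, int z) => (x * y) + z + (x * y);
    }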
  • return someCondition ? true : false. There are often multiple ways to represent the same thing, and it frequently happens in compilers that certain patterns will be recognized during optimization while other equivalent ones won’t; it can therefore behoove the compiler to first normalize the representations to all use the better-recognized one. There’s a really common and interesting case of this with return someCondition, where, for reasons relating to the JIT’s internal representation, the JIT is better able to optimize with the equivalent return someCondition ? true : false. dotnet/runtime#107499 normalizes to the latter. As an example of this, consider this benchmark:
    // dotnet run -c Release -f net9.0 --filter "*"
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public bool Test1(int i)
        {
            if (i > 10 && i < 20) return true;
            return false;
        }
    
        [Benchmark]
        [Arguments(42)]
        public bool Test2(int i) => i > 10 && i < 20;
    }

    On .NET 9, that results in this assembly code for Test1:

    ; .NET 9
    ; Tests.Test1(Int32)
           sub       esi,0B
           cmp       esi,8
           setbe     al
           movzx     eax,al
           ret
    ; Total bytes of code 13

    The JIT has successfully recognized that it can change the two comparisons to instead be a subtraction and a single comparison, as if the i > 10 && i < 20 were instead written as (uint)(i - 11) <= 8. But for Test2, .NET 9 produces this:

    ; .NET 9
    ; Tests.Test2(Int32)
           xor       eax,eax
           cmp       esi,14
           setl      cl
           movzx     ecx,cl
           cmp       esi,0A
           cmovg     eax,ecx
           ret
    ; Total bytes of code 18

    Because of how the return condition is being represented internally by the JIT, it’s missing this particular optimization, and the assembly code more directly reflects what was written in the C#. But in .NET 10, because of this normalization, we now get this for Test2, exactly what we got for Test1:

    ; .NET 10
    ; Tests.Test2(Int32)
           sub       esi,0B
           cmp       esi,8
           setbe     al
           movzx     eax,al
           ret
    ; Total bytes of code 13
  • Bit tests. The C# compiler has a lot of flexibility in how it emits switch and is expressions. Consider a case like this: c is ' ' or '\t' or '\r' or '\n'. It could emit that as the equivalent of a series of cascading if/else branches, as an IL switch instruction, as a bit test, or as combinations of those. The C# compiler, though, doesn’t have all of the information the JIT has, such as whether the process is 32-bit or 64-bit, or knowledge of what instructions cost on given hardware. With dotnet/runtime#107831, the JIT will now recognize more such expressions that can be implemented as a bit test and generate the code accordingly.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Runtime.CompilerServices;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "c")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments('s')]
        public void Test(char c)
        {
            if (c is ' ' or '\t' or '\r' or '\n' or '.')
            {
                Handle(c);
            }
    
            [MethodImpl(MethodImplOptions.NoInlining)]
            static void Handle(char c) { }
        }
    }
    Method Runtime Mean Ratio Code Size
    Test .NET 9.0 0.4537 ns 1.02 58 B
    Test .NET 10.0 0.1304 ns 0.29 44 B

    It’s also common to see bit tests implemented in C# against shifted values; a constant mask is created with bits set at various indices, and then an incoming value to check is tested by shifting a bit to the corresponding index and seeing whether it aligns with one in the mask. For example, here is how Regex tests to see whether a provided UnicodeCategory is one of those that compose the “word” class (`\w`):

    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Globalization;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "uc")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(UnicodeCategory.DashPunctuation)]
        public bool Test(UnicodeCategory uc) => (WordCategoriesMask & (1 << (int)uc)) != 0;
    
        private const int WordCategoriesMask =
            1 << (int)UnicodeCategory.UppercaseLetter |
            1 << (int)UnicodeCategory.LowercaseLetter |
            1 << (int)UnicodeCategory.TitlecaseLetter |
            1 << (int)UnicodeCategory.ModifierLetter |
            1 << (int)UnicodeCategory.OtherLetter |
            1 << (int)UnicodeCategory.NonSpacingMark |
            1 << (int)UnicodeCategory.DecimalDigitNumber |
            1 << (int)UnicodeCategory.ConnectorPunctuation;
    }

    Previously, the JIT would end up emitting that similar to how it’s written: a shift followed by a test. Now with dotnet/runtime#111979 from @varelen, it can emit it as a bit test.

    ; .NET 9
    ; Tests.Test(System.Globalization.UnicodeCategory)
           mov       eax,1
           shlx      eax,eax,esi
           test      eax,4013F
           setne     al
           movzx     eax,al
           ret
    ; Total bytes of code 22
    
    ; .NET 10
    ; Tests.Test(System.Globalization.UnicodeCategory)
           mov       eax,4013F
           bt        eax,esi
           setb      al
           movzx     eax,al
           ret
    ; Total bytes of code 15
  • Redundant sign extensions. With dotnet/runtime#111305, the JIT can now remove more redundant sign extensions (when you take a smaller size type, e.g. int, and convert it to a larger size type, e.g. long, while preserving the value’s sign). For example, with a test like this public ulong Test(int x) => (uint)x < 10 ? (ulong)x << 60 : 0, the JIT can now emit a mov (just copy the bits) instead of movsxd (move with sign extension), since it knows from the first comparison that the shift will only ever be performed with a non-negative x.
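
    As a quick way to see it, here’s a benchmark in the same shape as the others (a sketch; the interesting difference shows up in the disassembly as a mov in place of a movsxd):

    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(5)]
        public ulong Test(int x) => (uint)x < 10 ? (ulong)x << 60 : 0;
    }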
  • Better division with BMI2. If the BMI2 instruction set is available, with dotnet/runtime#116198 from @Daniel-Svensson the JIT can now use the mulx instruction (“Unsigned Multiply Without Affecting Flags”) to implement integer division, e.g.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "value")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(12345)]
        public ulong Div10(ulong value) => value / 10;
    }

    results in:

    ; .NET 9
    ; Tests.Div10(UInt64)
           mov       rdx,0CCCCCCCCCCCCCCCD
           mov       rax,rsi
           mul       rdx
           mov       rax,rdx
           shr       rax,3
           ret
    ; Total bytes of code 24
    
    ; .NET 10
    ; Tests.Div10(UInt64)
           mov       rdx,0CCCCCCCCCCCCCCCD
           mulx      rax,rax,rsi
           shr       rax,3
           ret
    ; Total bytes of code 20
  • Better range comparison. When comparing a ulong expression against uint.MaxValue, rather than being emitted as a comparison, with dotnet/runtime#113037 from @shunkino it can be handled more efficiently as a shift.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(12345)]
        public bool Test(ulong x) => x <= uint.MaxValue;
    }

    resulting in:

    ; .NET 9
    ; Tests.Test(UInt64)
           mov       eax,0FFFFFFFF
           cmp       rsi,rax
           setbe     al
           movzx     eax,al
           ret
    ; Total bytes of code 15
    
    ; .NET 10
    ; Tests.Test(UInt64)
           shr       rsi,20
           sete      al
           movzx     eax,al
           ret
    ; Total bytes of code 11
  • Better dead branch elimination. The JIT’s branch optimizer is already able to use implications from comparisons to statically determine the outcome of other branches. For example, if I have this:
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public void Test(int x)
        {
            if (x > 100)
            {
                if (x > 10)
                {
                    Console.WriteLine();
                }
            }
        }
    }

    the JIT generates this on .NET 9:

    ; .NET 9
    ; Tests.Test(Int32)
           cmp       esi,64
           jg        short M00_L00
           ret
    M00_L00:
           jmp       qword ptr [7766D3E64FA8]
    ; Total bytes of code 12

    Note there’s only a single comparison against 100 (0x64), with the comparison against 10 elided (as it’s implied by the previous comparison). However, there are many variations to this, and not all of them were handled equally well. Consider this:

    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "x")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public void Test(int x)
        {
            if (x < 16)
                return;
    
            if (x < 8)
                Console.WriteLine();
        }
    }

    Here, the Console.WriteLine ideally wouldn’t appear in the emitted assembly at all, as it’s never reachable. Alas, on .NET 9, we get this (the jmp instruction here is a tail call to WriteLine):

    ; .NET 9
    ; Tests.Test(Int32)
           push      rbp
           mov       rbp,rsp
           cmp       esi,10
           jl        short M00_L00
           cmp       esi,8
           jge       short M00_L00
           pop       rbp
           jmp       qword ptr [731ED8054FA8]
    M00_L00:
           pop       rbp
           ret
    ; Total bytes of code 23

    With dotnet/runtime#111766 on .NET 10, it successfully recognizes that by the time it gets to the x < 8, that condition will always be false, and it can be eliminated. And once it’s eliminated, the initial branch is also unnecessary. So the whole method reduces to this:

    ; .NET 10
    ; Tests.Test(Int32)
           ret
    ; Total bytes of code 1
  • Better floating-point conversion. dotnet/runtime#114410 from @saucecontrol, dotnet/runtime#114597 from @saucecontrol, and dotnet/runtime#111595 from @saucecontrol all speed up conversions between floating-point and integer types, such as by using vcvtusi2ss when AVX512 is available, or, when it isn’t, avoiding the intermediate double conversion.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "i")]
    public partial class Tests
    {
        [Benchmark]
        [Arguments(42)]
        public float Compute(uint i) => i;
    }
    ; .NET 9
    ; Tests.Compute(UInt32)
           mov       eax,esi
           vxorps    xmm0,xmm0,xmm0
           vcvtsi2sd xmm0,xmm0,rax
           vcvtsd2ss xmm0,xmm0,xmm0
           ret
    ; Total bytes of code 16
    
    ; .NET 10
    ; Tests.Compute(UInt32)
           vxorps    xmm0,xmm0,xmm0
           vcvtusi2ss xmm0,xmm0,esi
           ret
    ; Total bytes of code 11
  • Unrolling. When using CopyTo (or other “memmove”-based operations) with a constant source, dotnet/runtime#108576 reduces costs by avoiding a redundant memory load. dotnet/runtime#109036 unblocks more unrolling on Arm64 for Equals/StartsWith/EndsWith. And dotnet/runtime#110893 enables unrolling non-zero fills (unrolling already happened for zero fills).
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [DisassemblyDiagnoser]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private char[] _chars = new char[100];
    
        [Benchmark]
        public void Fill() => _chars.AsSpan(0, 16).Fill('x');
    }
    ; .NET 9
    ; Tests.Fill()
           push      rbp
           mov       rbp,rsp
           mov       rdi,[rdi+8]
           test      rdi,rdi
           je        short M00_L00
           cmp       dword ptr [rdi+8],10
           jb        short M00_L00
           add       rdi,10
           mov       esi,10
           mov       edx,78
           call      qword ptr [7F3093FBF1F8]; System.SpanHelpers.Fill[[System.Char, System.Private.CoreLib]](Char ByRef, UIntPtr, Char)
           nop
           pop       rbp
           ret
    M00_L00:
           call      qword ptr [7F3093787810]
           int       3
    ; Total bytes of code 49
    
    ; .NET 10
    ; Tests.Fill()
           push      rbp
           mov       rbp,rsp
           mov       rax,[rdi+8]
           test      rax,rax
           je        short M00_L00
           cmp       dword ptr [rax+8],10
           jl        short M00_L00
           add       rax,10
           vbroadcastss ymm0,dword ptr [78EFC70C9340]
           vmovups   [rax],ymm0
           vzeroupper
           pop       rbp
           ret
    M00_L00:
           call      qword ptr [78EFC7447B88]
           int       3
    ; Total bytes of code 48

    Note the call to SpanHelpers.Fill in the .NET 9 assembly and the absence of it in the .NET 10 assembly.

Native AOT

Native AOT is the ability for a .NET application to be compiled directly to assembly code at build-time. The JIT is still used for code generation, but only at build time; the JIT isn’t part of the shipping app at all, and no code generation is performed at run-time. As such, most of the optimizations to the JIT already discussed, as well as optimizations throughout the rest of this post, apply to Native AOT equally. Native AOT presents some unique opportunities and challenges, however.

One super power of the Native AOT tool chain is the ability to interpret (some) code at build-time and use the results of that execution rather than performing the operation at run-time. This is particularly relevant for static constructors, where the constructor’s code can be interpreted to initialize various static readonly fields, and then the contents of those fields can be persisted into the generated assembly; at run-time, the contents need only be rehydrated from the assembly rather than being recomputed. This also potentially helps to make more code redundant and removable, if for example the static constructor and anything it (and only it) referenced were no longer needed. Of course, it would be dangerous and problematic if any arbitrary code could be run during build, so instead there’s a very filtered allow list and specialized support for the most common and appropriate constructs. dotnet/runtime#107575 augments this “preinitialization” capability to support spans sourced from arrays, such that using methods like .AsSpan() doesn’t cause preinitialization to bail out. dotnet/runtime#114374 also improved preinitialization, removing restrictions around accessing static fields of other types, calling methods on other types that have their own static constructors, and dereferencing pointers.
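
To make the span scenario concrete, here’s an illustrative sketch (whether any particular static constructor gets preinitialized depends on the toolchain’s analysis, so treat this as an example of the pattern rather than a guarantee):

internal static class Lookup
{
    // Both fields are initialized by this type's static constructor. With the
    // .NET 10 improvements, span operations over an array (like the AsSpan/IndexOf
    // below) no longer force preinitialization to bail out, so the computed values
    // can be baked into the image at build time instead of computed at startup.
    private static readonly byte[] s_table = { 1, 2, 3, 5, 8, 13, 21, 34 };
    private static readonly int s_indexOf21 = s_table.AsSpan().IndexOf((byte)21);

    public static int IndexOf21 => s_indexOf21;
}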

Conversely, Native AOT has its own challenges, specifically that size really matters and is harder to control. With a JIT available at run-time, code generation for only exactly what’s needed can be deferred until run-time. With Native AOT, all assembly code generation needs to be done at build-time, which means the Native AOT tool chain needs to work hard to determine the least amount of code it needs to emit to support everything the app might need to do at run-time. Most of the effort on Native AOT in any given release ends up being about helping it to further decrease the size of generated code. For example:

  • dotnet/runtime#117411 enables folding bodies of generic instantiations of the same method, essentially avoiding duplication by using the same code for identical copies of the method where possible.
  • dotnet/runtime#117080 similarly helps improve the existing method body deduplication logic.
  • dotnet/runtime#117345 from @huoyaoyuan tweaks a bit of code in reflection that would previously artificially force code to be preserved for all enumerators of all generic instantiations of every collection type.
  • dotnet/runtime#112782 adds the same distinction that already existed for MethodTables for non-generic methods (“is this method table visible to user code or not”) to generic methods, allowing more metadata for the non-user visible ones to be optimized away.
  • dotnet/runtime#118718 and dotnet/runtime#118832 enable size reductions related to boxed enums. The former tweaks a few methods in Thread, GC, and CultureInfo to avoid boxing some enums, which means the code for those needn’t be generated. The latter tweaks the implementation of RuntimeHelpers.CreateSpan, which is used by the C# compiler as part of creating spans with constructs like collection expressions. CreateSpan is a generic method, and the Native AOT toolchain’s whole-program analysis would end up treating the generic type parameter as being “reflected on,” meaning the compiler had to assume any type parameter would be used with reflection and thus had to preserve relevant metadata. When used with enums, it would need to ensure support for boxed enums was kept around, and System.Console has such a use with an enum. That in turn meant that a simple “hello, world” console app couldn’t trim away that boxed enum reflection support; now it can.

VM

The .NET runtime offers a wide range of services to managed applications, most obviously the garbage collector and the JIT compiler, but it also encompasses a host of other capabilities: assembly and type loading, exception handling, virtual method dispatch, interoperability support, stub generation, and so on. Collectively, all of these features are referred to as being a part of the .NET Virtual Machine (VM).

dotnet/runtime#108167 and dotnet/runtime#109135 rewrote various runtime helpers from C in the runtime to C# in System.Private.CoreLib, including the “unboxing” helpers, which are used to unbox objects to value types in niche scenarios. This rewrite avoids overheads associated with transitioning between native and managed code and also gives the JIT an opportunity to optimize in the context of callers, such as with inlining. Note that these unboxing helpers are only used in obscure situations, so it requires a bit of a complicated benchmark to demonstrate the impact:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[DisassemblyDiagnoser(0)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private object[] _objects = [new GenericStruct<MyStruct, object>()];

    [Benchmark]
    public void Unbox() => Unbox<GenericStruct<MyStruct, object>>(_objects[0]);

    private void Unbox<T>(object o) where T : struct, IStaticMethod<T>
    {
        T? local = (T?)o;
        if (local.HasValue)
        {
            T localCopy = local.Value;
            T.Method(ref localCopy);
        }
    }

    public interface IStaticMethod<T>
    {
        public static abstract void Method(ref T param);
    }

    struct MyStruct : IStaticMethod<MyStruct>
    {
        public static void Method(ref MyStruct param) { }
    }

    struct GenericStruct<T, V> : IStaticMethod<GenericStruct<T, V>> where T : IStaticMethod<T>
    {
        public T Value;

        [MethodImpl(MethodImplOptions.NoInlining)]
        public static void Method(ref GenericStruct<T, V> value) => T.Method(ref value.Value);
    }
}
Method Runtime Mean Ratio Code Size
Unbox .NET 9.0 1.626 ns 1.00 148 B
Unbox .NET 10.0 1.379 ns 0.85 148 B

What it means to move the implementation from native to managed is most easily seen just by looking at the generated assembly. Other than uninteresting and non-impactful changes in which registers happen to get assigned, the only real difference between .NET 9 and .NET 10 is a single instruction:

-      call      CORINFO_HELP_UNBOX_NULLABLE
+      call      System.Runtime.CompilerServices.CastHelpers.Unbox_Nullable(Byte ByRef, System.Runtime.CompilerServices.MethodTable*, System.Object)

dotnet/runtime#115284 streamlines how the runtime sets up and tears down the little code blocks (“funclets”) used to implement catch/finally on x64. Historically, these funclets acted a lot like tiny functions, saving and restoring non-volatile CPU registers on entry and exit (a “non-volatile” register is effectively one where the caller can expect it to contain the same value after a function call as it did before the function call). This PR changes the contract so that funclets no longer need to preserve those registers themselves; instead, the runtime takes care of preserving them. That shrinks the prologs and epilogs the JIT emits for funclets, reduces instruction count and code size, and lowers the cost of entering and exiting exception handlers.

With dotnet/runtime#114462, the runtime now uses a single shared “template” for many of the small executable “stubs” it needs at runtime; stubs are tiny chunks of machine code that act as jump points, call counters, or patchable trampolines. Previously, each memory allocation for stubs would regenerate the same instructions over and over. The new approach builds one copy of the stub code in a read-only page and then maps that same physical page into every place it’s needed, while giving each allocation its own writable page for the per-stub data that changes at runtime. This lets hundreds of virtual stub pages all point to one physical code page, cutting memory use, reducing startup work, and improving instruction cache locality.

Also interesting are dotnet/runtime#117218 and dotnet/runtime#116031, which together help optimize the generation of stack traces in large, heavily multi-threaded applications when being profiled.

Threading

The ThreadPool underlies most work in most .NET apps and services. It’s a critical-path component that has to be able to deal with all manner of workloads efficiently.

dotnet/runtime#109841 implemented an opt-in feature that dotnet/runtime#112796 then enabled by default for .NET 10. The idea behind it is fairly straightforward, but to understand it, we first need to examine how the thread pool queues work items. The thread pool has multiple queues, typically one “global” queue and then one “local” queue per thread in the pool. When threads outside of the pool queue work, that work goes to the global queue, and when a thread pool thread queues work, especially a Task or work related to an await, that work item typically goes to that thread’s local queue. Then when a thread pool thread finishes whatever it was doing and goes in search of more work, it first checks its own local queue (treating its local queue as highest priority), then if that’s empty it checks the global queue, and then if that’s empty it goes and helps out the other threads in the pool by searching their queues for work to be done. This is all in an attempt to a) minimize contention on the global queue (if threads are mainly queueing and dequeuing from their own local queue, they’re not contending with each other), and b) prioritize work that’s logically part of already started work (the only way for work to get into a local queue is if that thread was processing a work item that created it). Generally, this works out well, but sometimes we get into degenerate scenarios, typically when an app does something that goes against best practices… like blocking.

Blocking a thread pool thread means that thread can’t service other work coming into the pool. If the blocking is brief, it’s generally fine, and if it’s longer, the thread pool tries to accommodate it by injecting more threads and finding a steady state at which things hum along. But a certain kind of blocking can be really problematic: “sync over async”. With “sync over async”, one thread blocks while waiting for an asynchronous operation to complete, and if that operation needs to do something on the thread pool in order to complete, you now have one thread pool thread blocked waiting for another thread pool thread to pick up a particular work item and process it. This can quickly lead to the whole pool getting into a jam… especially with the thread local queues. If a thread is blocked on an operation that depends on work items in that thread’s local queue getting processed, those work items can only be picked up once the global queue is exhausted and another thread comes along and steals them from this thread’s queue. If there’s a steady stream of incoming work into the global queue, though, that will never happen; essentially, the highest priority work has become the lowest priority work.

So, back to these PRs. The idea is fairly simple: when the thread is about to block, and in particular when it’s about to block waiting on a Task, it first dumps its entire local queue into the global queue. That way, this work which was highest priority for the blocked thread has a fairer chance of being processed by other threads, rather than it being the lowest priority work for everyone. We can try to see the impact of this with a specifically-crafted workload:

// dotnet run -c Release -f net9.0 --filter "*"
// dotnet run -c Release -f net10.0 --filter "*"

using System.Diagnostics;

int numThreads = Environment.ProcessorCount;
ThreadPool.SetMaxThreads(numThreads, 1);

ManualResetEventSlim start = new();
CountdownEvent allDone = new(numThreads);
new Thread(() =>
{
    while (true)
    {
        for (int i = 0; i < 10_000; i++)
        {
            ThreadPool.QueueUserWorkItem(_ => Thread.SpinWait(1));
        }

        Thread.Yield();
    }
}) { IsBackground = true }.Start();

for (int i = 0; i < numThreads; i++)
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        start.Wait();
        TaskCompletionSource tcs = new();

        const int LocalItemsPerThread = 4;
        var remaining = LocalItemsPerThread;
        for (int j = 0; j < LocalItemsPerThread; j++)
        {
            Task.Run(() =>
            {
                Thread.SpinWait(100);
                if (Interlocked.Decrement(ref remaining) == 0)
                {
                    tcs.SetResult();
                }
            });
        }

        tcs.Task.Wait();
        allDone.Signal();
    });
}

var sw = Stopwatch.StartNew();
start.Set();
Console.WriteLine(allDone.Wait(20_000) ?
    $"Completed: {sw.ElapsedMilliseconds}ms" :
    $"Timed out after {sw.ElapsedMilliseconds}ms");

This is:

  • creating a noise thread that tries to keep the global queue inundated with new work
  • queuing Environment.ProcessorCount work items, each of which queues four more work items to its local queue (each doing a little work) and then blocks on a Task until all four complete
  • waiting for those Environment.ProcessorCount work items to complete

When I run this on .NET 9, it hangs: there’s so much work in the global queue that no threads are able to process the sub-work items necessary to unblock the main work items:

Timed out after 20002ms

On .NET 10, it generally completes almost instantly:

Completed: 4ms

Some other tweaks were made to the pool as well:

  • dotnet/runtime#115402 reduced the amount of spin-waiting done on Arm processors, bringing it more in line with x64.
  • dotnet/runtime#112789 reduced the frequency at which the thread pool checked CPU utilization, as in some circumstances it was adding noticeable overhead, and makes the frequency configurable.
  • dotnet/runtime#108135 from @AlanLiu90 removed a bit of lock contention that could happen under load when starting new thread pool threads.

On the subject of locking, and only for developers who find themselves with a strong need to do really low-level low-lock development, dotnet/runtime#107843 from @hamarb123 adds two new methods to the Volatile class: ReadBarrier and WriteBarrier. A read barrier has “load acquire” semantics, and is sometimes referred to as a “downwards fence”: it prevents instructions from being reordered in such a way that memory accesses below/after the barrier move to above/before it. In contrast, a write barrier has “store release” semantics, and is sometimes referred to as an “upwards fence”: it prevents instructions from being reordered in such a way that memory accesses above/before the barrier move to below/after it. I find it helps to think about this with regard to a lock:

A;
lock (...)
{
    B;
}
C;

While in practice the implementation may provide stronger fences, by specification entering a lock has acquire semantics and exiting a lock has release semantics. Imagine if the instructions in the above code could be reordered like this:

A;
B;
lock (...)
{
}
C;

or like this:

A;
lock (...)
{
}
B;
C;

Both of those would be really bad. Thankfully, the barriers help us here. The acquire / read barrier semantics of entering the lock act as a downwards fence: logically, the brace that starts the lock puts downwards pressure on everything inside the lock to not move to before it. And the release / write barrier semantics of exiting the lock act as an upwards fence: the brace that ends the lock puts upwards pressure on everything inside the lock to not move to after it. Interestingly, nothing about the semantics of these barriers prevents this from happening:

lock (...)
{
    A;
    B;
    C;
}

These barriers are referred to as “half fences”; the read barrier prevents later things from moving earlier, but not the other way around, and the write barrier prevents earlier things from moving later, but not the other way around. (As it happens, though, while not required by specification, today the implementation of lock does use a full barrier on both enter and exit, so nothing before or after a lock will move into it.)
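
To make the new methods concrete, here’s the canonical publish/consume shape they enable (a sketch; most code should continue to prefer Volatile.Read/Volatile.Write, Interlocked, or higher-level synchronization primitives over standalone barriers):

using System.Threading;

class Publication
{
    private int _data;
    private bool _ready;

    public void Publish(int value)
    {
        _data = value;
        Volatile.WriteBarrier(); // store-release: the data write above can't be reordered after the flag write below
        _ready = true;
    }

    public bool TryConsume(out int value)
    {
        if (_ready)
        {
            Volatile.ReadBarrier(); // load-acquire: the data read below can't be reordered before the flag read above
            value = _data;
            return true;
        }

        value = 0;
        return false;
    }
}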

Task.WhenAll also gets a few changes in .NET 10 to improve its performance. dotnet/runtime#110536 avoids a temporary collection allocation when needing to buffer up tasks from an IEnumerable<Task>.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public Task WhenAllAlloc()
    {
        AsyncTaskMethodBuilder t = default;
        Task whenAll = Task.WhenAll(from i in Enumerable.Range(0, 2) select t.Task);
        t.SetResult();
        return whenAll;
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
WhenAllAlloc .NET 9.0 216.8 ns 1.00 496 B 1.00
WhenAllAlloc .NET 10.0 181.9 ns 0.84 408 B 0.82

And dotnet/runtime#117715 from @CuteLeon avoids the overhead of Task.WhenAll altogether when the input ends up being just a single task, in which case it simply returns that task instance.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public Task WhenAllAlloc()
    {
        AsyncTaskMethodBuilder t = default;
        Task whenAll = Task.WhenAll([t.Task]);
        t.SetResult();
        return whenAll;
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
WhenAllAlloc .NET 9.0 72.73 ns 1.00 144 B 1.00
WhenAllAlloc .NET 10.0 33.06 ns 0.45 72 B 0.50

System.Threading.Channels is one of the lesser-known but quite useful areas of threading in .NET (you can watch Yet Another “Highly Technical Talk” with Hanselman and Toub from Build 2025 to learn more about it). If you find yourself needing a queue to hand off some data between a producer and a consumer, you should likely look into Channel<T>. The library was introduced in .NET Core 3.0 as a small, robust, and fast producer/consumer queueing mechanism; it’s evolved since, such as gaining a ReadAllAsync method for consuming the contents of a channel as an IAsyncEnumerable<T> and a PeekAsync method for peeking at its contents without consuming them. The original release supported the Channel.CreateUnbounded and Channel.CreateBounded methods, and .NET 9 augmented those with Channel.CreateUnboundedPrioritized. .NET 10 continues to expand on channels, both with functional improvements (such as with dotnet/runtime#116097, which adds an unbuffered channel implementation), and with performance improvements.
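
If you haven’t used channels before, the basic shape looks something like this (a minimal sketch of the core producer/consumer pattern; nothing here is specific to .NET 10):

using System.Threading.Channels;

Channel<int> channel = Channel.CreateUnbounded<int>();

// Producer: writes items and then signals completion.
Task producer = Task.Run(async () =>
{
    for (int i = 0; i < 10; i++)
    {
        await channel.Writer.WriteAsync(i);
    }
    channel.Writer.Complete();
});

// Consumer: drains the channel until the producer completes it.
await foreach (int item in channel.Reader.ReadAllAsync())
{
    Console.WriteLine(item);
}

await producer;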

.NET 10 helps to reduce overall memory consumption of an application using channels. One of the cross-cutting features channels support is cancellation: you can cancel pretty much any interaction with a channel, which sports asynchronous methods for both producing and consuming data. When a reader or writer needs to pend, it creates (or reuses a pooled instance of) an AsyncOperation object that gets added to a queue; a later writer or reader that’s then able to satisfy a pending reader or writer dequeues one and marks it as completed. These queues were implemented with arrays, which makes it challenging to remove an entry from the middle of the queue if the associated operation gets canceled. So, rather than trying, the implementation simply left the canceled object in the queue, and then when it would eventually get dequeued, it was just thrown away and the dequeuer tried again. The theory was that, at steady state, you’d quickly dequeue any canceled operations, and it’d be better to not exert a lot of effort trying to remove them more quickly. As it turns out, that assumption was problematic for some scenarios where the workload wasn’t balanced, e.g. lots of readers would pend and time out due to lack of writers, and each of those timed-out readers would leave behind a canceled item in the queue. The next time a writer came along, yes, all those canceled readers would get cleared out, but in the meantime, it would manifest as a notable increase in working set.

dotnet/runtime#116021 addresses that by switching from array-backed queues to linked-list-based queues. The waiter objects themselves double as the nodes in the linked lists, so the only additional memory overhead is a couple of fields for the previous and next nodes in the linked list. But even that modest increase is undesirable, so as part of the PR, it also tries to find compensating optimizations to balance things out. It’s able to remove a field from Channel<T>‘s custom implementation of IValueTaskSource<T> by applying a similar optimization that was made to ManualResetValueTaskSourceCore<T> in a previous release: it’s incredibly rare for an awaiter to supply an ExecutionContext (via use of the awaiter’s OnCompleted rather than UnsafeOnCompleted method), and even more so for that to happen when there’s also a non-default TaskScheduler or SynchronizationContext that needs to be stored, so rather than using two fields for those concepts, they just get grouped into one (which means that in the super duper rare case where both are needed, it incurs an extra allocation). Another field is removed for storing a CancellationToken on the instance, which on .NET Core can be retrieved from other available state. These changes then actually result in the size of the AsyncOperation waiter instance decreasing rather than increasing. Win win. It’s hard to see the impact of this change on throughput; it’s easier to just see what the impact is on working set in the degenerate case where canceled operations are never removed. If I run this code:

// dotnet run -c Release -f net9.0 --filter "*"
// dotnet run -c Release -f net10.0 --filter "*"

using System.Threading.Channels;

Channel<int> c = Channel.CreateUnbounded<int>();

for (int i = 0; ; i++)
{
    CancellationTokenSource cts = new();
    var vt = c.Reader.ReadAsync(cts.Token);
    cts.Cancel();
    await ((Task)vt.AsTask()).ConfigureAwait(ConfigureAwaitOptions.SuppressThrowing);

    if (i % 100_000 == 0)
    {
        Console.WriteLine($"Working set: {Environment.WorkingSet:N0}b");
    }
}

in .NET 9 I get output like this, with an ever increasing working set:

Working set: 31,588,352b
Working set: 164,884,480b
Working set: 210,698,240b
Working set: 293,711,872b
Working set: 385,495,040b
Working set: 478,158,848b
Working set: 553,385,984b
Working set: 608,206,848b
Working set: 699,695,104b
Working set: 793,034,752b
Working set: 885,309,440b
Working set: 986,103,808b
Working set: 1,094,234,112b
Working set: 1,156,239,360b
Working set: 1,255,198,720b
Working set: 1,347,604,480b
Working set: 1,439,879,168b
Working set: 1,532,284,928b

and in .NET 10, I get output like this, with a nice level steady state working set:

Working set: 33,030,144b
Working set: 44,826,624b
Working set: 45,481,984b
Working set: 45,613,056b
Working set: 45,875,200b
Working set: 45,875,200b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b
Working set: 46,006,272b

Reflection

.NET 8 added the [UnsafeAccessor] attribute, which enables a developer to write an extern method that matches up with some non-visible member the developer wants to be able to use, and the runtime fixes up the accesses to be just as if the target member was being used directly. .NET 9 then extended it with generic support.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Reflection;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private List<int> _list = new List<int>(16);
    private FieldInfo _itemsField = typeof(List<int>).GetField("_items", BindingFlags.NonPublic | BindingFlags.Instance)!;

    private static class Accessors<T>
    {
        [UnsafeAccessor(UnsafeAccessorKind.Field, Name = "_items")]
        public static extern ref T[] GetItems(List<T> list);
    }

    [Benchmark]
    public int[] WithReflection() => (int[])_itemsField.GetValue(_list)!;

    [Benchmark]
    public int[] WithUnsafeAccessor() => Accessors<int>.GetItems(_list);
}
Method Mean
WithReflection 2.6397 ns
WithUnsafeAccessor 0.7300 ns

But there are still gaps in that story. The signature of the UnsafeAccessor member needs to align with the signature of the target member, but what if that target member has parameters that aren’t visible to the code writing the UnsafeAccessor? Or, what if the target member is static? There’s no way for the developer to express in the UnsafeAccessor on which type the target member exists.

For these scenarios, dotnet/runtime#114881 augments the story with the [UnsafeAccessorType] attribute. The UnsafeAccessor method can type the relevant parameters as object but then adorn them with an [UnsafeAccessorType("...")] that provides a fully-qualified name of the target type. There are then a bunch of examples of this being used in dotnet/runtime#115583, which replaces most of the cross-library reflection done between libraries in .NET itself with use of [UnsafeAccessor]. An example of where this is handy is with a cyclic relationship between System.Net.Http and System.Security.Cryptography. System.Net.Http sits above System.Security.Cryptography, referencing it for critical features like X509Certificate. But System.Security.Cryptography needs to be able to make HTTP requests in order to download OCSP information, and with System.Net.Http referencing System.Security.Cryptography, System.Security.Cryptography can’t in turn explicitly reference System.Net.Http. It can, however, use reflection or [UnsafeAccessor] plus [UnsafeAccessorType] to do so, and it does: it used to use reflection, and now in .NET 10 it uses [UnsafeAccessor].
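
As a rough illustration of the shape (the type, assembly, and member names here are made up for the example; see the PRs above for real uses):

using System.Runtime.CompilerServices;

internal static class TelemetryAccessor
{
    // Accesses a hypothetical internal static method ThirdParty.Internal.Telemetry.Disable().
    // For a static member, the first parameter exists only to convey the owning type;
    // [UnsafeAccessorType] supplies that type by name since it isn't visible to this code.
    [UnsafeAccessor(UnsafeAccessorKind.StaticMethod, Name = "Disable")]
    public static extern void Disable(
        [UnsafeAccessorType("ThirdParty.Internal.Telemetry, ThirdParty")] object? ignored);
}

// Usage: the argument's value is unused; it only carries the target type.
// TelemetryAccessor.Disable(null);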

There are a few other nice improvements in and around reflection. dotnet/runtime#105814 from @huoyaoyuan updates ActivatorUtilities.CreateFactory to remove a layer of delegates. CreateFactory returns an ObjectFactory delegate, but under the covers the implementation was creating a Func<...> and then creating an ObjectFactory delegate for that Func<...>‘s Invoke method. The PR changes it to just create the ObjectFactory initially, which means every invocation avoids one layer of delegate invocation.

// dotnet run -c Release -f net9.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.DependencyInjection;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("Microsoft.Extensions.DependencyInjection.Abstractions", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("Microsoft.Extensions.DependencyInjection.Abstractions", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private IServiceProvider _sp = new ServiceCollection().BuildServiceProvider();
    private ObjectFactory _factory = ActivatorUtilities.CreateFactory(typeof(object), Type.EmptyTypes);

    [Benchmark]
    public object CreateInstance() => _factory(_sp, null);
}
Method Runtime Mean Ratio
CreateInstance .NET 9.0 8.136 ns 1.00
CreateInstance .NET 10.0 6.676 ns 0.82

dotnet/runtime#112350 reduces some overheads and allocations as part of parsing and rendering TypeNames.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Reflection.Metadata;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "t")]
public partial class Tests
{
    [Benchmark]
    [Arguments(typeof(Dictionary<List<int[]>[,], List<int?[][][,]>>[]))]
    public string ParseAndGetName(Type t) => TypeName.Parse(t.FullName).FullName; 
}
Method Runtime Mean Ratio Allocated Alloc Ratio
ParseAndGetName .NET 9.0 5.930 us 1.00 12.25 KB 1.00
ParseAndGetName .NET 10.0 4.305 us 0.73 5.75 KB 0.47

And dotnet/runtime#113803 from @teo-tsirpanis improves how DebugDirectoryBuilder in System.Reflection.Metadata uses DeflateStream to embed a PDB. The code was previously buffering the compressed output into an intermediate MemoryStream, and then that MemoryStream was being written to the BlobBuilder. With this change, the DeflateStream is wrapped directly around the BlobBuilder, enabling the compressed data to be propagated directly to builder.WriteBytes.

Primitives and Numerics

Every time I write one of these “Performance Improvements in .NET” posts, a part of me thinks “how could there possibly be more next time.” That’s especially true for core data types, which have received so much scrutiny over the years. Yet, here we are, with more to look at for .NET 10.

DateTime and DateTimeOffset get some love in dotnet/runtime#111112, in particular with micro-optimizations around how instances are initialized. Similar tweaks show up in dotnet/runtime#111244 for DateOnly, TimeOnly, and ISOWeek.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private DateTimeOffset _dto = new DateTimeOffset(2025, 9, 10, 0, 0, 0, TimeSpan.Zero);

    [Benchmark]
    public DateTimeOffset GetFutureTime() => _dto + TimeSpan.FromDays(1);
}
Method Runtime Mean Ratio
GetFutureTime .NET 9.0 6.012 ns 1.00
GetFutureTime .NET 10.0 1.029 ns 0.17

Guid gets several notable performance improvements in .NET 10. dotnet/runtime#105654 from @SirCxyrtyx imbues Guid with an implementation of IUtf8SpanParsable. This not only allows Guid to be used in places where a generic parameter is constrained to IUtf8SpanParsable, it also gives Guid overloads of Parse and TryParse that operate on UTF8 bytes. This means if you have UTF8 data, you don’t first need to transcode it to UTF16 in order to parse it, nor use Utf8Parser.TryParse, which isn’t as optimized as Guid.TryParse (but which does enable parsing out a Guid from the beginning of a larger input).

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers.Text;
using System.Text;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _utf8 = Encoding.UTF8.GetBytes(Guid.NewGuid().ToString("N"));

    [Benchmark(Baseline = true)]
    public Guid TranscodeParse()
    {
        Span<char> scratch = stackalloc char[64];
        ReadOnlySpan<char> input = Encoding.UTF8.TryGetChars(_utf8, scratch, out int charsWritten) ?
            scratch.Slice(0, charsWritten) :
            Encoding.UTF8.GetString(_utf8);

        return Guid.Parse(input);
    }

    [Benchmark]
    public Guid Utf8ParserParse() => Utf8Parser.TryParse(_utf8, out Guid result, out _, 'N') ? result : Guid.Empty;

    [Benchmark]
    public Guid GuidParse() => Guid.Parse(_utf8);
}
Method Mean Ratio
TranscodeParse 24.72 ns 1.00
Utf8ParserParse 19.34 ns 0.78
GuidParse 16.47 ns 0.67

Char, Rune, and Version also gained IUtf8SpanParsable implementations, in dotnet/runtime#105773 from @lilinus and dotnet/runtime#109252 from @lilinus. There’s not much of a performance benefit here for char and Rune; implementing the interface mainly yields consistency and the ability to use these types with generic routines parameterized based on that interface. But Version gains the same kinds of performance (and usability) benefits as did Guid: it now sports support for parsing directly from UTF8, rather than needing to transcode first to UTF16 and then parse that.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _utf8 = Encoding.UTF8.GetBytes(new Version("123.456.789.10").ToString());

    [Benchmark(Baseline = true)]
    public Version TranscodeParse()
    {
        Span<char> scratch = stackalloc char[64];
        ReadOnlySpan<char> input = Encoding.UTF8.TryGetChars(_utf8, scratch, out int charsWritten) ?
            scratch.Slice(0, charsWritten) :
            Encoding.UTF8.GetString(_utf8);

        return Version.Parse(input);
    }

    [Benchmark]
    public Version VersionParse() => Version.Parse(_utf8);
}
Method Mean Ratio
TranscodeParse 46.48 ns 1.00
VersionParse 35.75 ns 0.77

Sometimes performance improvements come about as a side-effect of other work. dotnet/runtime#110923 was intending to remove some pointer use from Guid‘s formatting implementation, but in doing so, it ended up also slightly improving throughput of the (admittedly rarely used) “X” format.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private char[] _dest = new char[64];
    private Guid _g = Guid.NewGuid();

    [Benchmark]
    public void FormatX() => _g.TryFormat(_dest, out int charsWritten, "X");
}
Method Runtime Mean Ratio
FormatX .NET 9.0 3.0584 ns 1.00
FormatX .NET 10.0 0.7873 ns 0.26

Random (and its cryptographically-secure counterpart RandomNumberGenerator) continues to improve in .NET 10, with new methods (such as Random.GetString and Random.GetHexString from dotnet/runtime#112162) for usability, but also importantly with performance improvements to existing methods. Both Random and RandomNumberGenerator were given a handy GetItems method in .NET 8; this method allows a caller to supply a set of choices and the number of items desired, allowing Random{NumberGenerator} to perform “sampling with replacement”, selecting an item from the set that number of times. In .NET 9, these implementations were optimized to special-case a power-of-2 number of choices that’s less than or equal to 256. In such a case, we can avoid many trips to the underlying source of randomness by requesting bytes in bulk, rather than requesting an int per element. With the power-of-2 choice count, we can simply mask each byte to produce the index into the choices while not introducing bias. In .NET 10, dotnet/runtime#107988 extends this to apply to non-power-of-2 cases, as well. We can’t just mask off bits as in the power-of-2 case, but we can do “rejection sampling,” which is just a fancy way of saying “if you randomly get a value outside of the allowed range, try again”.
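
The core idea of rejection sampling, in a rough sketch (illustrative only; this is not the library’s actual implementation, which pulls random bytes in bulk):

static int NextIndex(Random random, int choices) // assumes 1 <= choices <= 256
{
    // Only accept bytes below the largest multiple of 'choices' that fits in a byte;
    // anything at or above that would bias the modulo result, so reject it and draw again.
    int limit = 256 - (256 % choices);
    while (true)
    {
        int b = random.Next(256); // stand-in for one byte from a bulk-filled buffer
        if (b < limit)
        {
            return b % choices;
        }
    }
}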

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Security.Cryptography;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private const string Base58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

    [Params(30)]
    public int Length { get; set; }

    [Benchmark]
    public char[] WithRandom() => Random.Shared.GetItems<char>(Base58, Length);

    [Benchmark]
    public char[] WithRandomNumberGenerator() => RandomNumberGenerator.GetItems<char>(Base58, Length);
}
Method Runtime Length Mean Ratio
WithRandom .NET 9.0 30 144.42 ns 1.00
WithRandom .NET 10.0 30 73.68 ns 0.51
WithRandomNumberGenerator .NET 9.0 30 23,179.73 ns 1.00
WithRandomNumberGenerator .NET 10.0 30 853.47 ns 0.04

decimal operations, specifically multiplication and division, get a performance bump, thanks to dotnet/runtime#99212 from @Daniel-Svensson.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private decimal _n = 9.87654321m;
    private decimal _d = 1.23456789m;

    [Benchmark]
    public decimal Divide() => _n / _d;
}
Method Runtime Mean Ratio
Divide .NET 9.0 27.09 ns 1.00
Divide .NET 10.0 23.68 ns 0.87

UInt128 division similarly gets some assistance in dotnet/runtime#99747 from @Daniel-Svensson, utilizing the X86 DivRem hardware intrinsic when dividing a value that’s larger than a ulong by a value that could fit in a ulong.
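
The core of the technique, sketched loosely (an illustration of the intrinsic rather than the actual UInt128 code), is that a 128-bit by 64-bit division can be performed with a single hardware divide as long as the quotient fits into 64 bits, which is guaranteed when the upper 64 bits of the dividend are less than the divisor:

using System.Diagnostics;
using System.Runtime.Intrinsics.X86;

static (ulong Quotient, ulong Remainder) DivRem128By64(ulong upper, ulong lower, ulong divisor)
{
    // Assumes upper < divisor, so the quotient fits in 64 bits; the general case
    // requires a multi-step division.
    Debug.Assert(upper < divisor);

    if (X86Base.X64.IsSupported)
    {
        // A single DIV instruction divides the 128-bit value (upper:lower) by divisor.
        return X86Base.X64.DivRem(lower, upper, divisor);
    }

    // Portable fallback.
    UInt128 value = new UInt128(upper, lower);
    return ((ulong)(value / divisor), (ulong)(value % divisor));
}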

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private UInt128 _n = new UInt128(123, 456);
    private UInt128 _d = new UInt128(0, 789);

    [Benchmark]
    public UInt128 Divide() => _n / _d;
}
Method Runtime Mean Ratio
Divide .NET 9.0 27.3112 ns 1.00
Divide .NET 10.0 0.5522 ns 0.02

BigInteger gets a few improvements as well. dotnet/runtime#115445 from @Rob-Hague augments its TryWriteBytes method to use a direct memory copy when viable, namely when the number is non-negative such that it doesn't need two's-complement tweaks.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Numerics;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private BigInteger _value = BigInteger.Parse(string.Concat(Enumerable.Repeat("1234567890", 20)));
    private byte[] _bytes = new byte[256];

    [Benchmark]
    public bool TryWriteBytes() => _value.TryWriteBytes(_bytes, out _);
}
Method Runtime Mean Ratio
TryWriteBytes .NET 9.0 27.814 ns 1.00
TryWriteBytes .NET 10.0 5.743 ns 0.21

Also rare but fun: if you used BigInteger.Parse with exactly the string representation of int.MinValue, you'd end up allocating unnecessarily. That's addressed by dotnet/runtime#104666 from @kzrnm, which tweaks the handling of this corner case so that it's appropriately recognized as a value that can be represented using the singleton for int.MinValue (the singleton already existed, it just wasn't applied in this case).

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Numerics;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _int32min = int.MinValue.ToString();

    [Benchmark]
    public BigInteger ParseInt32Min() => BigInteger.Parse(_int32min);
}
Method Runtime Mean Ratio Allocated Alloc Ratio
ParseInt32Min .NET 9.0 80.54 ns 1.00 32 B 1.00
ParseInt32Min .NET 10.0 71.59 ns 0.89 0.00

One area that got a lot of attention in .NET 10 is System.Numerics.Tensors. The System.Numerics.Tensors library was introduced in .NET 8, focusing on a TensorPrimitives class that provided various numerical routines on spans of float. .NET 9 then expanded TensorPrimitives with more operations and generic versions of them. Now in .NET 10, TensorPrimitives gains even more operations, with many of the existing ones also made faster for various scenarios.

To start, dotnet/runtime#112933 adds over 70 new overloads to TensorPrimitives, including operations like StdDev, Average, Clamp, DivRem, IsNaN, IsPow2, Remainder, and many more. The majority of these operations are also vectorized, using shared implementations that are parameterized with generic operators. For example, the entirety of the Decrement<T> implementation is:

public static void Decrement<T>(ReadOnlySpan<T> x, Span<T> destination) where T : IDecrementOperators<T> =>
    InvokeSpanIntoSpan<T, DecrementOperator<T>>(x, destination);

where InvokeSpanIntoSpan is a shared routine used by almost 60 methods, each of which supplies its own operator that’s then used in the heavily-optimized routine. In this case, the DecrementOperator<T> is simply this:

private readonly struct DecrementOperator<T> : IUnaryOperator<T, T> where T : IDecrementOperators<T>
{
    public static bool Vectorizable => true;
    public static T Invoke(T x) => --x;
    public static Vector128<T> Invoke(Vector128<T> x) => x - Vector128<T>.One;
    public static Vector256<T> Invoke(Vector256<T> x) => x - Vector256<T>.One;
    public static Vector512<T> Invoke(Vector512<T> x) => x - Vector512<T>.One;
}

With that minimal operator, which supplies decrement for scalar values and for vector widths of 128, 256, and 512 bits, the shared workhorse routine is able to produce a very efficient implementation.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private float[] _src = Enumerable.Range(0, 1000).Select(i => (float)i).ToArray();
    private float[] _dest = new float[1000];

    [Benchmark(Baseline = true)]
    public void DecrementManual()
    {
        ReadOnlySpan<float> src = _src;
        Span<float> dest = _dest;
        for (int i = 0; i < src.Length; i++)
        {
            dest[i] = src[i] - 1f;
        }
    }

    [Benchmark]
    public void DecrementTP() => TensorPrimitives.Decrement(_src, _dest);
}
Method Mean Ratio
DecrementManual 288.80 ns 1.00
DecrementTP 22.46 ns 0.08

Wherever possible, these methods also utilize APIs on the underlying Vector128, Vector256, and Vector512 types, including new corresponding methods introduced in dotnet/runtime#111179 and dotnet/runtime#115525, such as IsNaN.

Existing methods are also improved. dotnet/runtime#111615 from @BarionLP improves TensorPrimitives.SoftMax by avoiding unnecessary recomputation of T.Exp. The softmax function involves computing exp for every element and summing them all together. The output for an element with value x is then the exp(x) divided by that sum. The previous implementation followed that outline literally, computing exp twice for each element. We can instead compute exp just once for each element, caching them temporarily in the destination while creating the sum, and then reusing those for the subsequent division, overwriting each with the actual result. The net result is close to doubling the throughput, as the benchmark below shows.
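
Here's a minimal scalar sketch of that single-exp approach (illustrative only; the real implementation is vectorized):

static void SoftMaxSketch(ReadOnlySpan<float> x, Span<float> destination)
{
    // First pass: compute exp(x[i]) once, cache it in the destination, and accumulate the sum.
    float sum = 0;
    for (int i = 0; i < x.Length; i++)
    {
        float e = float.Exp(x[i]);
        destination[i] = e;
        sum += e;
    }

    // Second pass: overwrite each cached exp with the final softmax value.
    for (int i = 0; i < x.Length; i++)
    {
        destination[i] /= sum;
    }
}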

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter **

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private float[] _src, _dst;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        _src = Enumerable.Range(0, 1000).Select(_ => r.NextSingle()).ToArray();
        _dst = new float[_src.Length];
    }

    [Benchmark]
    public void SoftMax() => TensorPrimitives.SoftMax(_src, _dst);
}
Method Runtime Mean Ratio
SoftMax .NET 9.0 1,047.9 ns 1.00
SoftMax .NET 10.0 649.8 ns 0.62

dotnet/runtime#111505 from @alexcovington enables TensorPrimitives.Divide<T> to be vectorized for int. The operation already supported vectorization for float and double, for which there’s SIMD hardware-accelerated support for division, but it didn’t support int, which lacks SIMD hardware-accelerated support. This PR teaches the JIT how to emulate SIMD integer division, by converting the ints to doubles, doing double division, and then converting back.
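
A rough sketch of that emulation strategy (illustrative only, written with AVX intrinsics rather than whatever code is actually generated): widen four ints to four doubles, divide, and truncate back.

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static Vector128<int> DivideEmulated(Vector128<int> left, Vector128<int> right)
{
    if (Avx.IsSupported)
    {
        // 32-bit values are exactly representable as doubles, so dividing as double
        // and truncating toward zero matches integer division (the int.MinValue / -1
        // overflow case needs separate handling).
        Vector256<double> l = Avx.ConvertToVector256Double(left);
        Vector256<double> r = Avx.ConvertToVector256Double(right);
        return Avx.ConvertToVector128Int32WithTruncation(Avx.Divide(l, r));
    }

    // Scalar fallback.
    Vector128<int> result = default;
    for (int i = 0; i < Vector128<int>.Count; i++)
    {
        result = result.WithElement(i, left.GetElement(i) / right.GetElement(i));
    }

    return result;
}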

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter **

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private int[] _n, _d, _dst;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        _n = Enumerable.Range(0, 1000).Select(_ => r.Next(1000, int.MaxValue)).ToArray();
        _d = Enumerable.Range(0, 1000).Select(_ => r.Next(1, 1000)).ToArray();
        _dst = new int[_n.Length];
    }

    [Benchmark]
    public void Divide() => TensorPrimitives.Divide(_n, _d, _dst);
}
Method Runtime Mean Ratio
Divide .NET 9.0 1,293.9 ns 1.00
Divide .NET 10.0 458.4 ns 0.35

dotnet/runtime#116945 further updates TensorPrimitives.Divide (as well as TensorPrimitives.Sign and TensorPrimitives.ConvertToInteger) to be vectorizable when used with nint or nuint. nint can be treated identically to int when in a 32-bit process and to long when in a 64-bit process; same for nuint with uint and ulong, respectively. So anywhere we’re successfully vectorizing for int/uint on 32-bit or long/ulong on 64-bit, we can also successfully vectorize for nint/nuint. dotnet/runtime#116895 also enables vectorizing TensorPrimitives.ConvertTruncating when used to convert float to int or uint and double to long or ulong. Vectorization hadn’t previously been enabled because the underlying operations used had some undefined behavior; that behavior was fixed late in the .NET 9 cycle, such that this vectorization can now be enabled.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter **

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private float[] _src;
    private int[] _dst;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        _src = Enumerable.Range(0, 1000).Select(_ => r.NextSingle() * 1000).ToArray();
        _dst = new int[_src.Length];
    }

    [Benchmark]
    public void ConvertTruncating() => TensorPrimitives.ConvertTruncating(_src, _dst);
}
Method Runtime Mean Ratio
ConvertTruncating .NET 9.0 933.86 ns 1.00
ConvertTruncating .NET 10.0 41.99 ns 0.04

Not to be left out, TensorPrimitives.LeadingZeroCount is also improved in dotnet/runtime#110333 from @alexcovington. When AVX512 is available, the change utilizes AVX512 instructions like PermuteVar16x8x2 to vectorize LeadingZeroCount for all types supported by Vector512<T>.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter **

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private byte[] _src, _dst;

    [GlobalSetup]
    public void Setup()
    {
        _src = new byte[1000];
        _dst = new byte[_src.Length];
        new Random(42).NextBytes(_src);
    }

    [Benchmark]
    public void LeadingZeroCount() => TensorPrimitives.LeadingZeroCount(_src, _dst);
}
Method Runtime Mean Ratio
LeadingZeroCount .NET 9.0 401.60 ns 1.00
LeadingZeroCount .NET 10.0 12.33 ns 0.03

In terms of changes that affected the most operations, dotnet/runtime#116898 and dotnet/runtime#116934 take the cake. Together, these PRs extend vectorization for almost 60 distinct operations to also accelerate for Half: Abs, Add, AddMultiply, BitwiseAnd, BitwiseOr, Ceiling, Clamp, CopySign, Cos, CosPi, Cosh, CosineSimilarity, Decrement, DegreesToRadians, Divide, Exp, Exp10, Exp10M1, Exp2, Exp2M1, ExpM1, Floor, FusedAddMultiply, Hypot, Increment, Lerp, Log, Log10, Log10P1, Log2, Log2P1, LogP1, Max, MaxMagnitude, MaxMagnitudeNumber, MaxNumber, Min, MinMagnitude, MinMagnitudeNumber, MinNumber, Multiply, MultiplyAdd, MultiplyAddEstimate, Negate, OnesComplement, Reciprocal, Remainder, Round, Sigmoid, Sin, SinPi, Sinh, Sqrt, Subtract, Tan, TanPi, Tanh, Truncate, and Xor. The challenge here is that Half doesn’t have accelerated hardware support, and today is not even supported by the vector types. In fact, even for its scalar operations, Half is manipulated internally by converting it to a float, performing the relevant operation as float, and then casting back, e.g. here’s the implementation of the Half multiplication operator:

public static Half operator *(Half left, Half right) => (Half)((float)left * (float)right);

Previously, all of these TensorPrimitives operations treated Half like any other unaccelerated type and just ran a scalar loop that performed the operation on each Half. That meant that for each element, we were converting it to float, performing the operation, and then converting back. As luck would have it, though, TensorPrimitives already defines the ConvertToSingle and ConvertToHalf methods, which are accelerated. We can then reuse those methods to do the same thing that's already done for scalar operations but do it vectorized: take a vector of Halfs, convert them all to floats, process all the floats, and convert them all back to Halfs. Of course, I already stated that the vector types don't support Half, so how can we "take a vector of Half"? By reinterpret casting the Span<Half> to Span<short> (or Span<ushort>), which allows us to smuggle the Halfs through. And, as it turns out, even for scalar operations, the very first thing Half's float cast operator does is reinterpret its bits as a short.
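
In sketch form, the approach amounts to something like the following (illustrative only; the actual code converts and processes one vector's worth of elements at a time, in place, rather than allocating scratch buffers):

using System.Numerics.Tensors;

static void AddHalves(ReadOnlySpan<Half> x, ReadOnlySpan<Half> y, Span<Half> destination)
{
    // Scratch buffers for illustration only.
    float[] xSingles = new float[x.Length];
    float[] ySingles = new float[y.Length];
    float[] resultSingles = new float[x.Length];

    TensorPrimitives.ConvertToSingle(x, xSingles);               // vectorized Half -> float
    TensorPrimitives.ConvertToSingle(y, ySingles);
    TensorPrimitives.Add(xSingles, ySingles, resultSingles);     // vectorized float add
    TensorPrimitives.ConvertToHalf(resultSingles, destination);  // vectorized float -> Half
}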

The net result is that a ton of operations can now be accelerated for Half.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net9.0 --filter **

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Environments;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using System.Numerics.Tensors;

var config = DefaultConfig.Instance
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core90).WithNuGet("System.Numerics.Tensors", "9.0.9").AsBaseline())
    .AddJob(Job.Default.WithRuntime(CoreRuntime.Core10_0).WithNuGet("System.Numerics.Tensors", "10.0.0-rc.1.25451.107"));
BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args, config);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "NuGetReferences")]
public partial class Tests
{
    private Half[] _x, _y, _dest;

    [GlobalSetup]
    public void Setup()
    {
        _x = new Half[1000];
        _y = new Half[_x.Length];
        _dest = new Half[_x.Length];
        var random = new Random(42);
        for (int i = 0; i < _x.Length; i++)
        {
            _x[i] = (Half)random.NextSingle();
            _y[i] = (Half)random.NextSingle();
        }
    }

    [Benchmark]
    public void Add() => TensorPrimitives.Add(_x, _y, _dest);
}
Method Runtime Mean Ratio
Add .NET 9.0 5,984.3 ns 1.00
Add .NET 10.0 481.7 ns 0.08

The System.Numerics.Tensors library in .NET 10 now also includes stable APIs for tensor types (which use TensorPrimitives in their implementations). This includes a Tensor<T>, ITensor<,>, TensorSpan<T>, and ReadOnlyTensorSpan<T>. One of the really interesting things about these types is that they take advantage of the new C# 14 compound operators feature, and do so for a significant performance benefit. In previous versions of C#, you’re able to write custom operators, for example an addition operator:

public class C 
{
    public int Value;

    public static C operator +(C left, C right) => new() { Value = left.Value + right.Value };
}

With that type, I can write code like:

C a = new() { Value = 42 };
C b = new() { Value = 84 };
C c = a + b;

Console.WriteLine(c.Value);

which will print out 126. I can also change the code to use a compound operator, +=, like this:

C a = new() { Value = 42 };
C b = new() { Value = 84 };
a += b;

Console.WriteLine(a.Value);

which will also print out 126, because the a += b is always identical to a = a + b… or, at least it was. Now with C# 14, it's possible for a type not only to define a + operator but also to define a += operator. If a type defines a += operator, it will be used rather than expanding a += b as shorthand for a = a + b. And that has performance ramifications.

A tensor is basically a multidimensional array, and as with arrays, these can be big… really big. If I have a sequence of operations:

Tensor<int> t1 = ...;
Tensor<int> t2 = ...;
for (int i = 0; i < 3; i++)
{
    t1 += t2;
}

and each of those t1 += t2s expands into t1 = t1 + t2, then for each I'm allocating a brand new tensor. If they're big, that gets expensive right quick. But C# 14's new user-defined compound operators, as initially added to the compiler in dotnet/roslyn#78400, enable mutation of the target.

public class C 
{
    public int Value;

    public static C operator +(C left, C right) => new() { Value = left.Value + right.Value };
    public void operator +=(C other) => Value += other.Value;
}

And that means that such compound operators on the tensor types can just update the target tensor in place rather than allocating a whole new (possibly very large) data structure for each computation. dotnet/runtime#117997 adds all of these compound operators for the tensor types. (Not only are these using C# 14 user-defined compound operators, they’re doing so as extension operators, using the new C# 14 extension types feature. Fun!)

Collections

Handling collections of data is the lifeblood of any application, and as such every .NET release tries to eke out even more performance from collections and collection processing.

Enumeration

Iterating through collections is one of the most common things developers do. To make this as efficient as possible, the most prominent collection types in .NET (e.g. List<T>) expose struct-based enumerators (e.g. List<T>.Enumerator) which their public GetEnumerator() methods then return in a strongly-typed manner:

public Enumerator GetEnumerator() => new Enumerator(this);

This is in addition to their IEnumerable<T>.GetEnumerator() implementation, which ends up being implemented via an “explicit” interface implementation (“explicit” means the relevant method provides the interface method implementation but does not show up as a public method on the type itself), e.g. List<T>‘s implementation:

IEnumerator<T> IEnumerable<T>.GetEnumerator() =>
    Count == 0 ? SZGenericArrayEnumerator<T>.Empty :
    GetEnumerator();

Directly foreach'ing over the collection allows the C# compiler to bind to the struct-based enumerator, avoiding the enumerator allocation and enabling direct calls to the enumerator's non-virtual methods, rather than working with an IEnumerator<T> and the interface dispatch required to invoke methods on it. That, however, falls apart once the collection is used polymorphically as an IEnumerable<T>; at that point, the IEnumerable<T>.GetEnumerator() is used, which is forced to allocate a new enumerator instance (except for special-cases, such as how List<T>'s implementation shown above returns a singleton enumerator when the collection is empty).

Thankfully, as noted earlier in the JIT section, the JIT has been gaining super powers around dynamic PGO, escape analysis, and stack allocation. This means that in many situations, the JIT is now able to see that the most common concrete type for a given call site is a specific enumerator type and generate code specific to when it is that type, devirtualizing the calls, possibly inlining them, and then, if it’s able to do so sufficiently, stack allocating the enumerator. With the progress that’s been made in .NET 10, this now happens very frequently for arrays and List<T>. While the JIT is able to do this in general regardless of an object’s type, the ubiquity of enumeration makes it all that much more important for IEnumerator<T>, so dotnet/runtime#116978 marks IEnumerator<T> as an [Intrinsic], giving the JIT the ability to better reason about it.

However, some enumerators still needed a bit of help. Besides T[], List<T> is the most popular collection type in .NET, and with the JIT changes, many foreach loops over an IEnumerable<T> that's actually a List<T> will successfully have the enumerator stack allocated. Awesome. That awesomeness dwindled, however, when trying out differently sized lists. This is a benchmark that tests enumerating a List<T> typed as IEnumerable<T>, with different lengths, along with benchmark results from early August 2025 (around .NET 10 Preview 7).

// dotnet run -c Release -f net10.0 --filter **

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _enumerable;

    [Params(500, 5000, 15000)]
    public int Count { get; set; }

    [GlobalSetup]
    public void Setup() => _enumerable = Enumerable.Range(0, Count).ToList();

    [Benchmark]
    public int Sum()
    {
        int sum = 0;
        foreach (int item in _enumerable) sum += item;
        return sum;
    }
}
Method Count Mean Allocated
Sum 500 214.1 ns
Sum 5000 4,767.1 ns 40 B
Sum 15000 13,824.4 ns 40 B

Note that for the 500 element List<T>, the allocation column shows that nothing was allocated on the heap, as the enumerator was successfully stack allocated. Fabulous. But then just increasing the size of the list caused it to no longer be stack-allocated. Why? The reason for the allocation in the jump from 500 to 5000 has to do with dynamic PGO combined with how List<T>‘s enumerator was written oh so many years ago.

List<T>‘s enumerator’s MoveNext was structured like this:

public bool MoveNext()
{
    if (_version == _list._version && ((uint)_index < (uint)_list._size))
    {
        ... // handle successfully getting next element
        return true;
    }

    return MoveNextRare();
}

private bool MoveNextRare()
{
    ... // handle version mismatch and/or returning false for completed enumeration
}

The Rare in the name gives a hint as to why it’s split like this. The MoveNext method was kept as thin as possible for the common case of invoking MoveNext, namely all successful calls that return true; the only time MoveNextRare is needed, other than when the enumerator is misused, is for the final call to it after all elements have been yielded. That streamlining of MoveNext itself was done to make MoveNext inlineable. However, a lot has changed since this code was written, making it less important, and the separating out of MoveNextRare has a really interesting interaction with dynamic PGO. One of the things dynamic PGO looks for is whether code is considered hot (used a lot) or cold (used rarely), and that data influences whether a method should be considered for inlining. For shorter lists, dynamic PGO will see MoveNextRare invoked a reasonable number of times, and will consider it for inlining. And if all of the calls to the enumerator are inlined, the enumerator instance can avoid escaping the call frame, and can then be stack allocated. But once the list length grows to a much larger amount, that MoveNextRare method will start to look really cold, will struggle to be inlined, and will then allow the enumerator instance to escape, preventing it from being stack allocated. dotnet/runtime#118425 recognizes that times have changed since this enumerator was written, with many changes to inlining heuristics and PGO and the like; it undoes the separating out of MoveNextRare and simplifies the enumerator. With how the system works today, the re-combined MoveNext is still inlineable, with or without PGO, and we’re able to stack allocate at the larger size.

Method Count Mean Allocated
Sum 500 221.2 ns
Sum 5000 2,153.6 ns
Sum 15000 14,724.9 ns 40 B
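
For reference, the re-combined MoveNext now looks roughly like this (an illustrative sketch using the same fields as the earlier snippet, not the verbatim List<T> source):

public bool MoveNext()
{
    List<T> list = _list;
    if (_version == list._version && (uint)_index < (uint)list._size)
    {
        _current = list._items[_index];
        _index++;
        return true;
    }

    if (_version != list._version)
    {
        throw new InvalidOperationException(); // the list was mutated during enumeration
    }

    // Enumeration is complete.
    _index = list._size + 1;
    _current = default;
    return false;
}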

With that fix, we still had an issue, though. We’re now avoiding the allocation at lengths 500 and 5000, but at 15,000 we still see the enumerator being allocated. Now why? This has to do with OSR (on-stack replacement), which was introduced in .NET 7 as a key enabler for allowing tiered compilation to be used with methods containing loops. OSR allows for a method to be recompiled with optimizations even while it’s executing, and for an invocation of the method to jump from the unoptimized code for the method to the corresponding location in the newly optimized method. While OSR is awesome, it unfortunately causes some complications here. Once the list gets long enough, an invocation of the tier 0 (unoptimized) method will transition to the OSR optimized method… but OSR methods don’t contain dynamic PGO instrumentation (they used to, but it was removed because it led to problems if the instrumented code never got recompiled again and thus suffered regressions due to forever-more running with the instrumentation probes in place). Without the instrumentation, and in particular without the instrumentation for the tail portion of the method (where the enumerator’s Dispose method is invoked), even though List<T>.Dispose is a nop, the JIT may not be able to do the guarded devirtualization that enables the IEnumerator<T>.Dispose to be devirtualized and inlined. Meaning, ironically, that the nop Dispose causes escape analysis to see the enumerator instance escape, such that it can’t be stack allocated. Whew.

Thankfully, dotnet/runtime#118461 addresses that in the JIT. Specifically for enumerators, this PR enables dynamic PGO to infer the missing instrumentation based on the earlier probes used with the other enumerator methods, which then enables it to successfully devirtualize and inline Dispose. So, for .NET 10, and the same benchmark, we end up with this lovely sight:

Method Count Mean Allocated
Sum 500 216.5 ns
Sum 5000 2,082.4 ns
Sum 15000 6,525.3 ns

Other types needed a bit of help as well. dotnet/runtime#118467 addresses PriorityQueue<TElement, TPriority>'s enumerator, which was a port of List<T>'s and so was changed similarly.

Separately, dotnet/runtime#117328 streamlines Stack<T>'s enumerator type, removing around half the lines of code that previously composed it. The previous enumerator's MoveNext incurred five branches on the way to yielding most elements:

  • It first did a version check, comparing the stack’s version number against the enumerator’s captured version number, to ensure the stack hadn’t been mutated since the time the enumerator was grabbed.
  • It then checked to see whether this was the first call to the enumerator, taking one path that lazily-initialized some state if it was and another path assuming already-initialized state if not.
  • Assuming this wasn’t the first call, it then checked whether enumeration had previously ended.
  • Assuming it hadn’t, it then checked whether there’s anything left to enumerate.
  • And finally, it dereferenced the underlying array, incurring a bounds check.

The new implementation cuts that in half. It relies on the enumerator's constructor initializing the current index to the length of the stack, such that each MoveNext call just decrements this value. When the data is exhausted, the index will go negative. This means that we can combine a whole bunch of these checks into a single check:

if ((uint)index < (uint)array.Length)

and we're left with just two branches on the way to reading any element: the version check and whether the index is in bounds. That reduction not only means there's less code to process and fewer branches that might be mispredicted, it also shrinks the size of the members to the point where they're much more likely to be inlined, which in turn makes it much more likely that the enumerator object can be stack allocated.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Stack<int> _direct = new Stack<int>(Enumerable.Range(0, 10));
    private IEnumerable<int> _enumerable = new Stack<int>(Enumerable.Range(0, 10));

    [Benchmark]
    public int SumDirect()
    {
        int sum = 0;
        foreach (int item in _direct) sum += item;
        return sum;
    }

    [Benchmark]
    public int SumEnumerable()
    {
        int sum = 0;
        foreach (int item in _enumerable) sum += item;
        return sum;
    }
}
Method Runtime Mean Ratio Code Size Allocated Alloc Ratio
SumDirect .NET 9.0 23.317 ns 1.00 331 B NA
SumDirect .NET 10.0 4.502 ns 0.19 55 B NA
SumEnumerable .NET 9.0 30.893 ns 1.00 642 B 40 B 1.00
SumEnumerable .NET 10.0 7.906 ns 0.26 381 B 0.00

dotnet/runtime#117341 does something similar but for Queue<T>. Queue<T> has an interesting complication when compared to Stack<T>, which is that it can wrap around the length of the underlying array. Whereas with Stack<T>, we can always start at a particular index and just count down to 0, using that index as the offset into the array, with Queue<T>, the starting index can be anywhere in the array, and when walking from that index to the last element, we might need to wrap around back to the beginning. Such wrapping can be accomplished using % array.Length (which is what Queue<T> does on .NET Framework), but such a division operation can be relatively costly. An alternative, since we know the count can never be more than the array’s length, is to check whether we’ve already walked past the end of the array, and if we have, then subtract the array’s length to get to the corresponding location from the start of the array. The existing implementation in .NET 9 did just that:

if (index >= array.Length)
{
    index -= array.Length; // wrap around if needed
}

_currentElement = array[index];

That is two branches, one for the check against the array length, and one for the bounds check. The bounds check can’t be eliminated here because the JIT hasn’t seen proof that the index is actually in-bounds and thus needs to be defensive. Instead, we can write it like this:

if ((uint)index < (uint)array.Length)
{
    _currentElement = array[index];
}
else
{
    index -= array.Length;
    _currentElement = array[index];
}

An enumeration of a queue can logically be split into two parts: the elements from the head index to the end of the array, and the elements from the beginning of the array to the tail. All of the former now fall into the first block, which incurs only one branch because the JIT can use the knowledge gleaned from the comparison to eliminate the bounds check. It only incurs a bounds check when in the second portion of the enumeration.

We can more easily visualize the branch savings by using BenchmarkDotNet's HardwareCounters diagnoser, asking it to track HardwareCounter.BranchInstructions (this diagnoser only works on Windows). Note here, as well, that the changes not only improve throughput, they also enable the boxed enumerator to be stack allocated.

// This benchmark was run on Windows for the HardwareCounters diagnoser to work.
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using BenchmarkDotNet.Diagnosers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HardwareCounters(HardwareCounter.BranchInstructions)]
[MemoryDiagnoser(displayGenColumns: false)]
[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Queue<int> _direct;
    private IEnumerable<int> _enumerable;

    [GlobalSetup]
    public void Setup()
    {
        _direct = new Queue<int>(Enumerable.Range(0, 10));
        for (int i = 0; i < 5; i++)
        {
            _direct.Enqueue(_direct.Dequeue());
        }

        _enumerable = _direct;
    }

    [Benchmark]
    public int SumDirect()
    {
        int sum = 0;
        foreach (int item in _direct) sum += item;
        return sum;
    }

    [Benchmark]
    public int SumEnumerable()
    {
        int sum = 0;
        foreach (int item in _enumerable) sum += item;
        return sum;
    }
}
Method Runtime Mean Ratio BranchInstructions/Op Code Size Allocated Alloc Ratio
SumDirect .NET 9.0 24.340 ns 1.00 79 251 B NA
SumDirect .NET 10.0 7.192 ns 0.30 37 96 B NA
SumEnumerable .NET 9.0 30.695 ns 1.00 103 531 B 40 B 1.00
SumEnumerable .NET 10.0 8.672 ns 0.28 50 324 B 0.00

ConcurrentDictionary<TKey, TValue> also gets in on the fun. The dictionary is implemented as a collection of "buckets", each of which is a linked list of entries. It had a fairly complicated enumerator for processing these structures, relying on jumping between cases of a switch statement, e.g.

switch (_state)
{
    case StateUninitialized:
        ... // Initialize on first MoveNext.
        goto case StateOuterloop;

    case StateOuterloop:
        // Check if there are more buckets in the dictionary to enumerate.
        if ((uint)i < (uint)buckets.Length)
        {
            // Move to the next bucket.
            ...
            goto case StateInnerLoop;
        }
        goto default;

    case StateInnerLoop:
        ... // Yield elements from the current bucket.
        goto case StateOuterloop;

    default:
        // Done iterating.
        ...
}

If you squint, there are nested loops here, where we’re enumerating each bucket and for each bucket enumerating its contents. With how this is structured, however, from the JIT’s perspective, we could enter those loops from any of those cases, depending on the current value of _state. That produces something referred to as an “irreducible loop,” which is a loop that has multiple possible entry points. Imagine you have:

A:
if (someCondition) goto B;
...

B:
if (someOtherCondition) goto A;

Labels A and B form a loop, but that loop can be entered by jumping to either A or to B. If the compiler could prove that this loop were only ever enterable from A or only ever enterable from B, then the loop would be “reducible.” Irreducible loops are much more complex than reducible loops for a compiler to deal with, as they have more complex control and data flow and in general are harder to analyze. dotnet/runtime#116949 rewrites the MoveNext method to be a more typical while loop, which is not only easier to read and maintain, it’s also reducible and more efficient, and because it’s more streamlined, it’s also inlineable and enables possible stack allocation.
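
The restructured version has roughly this shape (an illustrative sketch, not the exact source; the field names are placeholders):

public bool MoveNext()
{
    while (true)
    {
        // If we're in the middle of a bucket, yield its next node.
        if (_node is not null)
        {
            _current = new KeyValuePair<TKey, TValue>(_node._key, _node._value);
            _node = _node._next;
            return true;
        }

        // Otherwise advance to the next bucket, or finish if there are none left.
        int i = _bucketIndex + 1;
        if ((uint)i >= (uint)_buckets.Length)
        {
            return false;
        }

        _bucketIndex = i;
        _node = _buckets[i];
    }
}

The loop is only ever entered from the top, so it's reducible and much simpler for the JIT to analyze.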

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.Concurrent;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private ConcurrentDictionary<int, int> _ints = new(Enumerable.Range(0, 1000).ToDictionary(i => i, i => i));

    [Benchmark]
    public int EnumerateInts()
    {
        int sum = 0;
        foreach (var kvp in _ints) sum += kvp.Value;
        return sum;
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
EnumerateInts .NET 9.0 4,232.8 ns 1.00 56 B 1.00
EnumerateInts .NET 10.0 664.2 ns 0.16 0.00

LINQ

All of these examples show enumerating collections using a foreach loop, and while that’s obviously incredibly common, so too is using LINQ (Language Integrated Query) to enumerate and process collections. For in-memory collections, LINQ provides literally hundreds of extension methods for performing maps, filters, sorts, and a plethora of other operations over enumerables. It is incredibly handy, is thus used everywhere, and is thus important to optimize. Every release of .NET has seen improvements to LINQ, and that continues in .NET 10.

Most prominent from a performance perspective in this release are the changes to Contains. As discussed in depth in Deep .NET: Deep Dive on LINQ with Stephen Toub and Scott Hanselman and Deep .NET: An even DEEPER Dive into LINQ with Stephen Toub and Scott Hanselman, the LINQ methods are able to pass information between them by using specialized internal IEnumerable<T> implementations. When you call Select, that might return an ArraySelectIterator<TSource, TResult> or an IListSelectIterator<TSource, TResult> or an IListSkipTakeSelectIterator<TSource, TResult> or one of any number of other types. Each of these types has fields that carry information about the source (e.g. the IListSkipTakeSelectIterator<TSource, TResult> has fields not only for the IList<TSource> source and the Func<TSource, TResult> selector, but also for the tracked min and max bounds based on previous Skip and Take calls), and they have overrides of virtual methods that allow for various operations to be specialized. This means sequences of LINQ methods can be optimized. For example, source.Where(...).Select(...) is optimized a) to combine both the filter and the map delegates into a single IEnumerable<T>, thus removing the overhead of an extra layer of interface dispatch, and b) to perform operations specific to the original source data type (e.g. if source was an array, the processing can be done directly on that array rather than via IEnumerator<T>).

Many of these optimizations make the most sense when a method returns an IEnumerable<T> that happens to be the result of a LINQ query. The producer of that method doesn’t know how the consumer will be consuming it, and the consumer doesn’t know the details of how the producer produced it. But since the LINQ methods flow context via the concrete implementations of IEnumerable<T>, significant optimizations are possible for interesting combinations of consumer and producer methods. For example, let’s say a producer of an IEnumerable<T> decides they want to always return data in ascending order, so they do:

public static IEnumerable<T> GetData()
{
    ...
    return data.OrderBy(s => s.CreatedAt);
}

But as it turns out, the consumer won’t be looking at all of the elements, and instead just wants the first:

T value = GetData().First();

LINQ optimizes this by having the enumerable returned from OrderBy provide a specialized implementation of First/FirstOrDefault: it doesn’t need to perform an O(N log N) sort (or allocate a lot of memory to hold all of the keys), it can instead just do an O(N) search for the smallest element in the source, because the smallest element would be the first to be yielded from OrderBy.
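
Conceptually, that specialized First boils down to a single O(N) scan for the minimum key, akin to this sketch (illustrative, not the actual iterator code):

static TSource FirstAfterOrderBy<TSource, TKey>(IEnumerable<TSource> source, Func<TSource, TKey> keySelector)
{
    using IEnumerator<TSource> e = source.GetEnumerator();
    if (!e.MoveNext())
    {
        throw new InvalidOperationException("Sequence contains no elements");
    }

    TSource best = e.Current;
    TKey bestKey = keySelector(best);
    Comparer<TKey> comparer = Comparer<TKey>.Default;

    while (e.MoveNext())
    {
        TKey key = keySelector(e.Current);

        // Strict less-than keeps the first of equal keys, matching OrderBy's stable ordering.
        if (comparer.Compare(key, bestKey) < 0)
        {
            best = e.Current;
            bestKey = key;
        }
    }

    return best;
}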

Contains is ripe for these kinds of optimizations as well, e.g. OrderBy, Distinct, and Reverse all entail non-trivial processing and/or allocation, but if followed by a Contains, all that work can be skipped, as the Contains can just search the source directly. With dotnet/runtime#112684, this set of optimizations is extended to Contains, with almost 30 specialized implementations of Contains across the various iterator specializations.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _source = Enumerable.Range(0, 1000).ToArray();

    [Benchmark]
    public bool AppendContains() => _source.Append(100).Contains(999);

    [Benchmark]
    public bool ConcatContains() => _source.Concat(_source).Contains(999);

    [Benchmark]
    public bool DefaultIfEmptyContains() => _source.DefaultIfEmpty(42).Contains(999);

    [Benchmark]
    public bool DistinctContains() => _source.Distinct().Contains(999);

    [Benchmark]
    public bool OrderByContains() => _source.OrderBy(x => x).Contains(999);

    [Benchmark]
    public bool ReverseContains() => _source.Reverse().Contains(999);

    [Benchmark]
    public bool UnionContains() => _source.Union(_source).Contains(999);

    [Benchmark]
    public bool SelectManyContains() => _source.SelectMany(x => _source).Contains(999);

    [Benchmark]
    public bool WhereSelectContains() => _source.Where(x => true).Select(x => x).Contains(999);
}
Method Runtime Mean Ratio Allocated Alloc Ratio
AppendContains .NET 9.0 2,931.97 ns 1.00 88 B 1.00
AppendContains .NET 10.0 52.06 ns 0.02 56 B 0.64
ConcatContains .NET 9.0 3,065.17 ns 1.00 88 B 1.00
ConcatContains .NET 10.0 54.58 ns 0.02 56 B 0.64
DefaultIfEmptyContains .NET 9.0 39.21 ns 1.00 NA
DefaultIfEmptyContains .NET 10.0 32.89 ns 0.84 NA
DistinctContains .NET 9.0 16,967.31 ns 1.000 58656 B 1.000
DistinctContains .NET 10.0 46.72 ns 0.003 64 B 0.001
OrderByContains .NET 9.0 12,884.28 ns 1.000 12280 B 1.000
OrderByContains .NET 10.0 50.14 ns 0.004 88 B 0.007
ReverseContains .NET 9.0 479.59 ns 1.00 4072 B 1.00
ReverseContains .NET 10.0 51.80 ns 0.11 48 B 0.01
UnionContains .NET 9.0 16,910.57 ns 1.000 58664 B 1.000
UnionContains .NET 10.0 55.56 ns 0.003 72 B 0.001
SelectManyContains .NET 9.0 2,950.64 ns 1.00 192 B 1.00
SelectManyContains .NET 10.0 60.42 ns 0.02 128 B 0.67
WhereSelectContains .NET 9.0 1,782.05 ns 1.00 104 B 1.00
WhereSelectContains .NET 10.0 260.25 ns 0.15 104 B 1.00

LINQ in .NET 10 also gains some new methods, including Sequence and Shuffle. While the primary purpose of these new methods is not performance, they can have a meaningful impact on performance, due to how they've been implemented and how they integrate with the rest of the optimizations in LINQ. Take Sequence, for example. Sequence is similar to Range, in that it's a source for numbers:

public static IEnumerable<T> Sequence<T>(T start, T endInclusive, T step) where T : INumber<T>

Whereas Range only works with int and produces a contiguous series of non-overflowing numbers starting at the initial value, Sequence works with any INumber<T>, supports step values other than 1 (including negative values), and allows for wrapping around T's maximum or minimum. However, when appropriate (e.g. step is 1), Sequence will try to utilize Range's implementation, which has internally been updated to work with any T : INumber<T>, even though its public API is still tied to int. That means that all of the optimizations afforded to Range<T> propagate to Sequence<T>.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private List<short> _values = new();

    [Benchmark(Baseline = true)]
    public void Fill1()
    {
        _values.Clear();
        for (short i = 42; i <= 1042; i++)
        {
            _values.Add(i);
        }
    }

    [Benchmark]
    public void Fill2()
    {
        _values.Clear();
        _values.AddRange(Enumerable.Sequence<short>(42, 1042, 1));
    }
}
Method Mean Ratio
Fill1 1,479.99 ns 1.00
Fill2 37.42 ns 0.03

My favorite new LINQ method, though, is Shuffle (introduced in dotnet/runtime#112173), in part because it’s very handy, but in part because of its implementation and performance focus. The purpose of Shuffle is to randomize the source input, and logically, it’s akin to a very simple implementation:

public static IEnumerable<T> Shuffle<T>(IEnumerable<T> source)
{
    T[] arr = source.ToArray();
    Random.Shared.Shuffle(arr);
    foreach (T item in arr) yield return item;
}

Worst case, this implementation is effectively what’s in LINQ. Just as in the worst case OrderBy needs to buffer up the whole input because it’s possible any item might be the smallest and thus need to be yielded first, Shuffle similarly needs to support the possibility that the last element should probabilistically be yielded first. However, there are a variety of special-cases in the implementation that allow it to perform significantly better than such a hand-rolled Shuffle implementation you might be using today.

First, Shuffle has some of the same characteristics as OrderBy, in that they’re both creating permutations of the input. That means that many of the ways we can specialize subsequent operations on the result of an OrderBy also apply to Shuffle. For example, Shuffle.First on an IList<T> can just select an element from the list at random. Shuffle.Count can just count the underlying source, since the order of the elements is irrelevant to the result. Shuffle.Contains can just perform the contains on the underlying source. Etc. But my two favorite sequences are Shuffle.Take and Shuffle.Take.Contains.

Shuffle.Take provides an interesting optimization opportunity: whereas with Shuffle by itself we need to build the whole shuffled sequence, with a Shuffle followed immediately by a Take(N), we only need to sample N items from the source. We still need those N items to be a uniformly random distribution, akin to what we’d get if we performed the buffering shuffle and then selected the first N items in the resulting array, but we can do so using an algorithm that allows us to avoid buffering everything. We need an algorithm that will let us iterate through the source data once, picking out elements as we go, and only ever buffering N items at a time. Enter “reservoir sampling.” I previously discussed reservoir sampling in Performance Improvements in .NET 8, as it’s employed by the JIT as part of its dynamic PGO implementation, and we can use the algorithm here in Shuffle as well. Reservoir sampling provides exactly the single-pass, low-memory path we want: initialize a “reservoir” (an array) with the first N items, then as we scan the rest of the sequence, probabilistically overwrite one of the elements in our reservoir with the current item. The algorithm ensures that every element ends up in the reservoir with equal probability, yielding the same distribution as fully shuffling and taking N, but using only O(N) space and only making a single pass over an otherwise unknown-length source.
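
Here's a minimal sketch of that algorithm (the classic "Algorithm R"; illustrative only, as the actual LINQ implementation differs in its details):

static T[] SampleReservoir<T>(IEnumerable<T> source, int n, Random random)
{
    T[] reservoir = new T[n];
    int seen = 0;

    foreach (T item in source)
    {
        if (seen < n)
        {
            // Fill the reservoir with the first n elements.
            reservoir[seen] = item;
        }
        else
        {
            // Replace a random reservoir slot with probability n / (seen + 1), which
            // leaves every element equally likely to end up in the final sample.
            int j = random.Next(seen + 1);
            if (j < n)
            {
                reservoir[j] = item;
            }
        }

        seen++;
    }

    if (seen < n)
    {
        Array.Resize(ref reservoir, seen); // the source had fewer than n elements
    }

    return reservoir;
}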

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _source = Enumerable.Range(1, 1000).ToList();

    [Benchmark(Baseline = true)]
    public List<int> ShuffleTakeManual() => ShuffleManual(_source).Take(10).ToList();

    [Benchmark]
    public List<int> ShuffleTakeLinq() => _source.Shuffle().Take(10).ToList();

    private static IEnumerable<int> ShuffleManual(IEnumerable<int> source)
    {
        int[] arr = source.ToArray();
        Random.Shared.Shuffle(arr);
        foreach (var item in arr)
        {
            yield return item;
        }
    }
}
Method Mean Ratio Allocated Alloc Ratio
ShuffleTakeManual 4.150 us 1.00 4232 B 1.00
ShuffleTakeLinq 3.801 us 0.92 192 B 0.05

Shuffle.Take.Contains is even more fun. We now have a probability problem that reads like a brain teaser or an SAT question. “I have totalCount items of which equalCount match my target value, and we’re going to pick takeCount items at random. What is the probability that at least one of those takeCount items is one of the equalCount items?” This is called a hypergeometric distribution, and we can use an implementation of it for Shuffle.Take.Contains.

To make this easier to reason about, let’s talk candy. Imagine you have a jar of 100 jelly beans, of which 20 are your favorite flavor, Watermelon, and you’re going to pick 5 of the 100 beans at random; what are the chances you get at least one Watermelon? To solve this, we could reason through all the different ways we might get 1, 2, 3, 4, or 5 Watermelons, but instead, let’s do the opposite and think through how likely it is that we don’t get any (sad panda):

  • The chance that our first pick isn’t a Watermelon is the number of non-Watermelons divided by the total number of beans, so (100-20)/100.
  • Once we’ve picked a bean out of the jar, we’re not putting it back, so the chance that our second pick isn’t a Watermelon is now (99-20)/99 (we have one fewer bean, but our first pick wasn’t a Watermelon, so there’s the same number of Watermelons as there was before).
  • For a third pick, it’s now (98-20)/98.
  • And so on.

After five rounds, we end up with (80/100) * (79/99) * (78/98) * (77/97) * (76/96), which is ~32%. If the chances I don’t get a Watermelon are ~32%, then the chances I do get a Watermelon are ~68%. Jelly beans aside, that’s our algorithm:

double probOfDrawingZeroMatches = 1;
for (long i = 0; i < _takeCount; i++)
{
    probOfDrawingZeroMatches *= (double)(totalCount - i - equalCount) / (totalCount - i);
}

return Random.Shared.NextDouble() > probOfDrawingZeroMatches;

The net effect is we can compute the answer much more efficiently than with a naive implementation that shuffles and then separately takes and separately contains.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _source = Enumerable.Range(1, 1000).ToList();

    [Benchmark(Baseline = true)]
    public bool ShuffleTakeContainsManual() => ShuffleManual(_source).Take(10).Contains(2000);

    [Benchmark]
    public bool ShuffleTakeContainsLinq() => _source.Shuffle().Take(10).Contains(2000);

    private static IEnumerable<int> ShuffleManual(IEnumerable<int> source)
    {
        int[] arr = source.ToArray();
        Random.Shared.Shuffle(arr);
        foreach (var item in arr)
        {
            yield return item;
        }
    }
}
Method Mean Ratio Allocated Alloc Ratio
ShuffleTakeContainsManual 3,900.99 ns 1.00 4136 B 1.00
ShuffleTakeContainsLinq 79.12 ns 0.02 96 B 0.02

LINQ in .NET 10 also sports some new methods that are about performance (at least in part), in particular LeftJoin and RightJoin, from dotnet/runtime#110872. I say these are about performance because it’s already possible to achieve the left and right join semantics using existing LINQ surface area, and the new methods do it more efficiently.

Enumerable.Join implements an “inner join,” meaning only matching pairs from the two supplied collections appear in the output. For example, this code, which is joining based on the first letter in each string:

IEnumerable<string> left = ["apple", "banana", "cherry", "date", "grape", "honeydew"];
IEnumerable<string> right = ["aardvark", "dog", "elephant", "goat", "gorilla", "hippopotamus"];
foreach (string result in left.Join(right, s => s[0], s => s[0], (s1, s2) => $"{s1} {s2}"))
{
    Console.WriteLine(result);
}

outputs:

apple aardvark
date dog
grape goat
grape gorilla
honeydew hippopotamus

In contrast, a “left join” (also known as a “left outer join”) would yield the following:

apple aardvark
banana
cherry
date dog
grape goat
grape gorilla
honeydew hippopotamus

Note that it has all of the same output as with the "inner join," except it has at least one row for every left element, even if there's no matching element from the right source. And then a "right join" (also known as a "right outer join") would yield the following:

apple aardvark
date dog
 elephant
grape goat
grape gorilla
honeydew hippopotamus

Again, all the same output as with the "inner join," except it has at least one row for every right element, even if there's no matching element from the left source.

Prior to .NET 10, there was no LeftJoin or RightJoin, but their semantics could be achieved using a combination of GroupJoin, SelectMany, and DefaultIfEmpty:

public static IEnumerable<TResult> LeftJoin<TOuter, TInner, TKey, TResult>(
    this IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
    Func<TOuter, TKey> outerKeySelector, Func<TInner, TKey> innerKeySelector,
    Func<TOuter, TInner?, TResult> resultSelector) =>
    outer
    .GroupJoin(inner, outerKeySelector, innerKeySelector, (o, inners) => (o, inners))
    .SelectMany(x => x.inners.DefaultIfEmpty(), (x, i) => resultSelector(x.o, i));

GroupJoin creates a group for each outer (“left”) element, where the group contains all matching items from inner (“right”). We can flatten those results by using SelectMany, such that we end up with an output for each pairing, using DefaultIfEmpty to ensure that there’s always at least a default inner element to pair. We can do the exact same thing for a RightJoin: in fact, we can implement the right join just by delegating to the left join and flipping all the arguments:

public static IEnumerable<TResult> RightJoin<TOuter, TInner, TKey, TResult>(
    this IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
    Func<TOuter, TKey> outerKeySelector, Func<TInner, TKey> innerKeySelector,
    Func<TOuter, TInner?, TResult> resultSelector) =>
    inner.LeftJoin(outer, innerKeySelector, outerKeySelector, (i, o) => resultSelector(o, i));

Thankfully, you no longer need to do that yourself, and this isn’t how the new LeftJoin and RightJoin methods are implemented in .NET 10. We can see the difference with a benchmark:

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Linq;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> Outer { get; } = Enumerable.Sequence(0, 1000, 2);
    private IEnumerable<int> Inner { get; } = Enumerable.Sequence(0, 1000, 3);

    [Benchmark(Baseline = true)]
    public void LeftJoin_Manual() =>
        ManualLeftJoin(Outer, Inner, o => o, i => i, (o, i) => o + i).Count();

    [Benchmark]
    public int LeftJoin_Linq() =>
        Outer.LeftJoin(Inner, o => o, i => i, (o, i) => o + i).Count();

    private static IEnumerable<TResult> ManualLeftJoin<TOuter, TInner, TKey, TResult>(
        IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
        Func<TOuter, TKey> outerKeySelector, Func<TInner, TKey> innerKeySelector,
        Func<TOuter, TInner?, TResult> resultSelector) =>
        outer
        .GroupJoin(inner, outerKeySelector, innerKeySelector, (o, inners) => (o, inners))
        .SelectMany(x => x.inners.DefaultIfEmpty(), (x, i) => resultSelector(x.o, i));
}
Method Mean Ratio Allocated Alloc Ratio
LeftJoin_Manual 29.02 us 1.00 65.84 KB 1.00
LeftJoin_Linq 15.23 us 0.53 36.95 KB 0.56

Moving on from new methods, existing methods were also improved in other ways. dotnet/runtime#112401 from @miyaji255 improved the performance of ToArray and ToList following Skip and/or Take calls. In the specialized iterator implementation used for Take and Skip, this PR simply checks in the ToList and ToArray implementations whether the source is something from which we can easily get a ReadOnlySpan<T> (namely a T[] or List<T>). If it is, rather than copying elements one by one into the destination, it can slice the retrieved span and use its CopyTo, which, depending on the T, may even be vectorized.
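
As an illustrative sketch of the idea (not the actual LINQ internals): when the Skip/Take source is a T[] or List<T>, ToArray can slice a span over the source and do a single bulk copy rather than copying element by element. Bounds handling is simplified here for brevity.

static T[] SkipTakeToArray<T>(T[] source, int skip, int take)
{
    // Slice the source span to the requested window and bulk-copy it, which may be vectorized.
    ReadOnlySpan<T> slice = source.AsSpan(skip, Math.Min(take, source.Length - skip));
    T[] result = new T[slice.Length];
    slice.CopyTo(result);
    return result;
}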

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private readonly IEnumerable<string> _source = Enumerable.Range(0, 1000).Select(i => i.ToString()).ToArray();

    [Benchmark]
    public List<string> SkipTakeToList() => _source.Skip(200).Take(200).ToList();
}
Method Runtime Mean Ratio
SkipTakeToList .NET 9.0 1,218.9 ns 1.00
SkipTakeToList .NET 10.0 257.4 ns 0.21

LINQ in .NET 10 also sees a few notable enhancements for Native AOT. The code for LINQ has grown over time, as all of these various specializations have found their way into the codebase. These optimizations are generally implemented by deriving specialized iterators from a base Iterator<T>, which has a bunch of abstract or virtual methods for performing the subsequent operation (e.g. Contains). With Native AOT, any use of a method like Enumerable.Contains then prevents the corresponding implementations on all of those specializations from being trimmed away, leading to a non-trivial increase in assembly code size. As such, years ago multiple builds of System.Linq.dll were introduced into the dotnet/runtime build system: one focused on speed, and one focused on size. When building System.Linq.dll to go with coreclr, you’d end up with the speed-optimized build that has all of these specializations. When building System.Linq.dll to go with other flavors, like Native AOT, you’d instead get the size-optimized build, which eschews many of the LINQ optimizations that have been added in the last decade. And as this was a build-time decision, developers using one of these platforms didn’t get a choice; as you learn in kindergarten, “you get what you get and you don’t get upset.” Now in .NET 10, if you do forget what you learned in kindergarten and you do get upset, you have recourse: thanks to dotnet/runtime#111743 and dotnet/runtime#109978, this setting is now a feature switch rather than a build-time configuration. So, in particular, if you’re publishing for Native AOT and you’d prefer all the speed-focused optimizations, you can add <UseSizeOptimizedLinq>false</UseSizeOptimizedLinq> to your project file and be happy.
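
For example, in a project published with Native AOT, that might look like the following (a minimal sketch; PublishAot is included only for context, and UseSizeOptimizedLinq is the property described above):

<PropertyGroup>
  <PublishAot>true</PublishAot>
  <!-- Opt back into the speed-optimized LINQ build. -->
  <UseSizeOptimizedLinq>false</UseSizeOptimizedLinq>
</PropertyGroup>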

However, the need for that switch is now also reduced significantly by dotnet/runtime#118156. When this size/speed split was previously introduced into the System.Linq.dll build, all of these specializations were eschewed, without a lot of analysis of the tradeoffs involved; as this was focused on optimizing for size, any specialized overrides were removed, no matter how much space they actually saved. Many of those savings turned out to be minimal, however, and in a variety of situations, the throughput cost was significant. This PR brings back some of the more impactful specializations where the throughput gains significantly outweigh the relatively minimal size cost.

Frozen Collections

The FrozenDictionary<TKey, TValue> and FrozenSet<T> collection types were introduced in .NET 8 as collections optimized for the common scenario of creating a long-lived collection that’s then read from a lot. They spend more time at construction in exchange for faster read operations. Under the covers, this is achieved in part by having specializations of the implementations that are optimized for different types of data or shapes of input. .NET 9 improved upon the implementations, and .NET 10 takes it even further.

FrozenDictionary<TKey, TValue> expends a lot of energy for TKey as string, as that is such a common use case. It also has specializations for TKey as Int32. dotnet/runtime#111886 and dotnet/runtime#112298 extend that further by adding specializations for when TKey is any primitive integral type that’s the size of an int or smaller (e.g. byte, char, ushort, etc.) as well as enums backed by such primitives (which represent the vast, vast majority of enums used in practice). In particular, they handle the common case where these keys are densely packed, in which case they implement the dictionary as an array that can be indexed into directly based on the integer’s value. This makes for a very efficient lookup, while not consuming too much additional space: it’s only used when the keys are dense and thus won’t waste many empty slots in the array.
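
Conceptually, that dense specialization is akin to the following sketch (illustrative only, not FrozenDictionary’s actual implementation): store the values in an array offset by the minimum key, so a lookup is a subtraction, a bounds check, and an index.

using System;
using System.Collections.Generic;

var source = new Dictionary<int, string> { [200] = "OK", [201] = "Created", [204] = "NoContent" };
var lookup = new DenseInt32Lookup<string>(source, minKey: 200, maxKey: 204);
Console.WriteLine(lookup.TryGetValue(201, out string? name) ? name : "not found"); // "Created"

sealed class DenseInt32Lookup<TValue>
{
    private readonly int _minKey;
    private readonly bool[] _present;
    private readonly TValue[] _values;

    public DenseInt32Lookup(IReadOnlyDictionary<int, TValue> source, int minKey, int maxKey)
    {
        // One slot per possible key in [minKey, maxKey]; dense keys keep the waste small.
        _minKey = minKey;
        _present = new bool[maxKey - minKey + 1];
        _values = new TValue[_present.Length];
        foreach (KeyValuePair<int, TValue> pair in source)
        {
            _present[pair.Key - minKey] = true;
            _values[pair.Key - minKey] = pair.Value;
        }
    }

    public bool TryGetValue(int key, out TValue? value)
    {
        int index = key - _minKey;
        if ((uint)index < (uint)_present.Length && _present[index])
        {
            value = _values[index];
            return true;
        }

        value = default;
        return false;
    }
}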

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.Frozen;
using System.Net;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "status")]
public partial class Tests
{
    private static readonly FrozenDictionary<HttpStatusCode, string> s_statusDescriptions =
        Enum.GetValues<HttpStatusCode>().Distinct()
            .ToFrozenDictionary(status => status, status => status.ToString());

    [Benchmark]
    [Arguments(HttpStatusCode.OK)]
    public string Get(HttpStatusCode status) => s_statusDescriptions[status];
}
Method Runtime Mean Ratio
Get .NET 9.0 2.0660 ns 1.00
Get .NET 10.0 0.8735 ns 0.42

Both FrozenDictionary<TKey, TValue> and FrozenSet<T> also improve with regard to the alternate lookup functionality introduced in .NET 9. Alternate lookups are a mechanism that enables getting a proxy for a dictionary or set that’s keyed on a different type than TKey, most commonly a ReadOnlySpan<char> when TKey is string. As noted, both FrozenDictionary<TKey, TValue> and FrozenSet<T> achieve their goals by having different implementations based on the nature of the indexed data, and that specialization is achieved by virtual methods that derived specializations override. The JIT is typically able to minimize the costs of such virtuals, especially if the collections are stored in static readonly fields. However, the alternate lookup support complicated things, as it introduced a virtual method with a generic method parameter (the alternate key type), otherwise known as a GVM. “GVM” might as well be a four-letter word in performance circles, as they’re hard for the runtime to optimize. The purpose of these alternate lookups is primarily performance, but the use of a GVM significantly reduced those performance gains. dotnet/runtime#108732 from @andrewjsaid addresses this by changing the frequency with which a GVM needs to be invoked. Rather than the lookup operation itself being a generic virtual method, the PR introduces a separate generic virtual method that retrieves a delegate for performing the lookup; the retrieval of that delegate still incurs GVM penalties, but once the delegate is retrieved, it can be cached, and invoking it does not incur said overheads. This results in measurable throughput improvements.
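
The general technique is broadly applicable: if a GVM must be involved, invoke it once to obtain a delegate, cache that, and invoke the delegate on the hot path. Here’s a contrived sketch of that pattern (illustrative only, not FrozenDictionary’s actual internals):

using System;
using System.Collections.Generic;

Container container = new Int32Container();

// One GVM dispatch to retrieve the delegate...
Func<int, bool> contains = container.GetContainsDelegate<int>();

// ...then the hot path invokes a plain delegate, with no GVM cost per call.
for (int i = 0; i < 5; i++)
{
    Console.WriteLine(contains(i));
}

abstract class Container
{
    // A generic virtual method (GVM), expensive for the runtime to dispatch.
    public abstract bool Contains<T>(T value);

    // A GVM that's invoked once to hand back a cheap-to-invoke delegate.
    public abstract Func<T, bool> GetContainsDelegate<T>();
}

sealed class Int32Container : Container
{
    private readonly HashSet<int> _values = new() { 1, 2, 3 };

    public override bool Contains<T>(T value) => value is int i && _values.Contains(i);

    public override Func<T, bool> GetContainsDelegate<T>() =>
        value => value is int i && _values.Contains(i);
}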

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.Frozen;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly FrozenDictionary<string, int> s_d = new Dictionary<string, int> 
    {
        ["one"] = 1, ["two"] = 2, ["three"] = 3, ["four"] = 4, ["five"] = 5, ["six"] = 6, 
        ["seven"] = 7, ["eight"] = 8, ["nine"] = 9, ["ten"] = 10, ["eleven"] = 11, ["twelve"] = 12,
    }.ToFrozenDictionary();

    [Benchmark]
    public int Get()
    {
        var alternate = s_d.GetAlternateLookup<ReadOnlySpan<char>>();
        return
            alternate["one"] + alternate["two"] + alternate["three"] + alternate["four"] + alternate["five"] +
            alternate["six"] + alternate["seven"] + alternate["eight"] + alternate["nine"] + alternate["ten"] + 
            alternate["eleven"] + alternate["twelve"];
    }
}
Method Runtime Mean Ratio
Get .NET 9.0 133.46 ns 1.00
Get .NET 10.0 81.39 ns 0.61

BitArray

BitArray provides support for exactly what its name says, a bit array. You create it with the desired number of values and can then read and write a bool for each index, turning the corresponding bit to 1 or 0 accordingly. It also provides a variety of helper operations for processing the whole bit array, such as for Boolean logic operations like And and Not. Where possible, those operations are vectorized, taking advantage of SIMD to process many bits per instruction.

However, for situations where you want to write custom manipulations of the bits, you only have two options: use the indexer (or corresponding Get and Set methods), which means multiple instructions required to process each bit, or use CopyTo to extract all of the bits to a separate array, which means you need to allocate (or at least rent) such an array and pay for the memory copy before you can then manipulate the bits. There’s also not a great way to then copy those bits back if you wanted to manipulate the BitArray in place.

dotnet/runtime#116308 adds a CollectionsMarshal.AsBytes(BitArray) method that returns a Span<byte> directly referencing the BitArray‘s underlying storage. This provides a very efficient way to get access to all the bits, which then makes it possible to write (or reuse) vectorized algorithms. Say, for example, you wanted to use a BitArray to represent a binary embedding (an “embedding” is a vector representation of the semantic meaning of some data, basically an array of numbers, each one corresponding to some aspect of the data; a binary embedding uses a single bit for each number). To determine how semantically similar two inputs are, you get an embedding for each and then perform a distance or similarity calculation on the two. For binary embeddings, a common distance metric is “hamming distance,” which effectively lines up the bits and tells you the number of positions that have different values, e.g. 0b1100 and 0b1010 have a hamming distance of 2. Helpfully, TensorPrimitives.HammingBitDistance provides an implementation of this, accepting two ReadOnlySpan<T>s and computing the number of bits that differ between them. With CollectionsMarshal.AsBytes, we can now utilize that helper directly with the contents of BitArrays, both saving us the effort of having to write it manually and benefiting from any optimizations in HammingBitDistance itself.

// Update benchmark.csproj with a package reference to System.Numerics.Tensors.
// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections;
using System.Numerics.Tensors;
using System.Runtime.InteropServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private BitArray _bits1, _bits2;

    [GlobalSetup]
    public void Setup()
    {
        Random r = new(42);
        byte[] bytes = new byte[128];

        r.NextBytes(bytes);
        _bits1 = new BitArray(bytes);

        r.NextBytes(bytes);
        _bits2 = new BitArray(bytes);
    }

    [Benchmark(Baseline = true)]
    public long HammingDistanceManual()
    {
        long distance = 0;
        for (int i = 0; i < _bits1.Length; i++)
        {
            if (_bits1[i] != _bits2[i])
            {
                distance++;
            }
        }

        return distance;
    }

    [Benchmark]
    public long HammingDistanceTensorPrimitives() =>
        TensorPrimitives.HammingBitDistance(
            CollectionsMarshal.AsBytes(_bits1),
            CollectionsMarshal.AsBytes(_bits2));
}
Method Mean Ratio
HammingDistanceManual 1,256.72 ns 1.00
HammingDistanceTensorPrimitives 63.29 ns 0.05
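
The new span access is also handy for smaller one-off manipulations. As a quick illustration (my own example, not from the PR), counting the set bits in a BitArray becomes a simple loop over the raw bytes:

using System.Collections;
using System.Numerics;
using System.Runtime.InteropServices;

BitArray bits = new(new byte[] { 0b1010_1010, 0xFF });

// Sum the popcount of each underlying byte exposed by CollectionsMarshal.AsBytes.
int setBits = 0;
foreach (byte b in CollectionsMarshal.AsBytes(bits))
{
    setBits += BitOperations.PopCount(b);
}

Console.WriteLine(setBits); // 12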

The main motivation for this PR was adding the AsBytes method, but doing so triggered a series of other modifications that themselves help with performance. For example, rather than backing the BitArray with an int[] as was previously done, it’s now backed by a byte[], and rather than reading elements one by one in the byte[]-based constructor, vectorized copy operations are now being used (they were already being used and continue to be used in the int[]-based constructor).

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _byteData = Enumerable.Range(0, 512).Select(i => (byte)i).ToArray();

    [Benchmark]
    public BitArray ByteCtor() => new BitArray(_byteData);
}
Method Runtime Mean Ratio
ByteCtor .NET 9.0 160.10 ns 1.00
ByteCtor .NET 10.0 83.07 ns 0.52

Other Collections

There are a variety of other notable improvements in collections:

  • List<T>. dotnet/runtime#107683 from @karakasa builds on a change that was made in .NET 9 to improve the performance of using InsertRange on a List<T> to insert a ReadOnlySpan<T>. When a full List<T> is appended to, the typical process is that a new, larger array is allocated, all of the existing elements are copied over (one array copy), and then the new element is stored into the array in the next available slot. If that same growth routine is used when inserting rather than appending an element, you possibly end up copying some elements twice: you first copy over all of the elements into the new array, and then to handle the insert, you may again need to copy some of the elements you already copied as part of shifting them to make room for the insertion at the new location. In the extreme, if you’re inserting at index 0, you copy all of the elements into the new array, and then you copy all of the elements again to shift them by one slot. The same applies when inserting a range of elements. With this PR, rather than first copying over all of the elements and then shifting a subset, List<T> now grows by copying the elements below and above the insertion point directly to their final locations in the new array and then filling in the target range with the inserted elements.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private readonly int[] _data = [1, 2, 3, 4];
    
        [Benchmark]
        public List<int> Test()
        {
            List<int> list = new(4);
            list.AddRange(_data);
            list.InsertRange(0, _data);
            return list;
        }
    }
    Method Runtime Mean Ratio
    Test .NET 9.0 48.65 ns 1.00
    Test .NET 10.0 30.07 ns 0.62
  • ConcurrentDictionary<TKey, TValue>. dotnet/runtime#108065 from @koenigst changes how a ConcurrentDictionary‘s backing array is sized when it’s cleared. ConcurrentDictionary is implemented with an array of linked lists, and when the collection is constructed, a constructor parameter allows for presizing that array. Due to the concurrent nature of the dictionary and its implementation, Clear‘ing it necessitates creating a new array rather than just using part of the old one. Previously, when that new array was created, it reverted to the default size. This PR tweaks that to remember the initial capacity requested by the user and to use that initial size again when constructing the new array.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Collections.Concurrent;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [MemoryDiagnoser(displayGenColumns: false)]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private ConcurrentDictionary<int, int> _data = new(concurrencyLevel: 1, capacity: 1024);
    
        [Benchmark]
        public void ClearAndAdd()
        {
            _data.Clear();
            for (int i = 0; i < 1024; i++)
            {
                _data.TryAdd(i, i);
            }
        }
    }
    Method Runtime Mean Ratio Allocated Alloc Ratio
    ClearAndAdd .NET 9.0 51.95 us 1.00 134.36 KB 1.00
    ClearAndAdd .NET 10.0 30.32 us 0.58 48.73 KB 0.36
  • Dictionary<TKey, TValue>. Dictionary is one of the most popular collection types across .NET, and TKey == string is one of (if not the) most popular forms. dotnet/runtime#117427 makes dictionary lookups with constant strings much faster. You might expect it would be a complicated change, but it ends up being just a few strategic tweaks. A variety of methods for operating on strings are already known to the JIT and already have optimized implementations for when dealing with constants. All this PR needed to do was change which methods Dictionary<TKey, TValue> was using in its optimized TryGetValue lookup path, and because that path is often inlined, a constant argument to TryGetValue can be exposed as a constant to these helpers, e.g. string.Equals.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private Dictionary<string, int> _data = new() { ["a"] = 1, ["b"] = 2, ["c"] = 3, ["d"] = 4, ["e"] = 5 };
    
        [Benchmark]
        public int Get() => _data["a"] + _data["b"] + _data["c"] + _data["d"] + _data["e"];
    }
    Method Runtime Mean Ratio
    Get .NET 9.0 33.81 ns 1.00
    Get .NET 10.0 14.02 ns 0.41
  • OrderedDictionary<TKey, TValue>. dotnet/runtime#109324 adds new overloads of TryAdd and TryGetValue that provide the index of the added or retrieved element in the collection. This index can then be used in subsequent operations on the dictionary to access the same slot. For example, if you want to implement an AddOrUpdate operation on top of OrderedDictionary, you need to perform one or two operations: first trying to add the item, and then, if it’s found to already exist, updating it. That update can benefit from targeting the exact index that contains the element rather than needing to do another keyed lookup.
    // dotnet run -c Release -f net10.0 --filter "*"
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private OrderedDictionary<string, int> _dictionary = new();
    
        [Benchmark(Baseline = true)]
        public void Old() => AddOrUpdate_Old(_dictionary, "key", k => 1, (k, v) => v + 1);
    
        [Benchmark]
        public void New() => AddOrUpdate_New(_dictionary, "key", k => 1, (k, v) => v + 1);
    
        private static void AddOrUpdate_Old(OrderedDictionary<string, int> d, string key, Func<string, int> addFunc, Func<string, int, int> updateFunc)
        {
            if (d.TryGetValue(key, out int existing))
            {
                d[key] = updateFunc(key, existing);
            }
            else
            {
                d.Add(key, addFunc(key));
            }
        }
    
        private static void AddOrUpdate_New(OrderedDictionary<string, int> d, string key, Func<string, int> addFunc, Func<string, int, int> updateFunc)
        {
            if (d.TryGetValue(key, out int existing, out int index))
            {
                d.SetAt(index, updateFunc(key, existing));
            }
            else
            {
                d.Add(key, addFunc(key));
            }
        }
    }
    Method Mean Ratio
    Old 6.961 ns 1.00
    New 4.201 ns 0.60
  • ImmutableArray<T>. The ImmutableCollectionsMarshal class already exposes an AsArray method that enables retrieving the backing T[] from an ImmutableArray<T>. However, if you had an ImmutableArray<T>.Builder, there was previously no way to access the backing store it was using. dotnet/runtime#112177 enables doing so, with an AsMemory method that retrieves the underlying storage as a Memory<T>.
  • InlineArray. .NET 8 introduced InlineArrayAttribute, which can be used to attribute a struct containing a single field; the attribute accepts a count, and the runtime replicates the struct’s field that number of times, as if you’d logically copy/pasted the field repeatedly. The runtime also ensures that the storage is contiguous and appropriately aligned, such that if you had an indexable collection that pointed to the beginning of the struct, you could use it as an array. And it so happens such a collection exists: Span<T>. C# 12 then makes it easy to treat any such attributed struct as a span, e.g.
    [InlineArray(8)]
    internal struct EightStrings
    {
        private string _field;
    }
    ...
    EightStrings strings = default;
    Span<string> span = strings;

    The C# compiler will itself emit code that uses this capability. For example, if you use collection expressions to initialize a span, you’re likely triggering the compiler to emit an InlineArray. When I write this:

    public void M(int a, int b, int c, int d) 
    {
        Span<int> span = [a, b, c, d];
    }

    the compiler emits something like the following equivalent:

    public void M(int a, int b, int c, int d)
    {
        <>y__InlineArray4<int> buffer = default(<>y__InlineArray4<int>);
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 0) = a;
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 1) = b;
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 2) = c;
        <PrivateImplementationDetails>.InlineArrayElementRef<<>y__InlineArray4<int>, int>(ref buffer, 3) = d;
        Span<int> span = <PrivateImplementationDetails>.InlineArrayAsSpan<<>y__InlineArray4<int>, int>(ref buffer, 4);
    }

    where it has defined that <>y__InlineArray4 like this:

    [StructLayout(LayoutKind.Auto)]
    [InlineArray(4)]
    internal struct <>y__InlineArray4<T>
    {
        [CompilerGenerated]
        private T _element0;
    }

    This shows up elsewhere, too. For example, C# 13 introduced support for using params with collections other than arrays, including spans, so now I can write this:

    public void Caller(int a, int b, int c, int d) => M(a, b, c, d);
    
    public void M(params ReadOnlySpan<int> span) { }

    and for Caller we’ll see very similar code emitted to what I previously showed, with the compiler manufacturing such an InlineArray type. As you might imagine, the popularity of the features that cause the compiler to produce these types has caused there to be a lot of them emitted. Each type is specific to a particular length, so while the compiler will reuse them, a) it can end up needing to emit a lot to cover different lengths, and b) it emits them as internal to each assembly that needs them, so there can end up being a lot of duplication. Looking just at the shared framework for .NET 9 (the core libraries like System.Private.CoreLib that ship as part of the runtime), there are ~140 of these types… all of which are for sizes no larger than 8. For .NET 10, dotnet/runtime#113403 adds a set of public InlineArray2<T>, InlineArray3<T>, etc., that should cover the vast majority of sizes for which the compiler would otherwise need to emit types. In the near future, the C# compiler will be updated to use those new types when available instead of emitting its own, thereby yielding non-trivial size savings.

I/O

In previous .NET releases, concerted efforts have gone into improving specific areas of I/O performance, such as completely rewriting FileStream in .NET 6. Nothing as comprehensive as that was done for I/O in .NET 10, but there are some nice one-off improvements that can still have a measurable impact on certain scenarios.

On Unix, when a MemoryMappedFile is created and it’s not associated with a particular FileStream, it needs to create some kind of backing memory for the MMF’s data. On Linux, it’d try to use shm_open, which creates a shared memory object with appropriate semantics. However, in the years since MemoryMappedFile was initially enabled on Linux, the Linux kernel has added support for anonymous files and the memfd_create function that creates them. These are ideal for MemoryMappedFile and much more efficient, so dotnet/runtime#105178 from @am11 switches over to using memfd_create when it’s available.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.MemoryMappedFiles;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public void MMF()
    {
        using MemoryMappedFile mff = MemoryMappedFile.CreateNew(null, 12345);
        using MemoryMappedViewAccessor accessor = mff.CreateViewAccessor();
    }
}
Method Runtime Mean Ratio
MMF .NET 9.0 9.916 us 1.00
MMF .NET 10.0 6.358 us 0.64

FileSystemWatcher is improved in dotnet/runtime#116830. The primary purpose for this PR was to fix a memory leak, where on Windows disposing of a FileSystemWatcher while it was in use could end up leaking some objects. However, it also addresses a performance issue specific to Windows. FileSystemWatcher needs to pass a buffer to the OS for the OS to populate with file-changed information. That meant that FileSystemWatcher was allocating a managed array and then immediately pinning that buffer so it could pass a pointer to it into native code. For certain consumption of FileSystemWatcher, especially in scenarios where lots of FileSystemWatcher instances are created, that pinning could contribute to non-trivial heap fragmentation. Interestingly, though, this array is effectively never consumed as an array: all of the writes into it are performed in native code via the pointer that was passed to the OS, and all consumption of it in managed code to read out the events is done via a span. That means the array nature of it doesn’t really matter, and we’re better off just allocating a native buffer rather than a managed one that then requires pinning.

// Run on Windows.
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public void FSW()
    {
        using FileSystemWatcher fsw = new(Environment.CurrentDirectory);
        fsw.EnableRaisingEvents = true;
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
FSW .NET 9.0 61.46 us 1.00 8944 B 1.00
FSW .NET 10.0 61.21 us 1.00 744 B 0.08

BufferedStream gets a boost from dotnet/runtime#104822 from @ANahr. There is a curious and problematic inconsistency in BufferedStream that’s been there since, well, forever as far as I can tell. It’s obviously been revisited in the past, and due to the super duper strong backwards compatibility concerns for .NET Framework (where a key feature is that the framework doesn’t change), the issue was never fixed. There’s even a comment in the code to this point:

// We should not be flushing here, but only writing to the underlying stream, but previous version flushed, so we keep this.

A BufferedStream does what its name says. It wraps an underlying Stream and buffers access to it. So, for example, if it were configured with a buffer size of 1000, and you wrote 100 bytes to the BufferedStream at a time, your first 10 writes would just go to the buffer and the underlying Stream wouldn’t be touched at all. Only on the 11th write would the buffer be full and need to be flushed (meaning written) to the underlying Stream. So far, so good. Moreover, there’s a difference between flushing to the underlying stream and flushing the underlying stream. Those sound almost identical, but they’re not: in the former case, we’re effectively calling _stream.Write(buffer) to write the buffer to that stream, and in the latter case, we’re effectively calling _stream.Flush() to force any buffering that stream was doing to propagate it to its underlying destination. BufferedStream really shouldn’t be in the business of doing the latter when Write‘ing to the BufferedStream, and in general it wasn’t… except in one case. Whereas most of the writing-related methods would not call _stream.Flush(), for some reason WriteByte did. In particular for cases where the BufferedStream is configured with a small buffer, and where the underlying stream’s flush is relatively expensive (e.g. DeflateStream.Flush forces any buffered bytes to be compressed and emitted), that can be problematic for performance, nevermind the inconsistency. This change simply fixes the inconsistency, such that WriteByte no longer forces a flush on the underlying stream.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _bytes;

    [GlobalSetup]
    public void Setup()
    {
        _bytes = new byte[1024 * 1024];
        new Random(42).NextBytes(_bytes);
    }

    [Benchmark]
    public void WriteByte()
    {
        using Stream s = new BufferedStream(new DeflateStream(Stream.Null, CompressionLevel.SmallestSize), 256);
        foreach (byte b in _bytes)
        {
            s.WriteByte(b);
        }
    }
}
Method Runtime Mean Ratio
WriteByte .NET 9.0 73.87 ms 1.00
WriteByte .NET 10.0 17.77 ms 0.24

While on the subject of compression, it’s worth calling out several improvements in System.IO.Compression in .NET 10, too. As noted in Performance Improvements in .NET 9, DeflateStream/GZipStream/ZLibStream are managed wrappers around an underlying native zlib library. For a long time, that was the original zlib (madler/zlib). Then it was Intel’s zlib-intel fork (intel/zlib), which is now archived and no longer maintained. In .NET 9, the library switched to using zlib-ng (zlib-ng/zlib-ng), which is a modernized fork that’s well-maintained and optimized for a large number of hardware architectures. .NET 9 is based on zlib-ng 2.2.1. dotnet/runtime#118457 updates it to use zlib-ng 2.2.5. Compared with the 2.2.1 release, there are a variety of performance improvements in zlib-ng itself, which .NET 10 then inherits, such as improved use of AVX2 and AVX512. Most importantly, though, the update includes a revert that undoes a cleanup change in the 2.2.0 release; the original change removed a workaround for a function that had been slow and was found to no longer be slow, but as it turns out, it’s still slow in some circumstances (long, highly compressible data), resulting in a throughput regression. The fix in 2.2.5 puts back the workaround to fix the regression.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _data = new HttpClient().GetByteArrayAsync(@"https://raw.githubusercontent.com/dotnet/runtime-assets/8d362e624cde837ec896e7fff04f2167af68cba0/src/System.IO.Compression.TestData/DeflateTestData/xargs.1").Result;

    [Benchmark]
    public void Compress()
    {
        using ZLibStream z = new(Stream.Null, CompressionMode.Compress);
        for (int i = 0; i < 100; i++)
        {
            z.Write(_data);
        }
    }
}
Method Runtime Mean Ratio
Compress .NET 9.0 202.79 us 1.00
Compress .NET 10.0 70.45 us 0.35

The managed wrapper for zlib also gains some improvements. dotnet/runtime#113587 from @edwardneal improves the case where multiple gzip payloads are being read from the underlying Stream. Due to the nature of the gzip format, multiple complete gzip payloads can be written one after the other, and a single GZipStream can be used to decompress all of them as if they were one. Each time it hit a boundary between payloads, the managed wrapper was throwing away the old interop handles and creating new ones, but it can instead take advantage of reset capabilities in the underlying zlib library, shaving off some cycles associated with freeing and re-allocating the underlying data structures. This is a very biased micro-benchmark (a stream containing 1000 gzip payloads, each of which decompresses into a single byte), highlighting the worst case, but it exemplifies the issue:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private MemoryStream _data;

    [GlobalSetup]
    public void Setup()
    {
        _data = new MemoryStream();
        for (int i = 0; i < 1000; i++)
        {
            using GZipStream gzip = new(_data, CompressionMode.Compress, leaveOpen: true);
            gzip.WriteByte(42);
        }
    }

    [Benchmark]
    public void Decompress()
    {
        _data.Position = 0;
        using GZipStream gzip = new(_data, CompressionMode.Decompress, leaveOpen: true);
        gzip.CopyTo(Stream.Null);
    }
}
Method Runtime Mean Ratio
Decompress .NET 9.0 331.3 us 1.00
Decompress .NET 10.0 104.3 us 0.31

Other components that sit above these streams, like ZipArchive, have also improved. dotnet/runtime#103153 from @edwardneal updates ZipArchive to not rely on BinaryReader and BinaryWriter, avoiding their underlying buffer allocations and having more fine-grained control over how and when exactly data is encoded/decoded and written/read. And dotnet/runtime#102704 from @edwardneal reduces memory consumption and allocation when updating ZipArchives. A ZipArchive update used to be “rewrite the world”: it loaded every entry’s data into memory and rewrote all the file headers, all entry data, and the “central directory” (what the zip format calls its catalog of all the entries in the archive). A large archive would have proportionally large allocation. This PR introduces change tracking plus ordering of entries so that only the portion of the file from the first actually affected entry (or one whose variable‑length metadata/data changed) is rewritten, rather than always rewriting the whole thing. The effects can be significant.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.IO.Compression;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Stream _zip = new MemoryStream();

    [GlobalSetup]
    public void Setup()
    {
        using ZipArchive zip = new(_zip, ZipArchiveMode.Create, leaveOpen: true);

        Random r = new(42);
        for (int i = 0; i < 1000; i++)
        {
            byte[] fileBytes = new byte[r.Next(512, 2048)];
            r.NextBytes(fileBytes);
            using Stream s = zip.CreateEntry($"file{i}.txt").Open();
            s.Write(fileBytes);
        }
    }

    [Benchmark]
    public void Update()
    {
        _zip.Position = 0;
        using ZipArchive zip = new(_zip, ZipArchiveMode.Update, leaveOpen: true);
        zip.GetEntry("file987.txt")?.Delete();
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
Update .NET 9.0 987.8 us 1.00 2173.9 KB 1.00
Update .NET 10.0 354.7 us 0.36 682.22 KB 0.31

(ZipArchive and ZipFile also gain async APIs in dotnet/runtime#114421, a long-requested feature that allows using async I/O while loading, manipulating, and saving zips.)

Finally, somewhere between performance and reliability, dotnet/roslyn-analyzers#7390 from @mpidash adds a new analyzer for StreamReader.EndOfStream. StreamReader.EndOfStream seems like it should be harmless, but it’s quite the devious little property. The intent is to determine whether the reader is at the end of the underlying Stream. Seems easy enough. If the StreamReader still has previously read data buffered, obviously it’s not at the end. And if the reader has previously seen EOF, e.g. Read returned 0, then it obviously is at the end. But in all other situations, there’s no way to know you’re at the end of the stream (at least in the general case) without performing a read, which means this property does something properties should never do: perform I/O. Worse than just performing I/O, that read can be a blocking operation, e.g. if the Stream represents a network stream for a Socket, and performing a read actually means blocking until data is received. Even worse, though, is when it’s used in an asynchronous method, e.g.

while (!reader.EndOfStream)
{
    string? line = await reader.ReadLineAsync();
    ...
}

Now not only might EndOfStream do I/O and block, it’s doing that in a method that’s supposed to do all of its waiting asynchronously.

What makes this even more frustrating is that EndOfStream isn’t even useful in a loop like that above. ReadLineAsync will return a null string if it’s at the end of the stream, so the loop would instead be better as:

while (await reader.ReadLineAsync() is string line)
{
    ...
}

Simpler, cheaper, and no ticking time bombs of synchronous I/O. Thanks to this new analyzer, any such use of EndOfStream in an async method will trigger CA2024:

CA2024 Analyzer

Networking

Networking-related operations show up in almost every modern workload. Past releases of .NET have seen a lot of energy exerted on whittling away at networking overheads, as these components are used over and over and over, often in critical paths, and the overheads can add up. .NET 10 continues the streamlining trend.

As was seen with core primitives earlier, IPAddress and IPNetwork are both imbued with UTF8 parsing capabilities, thanks to dotnet/runtime#102144 from @edwardneal. As is the case with most other such types in the core libraries, the UTF8-based implementation and the UTF16-based implementation are mostly the same implementation, sharing most of their code via generic methods parameterized on byte vs char. And as a result of the focus on enabling UTF8, not only can you parse UTF8 bytes directly rather than needing to transcode first, the existing code actually gets a bit faster.
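
As a quick sketch of parsing UTF8 bytes directly, assuming (as with other BCL types that gained UTF8 parsing) that the capability is surfaced via the IUtf8SpanParsable<T> interface; IPAddress may also expose additional direct overloads, which aren’t shown here:

using System;
using System.Net;

// Parse directly from UTF8 bytes, with no transcoding to UTF16 first.
IPAddress address = ParseUtf8<IPAddress>("2001:db8::1"u8);
Console.WriteLine(address);

static T ParseUtf8<T>(ReadOnlySpan<byte> utf8Text) where T : IUtf8SpanParsable<T> =>
    T.Parse(utf8Text, provider: null);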

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD", "s")]
public partial class Tests
{
    [Benchmark]
    [Arguments("Fe08::1%13542")]
    public IPAddress Parse(string s) => IPAddress.Parse(s);
}
Method Runtime Mean Ratio
Parse .NET 9.0 71.35 ns 1.00
Parse .NET 10.0 54.60 ns 0.77

IPAddress is also imbued with IsValid and IsValidUtf8 methods, thanks to dotnet/runtime#111433. It was previously possible to test the validity of an address via TryParse, but when successful, that would allocate the IPAddress; if you don’t need the resulting object but just need to know whether it’s valid, the extra allocation is wasteful.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _address = "123.123.123.123";

    [Benchmark(Baseline = true)]
    public bool TryParse() => IPAddress.TryParse(_address, out _);

    [Benchmark]
    public bool IsValid() => IPAddress.IsValid(_address);
}
Method Mean Ratio Allocated Alloc Ratio
TryParse 26.26 ns 1.00 40 B 1.00
IsValid 21.88 ns 0.83 – 0.00

Uri also gets some notable improvements. In fact, one of my favorite improvements in all of .NET 10 is in Uri. The feature itself isn’t a performance improvement, but there are some interesting performance-related ramifications for it. In particular, since forever, Uri has had a length limitation due to implementation details. Uri keeps track of various offsets in the input, such as where the host portion starts, where the path starts, where the query starts, and so on. The implementer chose to use ushort for each of these values rather than int. That means the maximum length of a Uri is then constrained to the lengths a ushort can describe, namely 65,535 characters. That sounds like a ridiculously long Uri, one no one would ever need to go beyond… until you consider data URIs. Data URIs embed a representation of arbitrary bytes, typically Base64 encoded, in the URI itself. This allows for files to be represented directly in links, and it’s become a common way for AI-related services to send and receive data payloads, like images. It doesn’t take a very large image to exceed 65K characters, however, especially with Base64 encoding increasing the payload size by ~33%. dotnet/runtime#117287 finally removes that limitation, so now Uri can be used to represent very large data URIs, if desired. This, however, has some performance ramifications (beyond the small percentage increase in the size of Uri to accommodate widening those ushort offsets to int). In particular, Uri implements path compression, so for example this:

Console.WriteLine(new Uri("http://test/hello/../hello/../hello"));

prints out:

http://test/hello

As it turns out, the algorithm implementing that path compression is O(N^2). Oops. With a limit of 65K characters, such a quadratic complexity isn’t a security concern (as O(N^2) operations sometimes can be: if N is unbounded, an attacker can do O(N) work and force the target to do disproportionately more). But once the limit is removed entirely, it could be. As such, dotnet/runtime#117820 compensates by making the path compression O(N). And while in the general case, we don’t expect path compression to be a meaningfully impactful part of constructing Uri, in degenerate cases, even under the old limit, the change can still make a measurable improvement.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _input = $"http://host/{string.Concat(Enumerable.Repeat("a/../", 10_000))}{new string('a', 10_000)}";

    [Benchmark]
    public Uri Ctor() => new Uri(_input);
}
Method Runtime Mean Ratio
Ctor .NET 9.0 18.989 us 1.00
Ctor .NET 10.0 2.228 us 0.12

In the same vein, the longer the URI, the more effort is required to do whatever validation is needed in the constructor. Uri‘s constructor needs to check whether the input has any Unicode characters that might need to be handled. Rather than checking all the characters one at a time, with dotnet/runtime#107357, Uri can now use SearchValues to more quickly rule out or find the first location of a character that needs to be looked at more deeply.
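
The general SearchValues pattern looks something like the following minimal sketch (the character set here is made up for illustration; it isn’t the set Uri actually uses): build the set once, then IndexOfAny finds the first character needing closer inspection, vectorized where the hardware allows.

using System;
using System.Buffers;

Console.WriteLine(UriScan.FirstInterestingChar("http://example.com/a%20b")); // index of '%'

static class UriScan
{
    // Hypothetical set of characters that force a slower path.
    private static readonly SearchValues<char> s_needsCloserLook = SearchValues.Create("%#?\\");

    public static int FirstInterestingChar(ReadOnlySpan<char> uriText) =>
        uriText.IndexOfAny(s_needsCloserLook);
}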

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _uri;

    [GlobalSetup]
    public void Setup()
    {
        byte[] bytes = new byte[40_000];
        new Random(42).NextBytes(bytes);
        _uri = $"data:application/octet-stream;base64,{Convert.ToBase64String(bytes)}";
    }

    [Benchmark]
    public Uri Ctor() => new Uri(_uri);
}
Method Runtime Mean Ratio
Ctor .NET 9.0 19.354 us 1.00
Ctor .NET 10.0 2.041 us 0.11

Other changes were made to Uri that further reduce construction costs in various other cases, too. For cases where the URI host is an IPv6 address, e.g. http://[2603:1020:201:10::10f], dotnet/runtime#117292 recognizes that scope IDs are relatively rare and makes the cases without a scope ID cheaper in exchange for making the cases with a scope ID a little more expensive.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public string CtorHost() => new Uri("http://[2603:1020:201:10::10f]").Host;
}
Method Runtime Mean Ratio Allocated Alloc Ratio
CtorHost .NET 9.0 304.9 ns 1.00 208 B 1.00
CtorHost .NET 10.0 254.2 ns 0.83 216 B 1.04

(Note that the .NET 10 allocation is 8 bytes larger than the .NET 9 allocation due to the extra space required in this case for dropping the length limitation, as discussed earlier.)

dotnet/runtime#117289 also improves construction for cases where the URI requires normalization, saving some allocations by using normalization routines over spans (which were added in dotnet/runtime#110465) instead of needing to allocate strings for the inputs.

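// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
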
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public Uri Ctor() => new("http://some.host.with.ümlauts/");
}
Method Runtime Mean Ratio Allocated Alloc Ratio
Ctor .NET 9.0 377.6 ns 1.00 440 B 1.00
Ctor .NET 10.0 322.0 ns 0.85 376 B 0.85

Various improvements have also found their way into the HTTP stack. For starters, the download helpers on HttpClient and HttpContent have improved. These types expose helper methods for some of the most common forms of grabbing data; while a developer can grab the response Stream and consume that efficiently, for simple and common cases like “just get the whole response as a string” or “just get the whole response as a byte[]”, the GetStringAsync and GetByteArrayAsync methods make that really easy to do. dotnet/runtime#109642 changes how these methods operate in order to better manage the temporary buffers that are required, especially in the case where the server hasn’t advertised a Content-Length, such that the client doesn’t know ahead of time how much data to expect and thus how much space to allocate.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net;
using System.Net.Sockets;
using System.Text;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private HttpClient _client = new();
    private Uri _uri;

    [GlobalSetup]
    public void Setup()
    {
        Socket listener = new(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        listener.Listen(int.MaxValue);
        _ = Task.Run(async () =>
        {
            byte[] header = "HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n"u8.ToArray();
            byte[] chunkData = Enumerable.Range(0, 100).SelectMany(_ => "abcdefghijklmnopqrstuvwxyz").Select(c => (byte)c).ToArray();
            byte[] chunkHeader = Encoding.UTF8.GetBytes($"{chunkData.Length:X}\r\n");
            byte[] chunkFooter = "\r\n"u8.ToArray();
            byte[] footer = "0\r\n\r\n"u8.ToArray();
            while (true)
            {
                var server = await listener.AcceptAsync();
                server.NoDelay = true;
                using StreamReader reader = new(new NetworkStream(server), Encoding.ASCII);
                while (true)
                {
                    while (!string.IsNullOrEmpty(await reader.ReadLineAsync())) ;

                    await server.SendAsync(header);
                    for (int i = 0; i < 100; i++)
                    {
                        await server.SendAsync(chunkHeader);
                        await server.SendAsync(chunkData);
                        await server.SendAsync(chunkFooter);
                    }
                    await server.SendAsync(footer);
                }
            }
        });

        var ep = (IPEndPoint)listener.LocalEndPoint!;
        _uri = new Uri($"http://{ep.Address}:{ep.Port}/");
    }

    [Benchmark]
    public async Task<byte[]> ResponseContentRead_ReadAsByteArrayAsync()
    {
        using HttpResponseMessage resp = await _client.GetAsync(_uri);
        return await resp.Content.ReadAsByteArrayAsync();
    }

    [Benchmark]
    public async Task<string> ResponseHeadersRead_ReadAsStringAsync()
    {
        using HttpResponseMessage resp = await _client.GetAsync(_uri, HttpCompletionOption.ResponseHeadersRead);
        return await resp.Content.ReadAsStringAsync();
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
ResponseContentRead_ReadAsByteArrayAsync .NET 9.0 1.438 ms 1.00 912.71 KB 1.00
ResponseContentRead_ReadAsByteArrayAsync .NET 10.0 1.166 ms 0.81 519.12 KB 0.57
ResponseHeadersRead_ReadAsStringAsync .NET 9.0 1.528 ms 1.00 1166.77 KB 1.00
ResponseHeadersRead_ReadAsStringAsync .NET 10.0 1.306 ms 0.86 773.3 KB 0.66

dotnet/runtime#117071 reduces overheads associated with HTTP header validation. In the System.Net.Http implementation, some headers have dedicated parsers for them, while many (the majority of custom ones that services define) don’t. This PR recognizes that for these, the validation that needs to be performed amounts to only checking for forbidden newline characters, and the objects that were being created for all headers weren’t necessary for these.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Net.Http.Headers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private readonly HttpResponseHeaders _headers = new HttpResponseMessage().Headers;

    [Benchmark]
    public void Add()
    {
        _headers.Clear();
        _headers.Add("X-Custom", "Value");
    }

    [Benchmark]
    public object GetValues()
    {
        _headers.Clear();
        _headers.TryAddWithoutValidation("X-Custom", "Value");
        return _headers.GetValues("X-Custom");
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
Add .NET 9.0 28.04 ns 1.00 32 B 1.00
Add .NET 10.0 12.61 ns 0.45 – 0.00
GetValues .NET 9.0 82.57 ns 1.00 64 B 1.00
GetValues .NET 10.0 23.97 ns 0.29 32 B 0.50

For folks using HTTP/2, dotnet/runtime#112719 decreases per-connection memory consumption by changing the HPackDecoder to lazily grow its buffers, starting from expected-case sizing rather than worst-case. (“HPACK” is the header compression algorithm used by HTTP/2, utilizing a table shared between client and server for managing commonly transmitted headers.) It’s a little hard to measure in a micro-benchmark, since in a real app the connections get reused (and the benefits here aren’t about temporary allocation but rather connection density and overall working set), but we can get a glimpse of it by doing what you’re not supposed to do and creating a new HttpClient for each request (you’re not supposed to do that, or more specifically not supposed to create a new handler for each request, because doing so tears down the connection pool and the connections it contains… which is bad for an app but exactly what we want for our micro-benchmark).

// For this benchmark, change the benchmark.csproj to start with:
//     <Project Sdk="Microsoft.NET.Sdk.Web">
// instead of:
//     <Project Sdk="Microsoft.NET.Sdk">
// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using System.Net;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.AspNetCore.Server.Kestrel.Core;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private WebApplication _app;

    [GlobalSetup]
    public async Task Setup()
    {
        var builder = WebApplication.CreateBuilder();
        builder.Logging.SetMinimumLevel(LogLevel.Warning);
        builder.WebHost.ConfigureKestrel(o => o.ListenLocalhost(5000, listen => listen.Protocols = HttpProtocols.Http2));

        _app = builder.Build();
        _app.MapGet("/hello", () => Results.Text("hi from kestrel over h2c\n"));
        var serverTask = _app.RunAsync();
        await Task.Delay(300);
    }

    [GlobalCleanup]
    public async Task Cleanup()
    {
        await _app.StopAsync();
        await _app.DisposeAsync();
    }

    [Benchmark]
    public async Task Get()
    {
        using var client = new HttpClient()
        {
            DefaultRequestVersion = HttpVersion.Version20,
            DefaultVersionPolicy = HttpVersionPolicy.RequestVersionExact
        };

        var response = await client.GetAsync("http://localhost:5000/hello");
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
Get .NET 9.0 485.9 us 1.00 83.19 KB 1.00
Get .NET 10.0 445.0 us 0.92 51.79 KB 0.62

Also, on Linux and macOS, all HTTP use (and, more generally, all socket interactions) gets a tad cheaper from dotnet/runtime#109052, which eliminates a ConcurrentDictionary<> lookup for each asynchronous operation that completes on a Socket.

And for all you Native AOT fans, dotnet/runtime#117012 also adds a feature switch that enables trimming out the HTTP/3 implementation from HttpClient, which can represent a very sizeable and “free” space savings if you’re not using HTTP/3 at all.

Searching

Someone once told me that computer science was “all about sorting and searching.” That’s not far off. Searching in one way, shape, or form is an integral part of many applications and services.

Regex

Whether you love or hate the terse syntax, regular expressions (regex) continue to be an integral part of software development, with applications both in software itself and in the software development process. As such, they’ve had robust support in .NET since the early days of the platform, with the System.Text.RegularExpressions namespace providing a feature-rich set of regex capabilities. The performance of Regex was improved significantly in .NET 5 (Regex Performance Improvements in .NET 5) and then again in .NET 7, which also saw a significant amount of new functionality added (Regular Expression Improvements in .NET 7). It’s continued to be improved in every release since, and .NET 10 is no exception.

As I’ve discussed in previous blog posts about regex and performance, there are two high-level ways regex engines are implemented, either with backtracking or without. Non-backtracking engines typically work by creating some form of finite automaton that represents the pattern, and then, for each character consumed from the input, moving around the deterministic finite automaton (DFA, meaning you can be in only a single state at a time) or non-deterministic finite automaton (NFA, meaning you can be in multiple states at a time), transitioning from one state to another. A key benefit of a non-backtracking engine is that it can often make linear guarantees about processing time, where an input string of length N can be processed in worst-case O(N) time. A key downside of a non-backtracking engine is it can’t support all of the features developers are familiar with in modern regex engines, like back references. Backtracking engines are named as such because they’re able to “backtrack,” trying one approach to see if there’s a match and then going back and trying another. If you have the regex pattern \w*\d (which matches any number of word characters followed by a single digit) and supply it with the string "12", a backtracking engine is likely to first try treating both the '1' and the '2' as word characters, then find that it doesn’t have anything to fulfill the \d, and thus backtrack, instead treating only the '1' as being consumed by the \w*, and leaving the '2' to be consumed by the \d. Backtracking is how engines support features like back references, variable-length lookarounds, conditional expressions, and more. They can also have excellent performance, especially in the average and best cases. A key downside, however, is their worst case, where on some patterns they can suffer from “catastrophic backtracking.” That happens when all of that backtracking leads to exploring the same input over and over and over again, possibly consuming much more than linear time.

Since .NET 7, .NET has had an opt-in non-backtracking engine, which is what you get with RegexOptions.NonBacktracking. Otherwise, it uses a backtracking engine, whether using the default interpreter, or a regex compiled to IL (RegexOptions.Compiled), or a regex emitted as a custom C# implementation with the regex source generator ([GeneratedRegex(...)]). These backtracking engines can yield exceptional performance, but due to their backtracking nature, they are susceptible to bad worst-case performance, which is why specifying timeouts to a Regex is often encouraged, especially when using patterns of unknown provenance. Still, there are things backtracking engines can do to help mitigate some such backtracking, in particular avoiding the need for some of the backtracking in the first place.

One of the main tools backtracking engines offer for reduced backtracking is an “atomic” construct. Some regex syntaxes surface this via “possessive quantifiers,” while others, including .NET, surface it via “atomic groups.” They’re fundamentally the same thing, just expressed in the syntax differently. An atomic group in .NET’s regex syntax is a group that is never backtracked into. If we take our previous \w*\d example, we could wrap the \w* loop in an atomic group like this: (?>\w*)\d. In doing so, whatever that \w* consumes won’t change via backtracking after exiting the group and moving on to whatever comes after it in the pattern. So if I try to match "12" with such a pattern, it’ll fail, because the \w* will consume both characters, the \d will have nothing to match, and no backtracking will be applied, because the \w* is wrapped in an atomic group and thus exposes no backtracking opportunities.
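
If it helps to see that behavioral difference concretely, here’s a minimal illustration (the patterns and inputs are mine, chosen to mirror the example above):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

Console.WriteLine(Regex.IsMatch("12", @"\w*\d"));     // True: \w* backtracks and gives the '2' back to \d
Console.WriteLine(Regex.IsMatch("12", @"(?>\w*)\d")); // False: the atomic group never gives anything back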

In that example, wrapping the \w* with an atomic group changes the meaning of the pattern, and thus it’s not something that a regex engine could choose to do automatically. However, there are many cases where wrapping otherwise backtracking constructs in an atomic group does not observably change behavior, because any backtracking that would otherwise happen would provably never be fruitful. Consider a pattern a*b. a*b is observably identical to (?>a*)b, which says that the a* should not be backtracked into. That’s because there’s nothing the a* can “give back” (which can only be as) that would satisfy what comes next in the pattern (which is only b). It’s thus valid for a backtracking engine to transform how it processes a*b to instead be the equivalent of how it processes (?>a*)b. And the .NET regex engine has been capable of such transformations since .NET 5. This can result in massive improvements to throughput. With backtracking, waving my hands, we effectively need to execute everything after the backtracking construct for each possible position we could backtrack to. So, for example, with \w*SOMEPATTERN, if the \w* successfully initially consumes 100 characters, we then possibly need to try to match SOMEPATTERN up to 100 different times, as we may need to backtrack up to 100 times and re-evaluate SOMEPATTERN each time we give back one of the things initially matched. If we instead make that (?>\w*), we eliminate all but one of those! That makes improvements to this ability to automatically transform backtracking constructs into non-backtracking ones potentially massive performance wins, and practically every release of .NET since .NET 5 has increased the set of patterns that are automatically transformed. .NET 10 included.
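
And because the transformation is behavior-preserving for a pattern like a*b, you can check for yourself that the explicit atomic form answers identically (a small sketch; patterns and inputs are mine):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

Console.WriteLine(Regex.IsMatch("aaab", @"a*b"));     // True
Console.WriteLine(Regex.IsMatch("aaab", @"(?>a*)b")); // True
Console.WriteLine(Regex.IsMatch("aaac", @"a*b"));     // False, after fruitless backtracking
Console.WriteLine(Regex.IsMatch("aaac", @"(?>a*)b")); // False, with no backtracking into the loop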

Let’s start with dotnet/runtime#117869, which teaches the regex optimizer about more “disjoint” sets. Consider the previous example of a*b, and how I said we can make that a* loop atomic because there’s nothing a* can “give back” that matches b. That is a general statement about auto-atomicity: a loop can be made atomic if it’s guaranteed to end with something that can’t possibly match the thing that comes after it. So, if I have [abc]+[def], that loop can be made atomic, because there’s nothing [abc] can match that [def] can also match. In contrast, if the expression were instead [abc]+[cef], that loop must not be made atomic automatically, as doing so could change behavior. The sets do overlap, as both can match 'c'. So, for example, if the input were just "cc", the original expression should match it (the [abc]+ loop would match 'c' with one iteration of the loop and then the second 'c' would satisfy the [cef] set), but if the expression were instead (?>[abc]+)[cef], it would no longer match, as the [abc]+ would consume both 'c's, and there’d be nothing left for the [cef] set to match. Two sets that don’t have any overlap are referred to as being “disjoint,” and so the optimizer needs to be able to prove the disjointedness of sets in order to perform these kinds of auto-atomicity optimizations. The optimizer was already able to do so for many sets, in particular ones that were composed purely of characters or character ranges, e.g. [ace] or [a-zA-Z0-9]. But many sets are instead composed of entire Unicode categories. For example, when you write \d, unless you’ve specified RegexOptions.ECMAScript that’s the same as \p{Nd}, which says “match any character in the Unicode category of Number decimal digits”, aka all characters for which char.GetUnicodeCategory returns UnicodeCategory.DecimalDigitNumber. And the optimizer was unable to reason about overlap between such sets. So, for example, if you had the expression \w*\p{Sm}, that matches any number of word characters followed by a math symbol (UnicodeCategory.MathSymbol). \w is actually just a set of eight specific Unicode categories, such that the previous expression behaves identically to if I’d written [\p{Ll}\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{Mn}\p{Nd}\p{Pc}]*\p{Sm} (\w is composed of UnicodeCategory.UppercaseLetter, UnicodeCategory.LowercaseLetter, UnicodeCategory.TitlecaseLetter, UnicodeCategory.ModifierLetter, UnicodeCategory.OtherLetter, UnicodeCategory.NonSpacingMark, UnicodeCategory.DecimalDigitNumber, and UnicodeCategory.ConnectorPunctuation). Note that none of those eight categories is the same as \p{Sm}, which means they’re disjoint, which means we can safely change that loop to being atomic without impacting behavior; it just makes it faster. One of the easiest ways to see the effect of this is to look at the output from the regex source generator. Before the change, if I look at the XML comment generated for that expression, I get this:

/// ○ Match a word character greedily any number of times.
/// ○ Match a character in the set [\p{Sm}].

and after, I get this:

/// ○ Match a word character atomically any number of times.
/// ○ Match a character in the set [\p{Sm}].

That one word change in the first sentence makes a huge difference. Here’s the relevant portion of the C# code emitted by the source generator for the matching routine before the change:

// Match a word character greedily any number of times.
//{
    charloop_starting_pos = pos;

    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    slice = slice.Slice(iteration);
    pos += iteration;

    charloop_ending_pos = pos;
    goto CharLoopEnd;

    CharLoopBacktrack:

    if (Utilities.s_hasTimeout)
    {
        base.CheckTimeout();
    }

    if (charloop_starting_pos >= charloop_ending_pos)
    {
        return false; // The input didn't match.
    }
    pos = --charloop_ending_pos;
    slice = inputSpan.Slice(pos);

    CharLoopEnd:
//}

You can see how backtracking influences the emitted code. The core loop in there is iterating through as many word characters as it can match, but then before moving on, it remembers some position information about where it was. It also sets up a label for where subsequent code should jump to if it needs to backtrack; that code undoes one of the matched characters and then retries everything that came after it. If the code needs to backtrack again, it’ll again undo one of the characters and retry. And so on. Now, here’s what the code looks like after the change:

// Match a word character atomically any number of times.
{
    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    slice = slice.Slice(iteration);
    pos += iteration;
}

All of that backtracking gunk is gone; the loop matches as much as it can, and that’s that. You can see the effect this has in some cases with a micro-benchmark like this:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new string(' ', 100);
    private static readonly Regex s_regex = new Regex(@"\s+\S+", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}

This is a simple test where we’re trying to match any positive number of whitespace characters followed by any positive number of non-whitespace characters, giving it an input composed entirely of whitespace. Without atomicity, the engine is going to consume all of the whitespace as part of the \s+ but will then find that there isn’t any non-whitespace available to match the \S+. What does it do then? It backtracks, gives back one of the hundred spaces consumed by \s+, and tries again to match the \S+. It won’t match, so it backtracks again. And again. And again. A hundred times, until it has nothing left to try and gives up. With atomicity, all that backtracking goes away, allowing it to fail faster.

Method Runtime Mean Ratio
Count .NET 9.0 183.31 ns 1.00
Count .NET 10.0 69.23 ns 0.38

dotnet/runtime#117892 is a related improvement. In regex, \b is called a “word boundary”; it checks whether the wordness of the previous character (whether the previous character matches \w) matches the wordness of the next character, calling it a boundary if they differ. You can see this in the engine’s IsBoundary helper’s implementation, which follows (note that according to TR18 whether a character is considered a boundary word char is almost exactly the same as \w, except with two additional zero-width Unicode characters also included):

internal static bool IsBoundary(ReadOnlySpan<char> inputSpan, int index)
{
    int indexM1 = index - 1;
    return ((uint)indexM1 < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[indexM1])) !=
           ((uint)index < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[index]));
}

The optimizer already had a special-case in its auto-atomicity logic that had knowledge of boundaries and their relationship to \w and \d, specifically. So, if you had \w+\b, the optimizer would recognize that in order for the \b to match, what comes after what the \w+ matches must necessarily not match \w, because then it wouldn’t be a boundary, and thus the \w+ could be made atomic. Similarly, with a pattern of \d+\b, it would recognize that what came after must not be in \d, and could make the loop atomic. It didn’t generalize this, though. Now in .NET 10, it does. This PR teaches the optimizer how to recognize subsets of \w, because, as with the special-case of \d, any subset of \w can similarly benefit: if what comes before the \b is a word character, what comes after must not be. Thus, with this PR, an expression like [a-zA-Z]+\b will now have the loop made atomic.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = "Supercalifragilisticexpialidocious1";
    private static readonly Regex s_regex = new Regex(@"^[A-Za-z]+\b", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
Method Runtime Mean Ratio
Count .NET 9.0 116.57 ns 1.00
Count .NET 10.0 21.74 ns 0.19

Just doing a better job of set disjointedness analysis is helpful, but more so is actually recognizing whole new classes of things that can be made atomic. In prior releases, the auto-atomicity optimizations only kicked in for loops over single characters, e.g. a*, [abc]*?, [^abc]*. That is obviously only a subset of loops, as many loops are composed of more than just a single character; loops can surround any regex construct. Even a capture group thrown into the mix would knock the auto-atomicity behavior off the rails. Now with dotnet/runtime#117943, a significant number of loops involving more complicated constructs can be made atomic. Loops larger than a single character are tricky, though, as there are more things that need to be taken into account when reasoning through atomicity. With a single character, we only need to prove disjointedness for that one character with what comes after it. But, consider an expression like ([a-z][0-9])+a1. Can that loop be made atomic? What comes after the loop ('a') is provably disjoint from what ends the loop ([0-9]), and yet making this loop atomic automatically would change behavior and be a no-no. Imagine if the input were "b2a1". That matches; if this expression is processed normally, the loop would match a single iteration, consuming the "b2", and then the a1 after the loop would consume the corresponding a1 in the input. But, if the loop were made atomic, e.g. (?>([a-z][0-9])+)a1, the loop would end up performing two iterations and consuming both the "b2" and the "a1", leaving nothing for the a1 in the pattern. As it turns out, we not only need to ensure what ends the loop is disjoint from what comes after it, we also need to ensure that what starts the loop is disjoint from what comes after it. That’s not all, though. Now consider an expression ^(a|ab)+$. This matches an entire input composed of "a"s and "ab"s. Given an input string like "aba", this will match successfully, as it will consume the "ab" with the second branch of the alternation, and then consume the remaining a with the first branch of the alternation on the next iteration of the loop. But now consider what happens if we make the loop atomic: ^(?>(a|ab)+)$. Now on that same input, the initial a in the input will be consumed by the first branch of the alternation, and that will satisfy the loop’s minimum bound of 1 iteration, exiting the loop. It’ll then proceed to validate that it’s at the end of the string, and fail, but with the loop now atomic, there’s nothing to backtrack into, and the whole match fails. Oops. The problem here is that the loop’s ending must not only be disjoint with what comes next, and the loop’s beginning must not only be disjoint with what comes next, but because it’s a loop, what comes next can actually be itself, which means the loop’s beginning and ending must be disjoint from each other. Those criteria significantly limit the patterns to which this can be applied, but even with that, it’s still surprisingly common: dotnet/runtime-assets (which contains test assets for use with dotnet/runtime) contains a database of regex patterns sourced from appropriately-licensed nuget packages, yielding almost 20,000 unique patterns, and more than 7% of those were positively impacted by this.
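
A quick way to convince yourself of those constraints is to try the problematic examples with explicit atomic groups (a minimal sketch; the patterns and inputs mirror the ones described above):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

Console.WriteLine(Regex.IsMatch("b2a1", @"([a-z][0-9])+a1"));     // True: the loop backs off to one iteration
Console.WriteLine(Regex.IsMatch("b2a1", @"(?>([a-z][0-9])+)a1")); // False: the loop keeps both "b2" and "a1"
Console.WriteLine(Regex.IsMatch("aba", @"^(a|ab)+$"));            // True: backtracking retries with the "ab" branch
Console.WriteLine(Regex.IsMatch("aba", @"^(?>(a|ab)+)$"));        // False: the loop exits after "a" and can't be revisited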

Here is an example that’s searching “The Entire Project Gutenberg Works of Mark Twain” for sequences of all lowercase ASCII words, each followed by a space, and then all followed by an uppercase ASCII letter.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"([a-z]+ )+[A-Z]", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}

In previous releases, that inner loop would be made atomic, but the outer loop would remain greedy (backtracking). From the XML comment generated by the source generator, we get this:

/// ○ Loop greedily at least once.
///     ○ 1st capture group.
///         ○ Match a character in the set [a-z] atomically at least once.
///         ○ Match ' '.
/// ○ Match a character in the set [A-Z].

Now in .NET 10, we get this:

/// ○ Loop atomically at least once.
///     ○ 1st capture group.
///         ○ Match a character in the set [a-z] atomically at least once.
///         ○ Match ' '.
/// ○ Match a character in the set [A-Z].
Method Runtime Mean Ratio
Count .NET 9.0 573.4 ms 1.00
Count .NET 10.0 504.6 ms 0.88

As with any optimization, auto-atomicity should never change observable behavior; it should just make things faster. And as such, every case where atomicity is automatically applied needs to be reasoned through to ensure that the optimization is logically sound. In some cases, the optimization was written to be conservative, as the relevant reasoning hadn’t previously been done. An example of that is addressed by dotnet/runtime#118191, which makes a few tweaks to how boundaries are handled in the auto-atomicity logic, removing some constraints that were put in place but which, as it turns out, are unnecessary. The core logic that implements the atomicity analysis is a method that looks like this:

private static bool CanBeMadeAtomic(RegexNode node, RegexNode subsequent, ...)

node is the representation for the part of the regex that’s being considered for becoming atomic (e.g. a loop) and subsequent is what comes immediately after it in the pattern; the method then proceeds to validate node against subsequent to see whether it can prove there wouldn’t be any behavioral changes if node were made atomic. However, not all cases are sufficiently handled just by validating against subsequent itself. Consider a pattern like a*b*\w, where node represents a* and subsequent represents b*. a and b are obviously disjoint, and so node can be made atomic with regards to subsequent, but… here subsequent is also “nullable,” meaning it might successfully match 0 characters (the loop has a lower bound of 0). And in such a case, what comes after the a* won’t necessarily be a b but could be what comes after the b*, which here is a \w, which overlaps with a, and as such, it would be a behavioral change to make this into (?>a*)b*\w. Consider an input of just "a". With the original pattern, a* would successfully match the empty string with 0 iterations, b* would successfully match the empty string with 0 iterations, and then \w would successfully match the input 'a'. But with the atomicized pattern, (?>a*) would successfully match the input 'a' with a single iteration, leaving nothing to match the \w. As such, when CanBeMadeAtomic detects that subsequent may be nullable and successfully match the empty string, it needs to iterate to also validate against what comes after subsequent (and possibly again and again if what comes next itself keeps being nullable).
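
The nullability issue is easy to see with a concrete probe (a small sketch; the pattern and input match the example above):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

Console.WriteLine(Regex.IsMatch("a", @"a*b*\w"));     // True: a* backtracks and hands the 'a' to \w
Console.WriteLine(Regex.IsMatch("a", @"(?>a*)b*\w")); // False: the atomic a* keeps the 'a', leaving nothing for \w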

CanBeMadeAtomic already factored in boundaries (\b and \B), but it did so with the conservative logic that since a boundary is “zero-width” (meaning it doesn’t consume any input), it must always require checking what comes after it. But that’s not actually the case. Even though a boundary is zero-width, it still makes guarantees about what comes next: if the prior character is a word character, then for the boundary to have matched, the next character is guaranteed not to be one. And as such, we can safely make this more liberal and not require checking what comes next.

This last example also highlights an interesting aspect of this auto-atomicity optimization in general. There is nothing this optimization provides that the developer writing the regex in the first place couldn’t have done themselves. Instead of a*b, a developer can write (?>a*)b. Instead of [a-z]+(?= ), a developer can write (?>[a-z]+)(?= ). And so on. But when was the last time you explicitly added an atomic group to a regex you authored? Of the almost 20,000 regular expression patterns in the aforementioned database of real-world regexes sourced from nuget, care to guess how many include an explicitly written atomic group? The answer: ~100. It’s just not something developers in general think to do, so although the optimization transforms the user’s pattern into something they could have written themselves, it’s an incredibly valuable optimization, especially since now in .NET 10 over 70% of those patterns have at least one construct upgraded to be atomic.

The auto-atomicity optimization is an example of the optimizer removing unnecessary work. It’s a key example, but certainly not the only one. Several additional PRs in .NET 10 have also eliminated unnecessary work, in other ways.

dotnet/runtime#118084 is a fun example of this, but to understand it, we first need to understand lookarounds. A “lookaround” is a regex construct that makes its contents zero-width. Whereas when a set like “[abc]” matches it consumes a single character from the input, or when a loop like “[abc]{3,5}” matches it’ll consume between 3-5 characters from the input, lookarounds (as with other zero-width constructs, like anchors) don’t consume anything. You wrap a lookaround around a regex expression, and it effectively makes the consumption temporary, e.g. if I wrap [abc]{3,5} in a positive lookahead as (?=[abc]{3,5}), that will end up performing the whole match for the 3-5 set characters, but those characters won’t remain consumed after exiting the lookaround; the lookaround is just performing a test to ensure the inner pattern matches but the position in the input is reset upon exiting the lookaround. This is again visualized easily by looking at the code emitted by the regex source generator for a pattern like (?=[abc]{3,5})abc:

// Zero-width positive lookahead.
{
    int positivelookahead_starting_pos = pos;

    // Match a character in the set [a-c] atomically at least 3 and at most 5 times.
    {
        int iteration = 0;
        while (iteration < 5 && (uint)iteration < (uint)slice.Length && char.IsBetween(slice[iteration], 'a', 'c'))
        {
            iteration++;
        }

        if (iteration < 3)
        {
            return false; // The input didn't match.
        }

        slice = slice.Slice(iteration);
        pos += iteration;
    }

    pos = positivelookahead_starting_pos;
    slice = inputSpan.Slice(pos);
}

// Match the string "abc".
if (!slice.StartsWith("abc"))
{
    return false; // The input didn't match.
}

We can see that the lookaround is caching the starting position, then proceeding to try to match the loop it contains, and if successful, resetting the matching position to what it was when the lookaround was entered, then continuing on to perform the match for what comes after the lookaround.

These examples have been for a particular flavor of lookaround, called a positive lookahead. There are four variations of lookarounds composed of two choices: positive vs negative, and lookahead vs lookbehind. Lookaheads validate the pattern starting from the current position and proceeding forwards (as matching typically is), while lookbehinds validate the pattern starting from just before the current position and extending backwards. Positive indicates that the pattern should match, while negative indicates that the pattern should not match. So, for example, the negative lookbehind (?<!\w) will match if what comes before the current position is not a word character.
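
If it helps to keep the four flavors straight, here’s a tiny illustrative program (the patterns and inputs are mine):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

Console.WriteLine(Regex.Match("100px", @"\d+(?=px)").Value);       // "100": positive lookahead requires "px" to follow
Console.WriteLine(Regex.IsMatch("foobaz", @"foo(?!baz)"));          // False: negative lookahead rejects "baz" following
Console.WriteLine(Regex.Match("$100", @"(?<=\$)\d+").Value);        // "100": positive lookbehind requires '$' before
Console.WriteLine(Regex.IsMatch("auth token", @"(?<!auth )token")); // False: negative lookbehind rejects "auth " before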

Negative lookarounds are particularly interesting, because, unlike every other regex construct, they guarantee that the pattern they contain doesn’t match. That also makes them special in other regards, in particular around capture groups. For a positive lookaround, even though it’s zero-width, anything captured by capture groups inside of the lookaround still remains available outside of the lookaround, e.g. ^(?=(abc))\1$, which entails a backreference successfully matching what’s captured by the capture group inside of the positive lookahead, will successfully match the input "abc". But because negative lookarounds guarantee their content doesn’t match, it would be counter-intuitive if anything captured inside of a negative lookaround persisted past the lookaround… so it doesn’t. The capture groups inside of a negative lookaround are still possibly meaningful, in particular if there’s a backreference also inside of the same lookaround that refers back to the capture group, e.g. the pattern ^(?!(ab)\1cd)ababc is checking to see whether the input does not begin with ababcd but does begin with ababc. But if there’s no backreference, the capture group is useless, and we don’t need to do any work for it as part of processing the regex (work like remembering where the capture occurred). Such capture groups can be completely eliminated from the node tree as part of the optimization phase, and that’s exactly what dotnet/runtime#118084 does. Just as developers often use backtracking constructs without thinking to make them atomic, developers also often use capture groups purely as a grouping mechanism without thinking of the possibility of making them non-capturing groups. Since captures in general need to persist to be examined by the Match object returned from a Regex, we can’t just eliminate all capture groups that aren’t used internally in the pattern, but we can for these negative lookarounds. Consider a pattern like (?<!(access|auth)\s)token, which is looking for the word "token" when it’s not preceded by "access " or "auth "; the developer here (me, in this case) did what’s fairly natural, putting a group around the alternation so that the \s that follows either word can be factored out (if it were instead access|auth\s, the whitespace set would only be in the second branch of the alternation and wouldn’t apply to the first). But my “simple” grouping here is actually a capture group by default; to get it to be non-capturing, I’d either need to write it as a non-capturing group, i.e. (?<!(?:access|auth)\s)token, or I’d need to use RegexOptions.ExplicitCapture, which turns all non-named capture groups into non-capturing groups.
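
You can see that such captures really don’t survive the lookaround with a quick check (an illustrative input of my own choosing):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

Match m = Regex.Match("api token", @"(?<!(access|auth)\s)token");
Console.WriteLine(m.Success);           // True
Console.WriteLine(m.Groups[1].Success); // False: nothing captured inside the negative lookbehind persists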

We can similarly remove other work related to lookarounds. As noted, positive lookarounds exist to transform any pattern into a zero-width pattern, i.e. don’t consume anything. That’s all they do. If the pattern being wrapped by the positive lookaround is already zero-width, the lookaround contributes nothing to the behavior of the expression and can be removed. So, for example, if you have (?=$), that can be transformed into just $. That’s exactly what dotnet/runtime#118091 does.

dotnet/runtime#118079 and dotnet/runtime#118111 handle other transformations relative to zero-width assertions, in particular with regards to loops. For whatever reason, you’ll see developers wrapping zero-width assertions inside of loops, either making such assertions optional (e.g. \b?) or giving them some larger upper bound (e.g. (?=abc)*). But these zero-width assertions don’t consume anything; their sole purpose is to flag whether something is true or false at the current position. If you make such a zero-width assertion optional, then you’re saying “check whether it’s true or false, and then immediately ignore the answer, because both answers are valid”; as such, the whole expression can be removed as a nop. Similarly, if you wrap a loop with an upper bound greater than 1 around such an expression, you’re saying “check whether it’s true or false, now without changing anything check again, and check again, and check again.” There’s a common English expression that’s something along the lines of “insanity is doing the same thing over and over again and expecting different results.” That applies here. There may be behavioral benefits to invoking the zero-width assertion once, but repeating it beyond that is a pure waste: if it was going to fail, it would have failed the first time. Mostly. There’s one specific case where the difference is actually observable, and that has to do with an interesting feature of .NET regexes: capture groups track all matched captures, not just the last. Consider this program:

// dotnet run -c Release -f net10.0

using System.Diagnostics;
using System.Text.RegularExpressions;

Match m = Regex.Match("abc", "^(?=(\\w+)){3}abc$");
Debug.Assert(m.Success);

foreach (Group g in m.Groups)
{
    foreach (Capture c in g.Captures)
    {
        Console.WriteLine($"Group: {g.Name}, Capture: {c.Value}");
    }
}

If you run that, you may be surprised to see that capture group #1 (the explicit group I have inside of the lookahead) provides three capture values:

Group: 0, Capture: abc
Group: 1, Capture: abc
Group: 1, Capture: abc
Group: 1, Capture: abc

That’s because the loop around the positive lookahead does three iterations, each iteration matches "abc", and each successful capture is persisted for subsequent inspection via the Regex APIs. As such, we can’t optimize any loop around zero-width assertions by lowering the upper bound from greater than 1 to 1; we can only do so if it doesn’t contain any captures. And that’s what these PRs do. Given a loop that wraps a zero-width assertion that does not contain a capture, if the lower bound of the loop is 0, the whole loop and its contents can be eliminated, and if the upper bound of the loop is greater than 1, the loop itself can be removed, leaving only its contents in its stead.

Any time work like this is eliminated, it’s easy to construct monstrous, misleading micro-benchmarks… but it’s also a lot of fun, so I’ll allow myself this one.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"(?=.*\bTwain\b.*\bConnecticut\b)*.*Mark", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
Method Runtime Mean Ratio
Count .NET 9.0 3,226.024 ms 1.000
Count .NET 10.0 6.605 ms 0.002

dotnet/runtime#118083 is similar. “Repeater” is the name for a regex loop that has the same lower and upper bound, such that the contents of the loop “repeat” that fixed number of times. Typically you’ll see these written out using the {N} syntax, e.g. [abc]{3} is a repeater that requires three characters, any of which can be 'a', 'b', or 'c'. But of course it could also be written out in long-form, just by manually repeating the contents, e.g. [abc][abc][abc]. Just as we saw how we can condense loops around zero-width assertions when specified in loop form, we can do the exact same thing when manually written out. So, for example, \b\b is the same as just \b{2}, which is just \b.

Another nice example of removing unnecessary work is dotnet/runtime#118105. Boundary assertions are used in many expressions, e.g. it’s quite common to see a simple pattern like \b\w+\b, which is trying to match an entire word. When the regex engine encounters such an assertion, historically it’s delegated to the IsBoundary helper shown earlier. There is, however, some subtle unnecessary work here, which is more obvious when you see what the regex source generator outputs for an expression like \b\w+\b. This is what the output looks like on .NET 9:

// Match if at a word boundary.
if (!Utilities.IsBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}

// Match a word character atomically at least once.
{
    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    if (iteration == 0)
    {
        return false; // The input didn't match.
    }

    slice = slice.Slice(iteration);
    pos += iteration;
}

// Match if at a word boundary.
if (!Utilities.IsBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}

Pretty straightforward: match the boundary, consume as many word characters as possible, then again match a boundary. Except if you look back at the definition of IsBoundary, you’ll notice that it’s doing two checks, one against the previous character and one against the next character.

internal static bool IsBoundary(ReadOnlySpan<char> inputSpan, int index)
{
    int indexM1 = index - 1;
    return ((uint)indexM1 < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[indexM1])) !=
           ((uint)index < (uint)inputSpan.Length && RegexCharClass.IsBoundaryWordChar(inputSpan[index]));
}

Now, look at that, and look back at the generated code, and look at this again, and back at the source generated code again. See anything unnecessary? When we perform the first boundary comparison, we are dutifully checking the previous character, which is necessary, but then we’re checking the current character, which is about to be checked against \w by the subsequent \w+ loop. Similarly for the second boundary check, we just finished matching \w+, which will have only successfully matched if there was at least one word character. While we still need to validate that the subsequent character is not a boundary character (there are two characters considered boundary characters that aren’t word characters), we don’t need to re-validate the previous character. So, dotnet/runtime#118105 overhauls boundary handling in the compiler and source generator to emit customized boundary checks based on surrounding knowledge. If it can prove that the subsequent construct will validate that a character is a word character, then it only needs to validate that the previous character is not a boundary character; similarly, if it can prove that the previous construct will have already validated that a character is a word character, then it only needs to validate that the next character isn’t. This leads to this tweaked source generated code now on .NET 10:

// Match if at a word boundary.
if (!Utilities.IsPreWordCharBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}

// Match a word character atomically at least once.
{
    int iteration = 0;
    while ((uint)iteration < (uint)slice.Length && Utilities.IsWordChar(slice[iteration]))
    {
        iteration++;
    }

    if (iteration == 0)
    {
        return false; // The input didn't match.
    }

    slice = slice.Slice(iteration);
    pos += iteration;
}

// Match if at a word boundary.
if (!Utilities.IsPostWordCharBoundary(inputSpan, pos))
{
    return false; // The input didn't match.
}

Those IsPreWordCharBoundary and IsPostWordCharBoundary helpers are just half the checks in the main boundary helper. In cases where there are lots of boundary tests being performed, the reduced check count can add up.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"\ba\b", RegexOptions.Compiled | RegexOptions.IgnoreCase);

    [Benchmark]
    public int CountStandaloneAs() => s_regex.Count(s_input);
}
Method Runtime Mean Ratio
CountStandaloneAs .NET 9.0 20.58 ms 1.00
CountStandaloneAs .NET 10.0 19.25 ms 0.94

The Regex optimizer is all about pattern recognition: it looks for sequences and shapes it recognizes and performs transforms over those to put them into a more efficiently-processable form. One example of this is with alternations around coalescable branches. Let’s say you have an alternation a|e|i|o|u. You could process that as an alternation, but it’s also much more efficiently represented and processed as the equivalent set [aeiou]. There is an optimization that does such transformations as part of handling alternations. However, through .NET 9, it only handled single characters and sets, but not negated sets. For example, it would transform a|e|i|o|u into [aeiou], and it would transform [aei]|[ou] into [aeiou], but it would not merge negations like [^\n], otherwise known as . (when not in RegexOptions.Singleline mode). When developers want a set that represents all characters, there are various idioms they employ, such as [\s\S], which says “this is a set of all whitespace and non-whitespace characters”, aka everything. Another common idiom is \n|., which is the same as \n|[^\n], which says “this is an alternation that matches either a newline or anything other than a newline”, aka also everything. Unfortunately, while examples like [\d\D] have been handled well, .|\n has not, because of the gap in the alternation optimization. dotnet/runtime#118109 improves that, so that such negated cases are mergeable as part of the existing optimization. That takes a relatively expensive alternation and converts it into a super fast set check. And while, in general, set containment checks are very efficient, this one is as efficient as you can get, as it’s always true. We can see an example of this with a pattern intended to match C-style comment blocks.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private const string Input = """
        /* This is a comment. */
        /* Another comment */
        /* Multi-line
           comment */
        """;
    private static readonly Regex s_regex = new Regex(@"/\*(?:.|\n)*?\*/", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(Input);
}
Method Runtime Mean Ratio
Count .NET 9.0 344.80 ns 1.00
Count .NET 10.0 93.59 ns 0.27

Note that there’s another change that helps .NET 10 here, dotnet/runtime#118373, though I hesitate to call it out as a performance improvement since it’s really more of a bug fix. As part of writing this post, these benchmark numbers were showing some oddities (it’s important in general to be skeptical of benchmark results and to investigate anything that doesn’t align with reason and expectations). The result of investigating was a one-word change that yielded significant speedups on this test, specifically when using RegexOptions.Compiled (the bug didn’t exist in the source generator). As part of handling lazy loops, there’s a special-case for when the lazy loop is around a set that matches any character, which, thanks to the previous PR, (?:.|\n) now does. That special-case recognizes that if the lazy loop matches anything, we can efficiently find the end of the lazy loop by searching for whatever comes after the loop (e.g. in this test, the loop is followed by the literal "*/"). Unfortunately, the helper that emits that IndexOf call was passed the wrong node from the pattern: it was being passed the object representing the (?:.|\n) any-set rather than the "*/" literal, which resulted in it emitting the equivalent of IndexOfAnyInRange((char)0, '\uFFFF') rather than the equivalent of IndexOf("*/"). Oops. It was still functionally correct, in that the IndexOfAnyInRange call would successfully match the first character and the loop would re-evaluate from that location, but that means that rather than efficiently skipping using SIMD over a bunch of positions that couldn’t possibly match, we were doing non-trivial work for each and every position along the way.

dotnet/runtime#118087 represents another interesting transformation related to alternations. It’s very common to come across alternations with empty branches, possibly because that’s what the developer wrote, but more commonly as an outcome of other transformations that have happened. For example, given the pattern \r\n|\r, which is trying to match line endings that begin with \r, there is an optimization that will factor out a common prefix of all of the branches, producing the equivalent of \r(?:\n|); in other words, \r followed by either a line feed or empty. Such an alternation is a perfectly valid representation for this concept, but there’s a more natural one: ?. Behaviorally, this pattern is identical to \r\n?, and because the latter is more common and more canonical, the regex engine has more optimizations that recognize this loop-based form, for example coalescing with other loops, or auto-atomicity. As such, this PR finds all alternations of the form X| and transforms them into X?. Similarly, it finds all alternations of the form |X and transforms them into X??. The difference between X| and |X is whether X is tried first or empty is tried first; similarly, the difference between the greedy X? loop and the lazy X?? loop is whether X is tried first or empty is tried first. The impact of this can be seen in the code generated for the previously cited example. Here is the source-generated code for the heart of the matching routine for \r\n|\r on .NET 9:

// Match '\r'.
if (slice.IsEmpty || slice[0] != '\r')
{
    return false; // The input didn't match.
}

// Match with 2 alternative expressions, atomically.
{
    int alternation_starting_pos = pos;

    // Branch 0
    {
        // Match '\n'.
        if ((uint)slice.Length < 2 || slice[1] != '\n')
        {
            goto AlternationBranch;
        }

        pos += 2;
        slice = inputSpan.Slice(pos);
        goto AlternationMatch;

        AlternationBranch:
        pos = alternation_starting_pos;
        slice = inputSpan.Slice(pos);
    }

    // Branch 1
    {      
        pos++;
        slice = inputSpan.Slice(pos);
    }

    AlternationMatch:;
}

Now, here’s what’s produced on .NET 10:

// Match '\r'.
if (slice.IsEmpty || slice[0] != '\r')
{
    return false; // The input didn't match.
}

// Match '\n' atomically, optionally.
if ((uint)slice.Length > (uint)1 && slice[1] == '\n')
{
    slice = slice.Slice(1);
    pos++;
}

The optimizer recognized that \r\n|\r was the same as \r(?:\n|), which is the same as \r\n?, which is the same as \r(?>\n?), for which it can produce much simpler code, given that it no longer needs any backtracking.
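
The ordering nuance between X| and |X (greedy X? vs. lazy X??) is easy to observe directly (a small illustrative sketch):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

Console.WriteLine(Regex.Match("ab", @"a(?:b|)").Value); // "ab": the b branch is tried first, like ab?
Console.WriteLine(Regex.Match("ab", @"ab?").Value);     // "ab"
Console.WriteLine(Regex.Match("ab", @"a(?:|b)").Value); // "a": the empty branch is tried first, like ab??
Console.WriteLine(Regex.Match("ab", @"ab??").Value);    // "a"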

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"ab|a", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
Method Runtime Mean Ratio
Count .NET 9.0 23.35 ms 1.00
Count .NET 10.0 18.73 ms 0.80

.NET 10 also features improvements to Regex that go beyond just this form of work elimination. Regex’s matching routines are logically factored into two pieces: finding as quickly as possible the next place that could possibly match (TryFindNextPossibleStartingPosition), and then performing the full matching routine at that location (TryMatchAtCurrentPosition). It’s desirable that TryFindNextPossibleStartingPosition do its work as quickly as possible while also significantly limiting the number of locations at which a full match must be attempted. TryFindNextPossibleStartingPosition, for example, could operate very quickly just by always saying that the next index in the input should be tested, which would result in the full matching logic being performed at every index in the input; that’s not great for performance. Instead, the optimizer analyzes the pattern looking for things that would allow it to quickly search for viable starting locations, e.g. fixed strings or sets at known offsets in the pattern. Anchors are some of the most valuable things the optimizer can find, as they significantly limit the places where a match can possibly begin; the ideal pattern begins with a beginning anchor (^), which then means the only possible place matching can be successful is at index 0.

We previously discussed lookarounds, but as it turns out, until .NET 10, lookarounds weren’t factored into what TryFindNextPossibleStartingPosition should look for. dotnet/runtime#112107 changes that. It teaches the optimizer when and how to explore positive lookaheads at the beginning of a pattern for constructs that could help it more efficiently find starting locations. For example, in .NET 9, for the pattern (?=^)hello, here’s what the source generator emits for TryFindNextPossibleStartingPosition:

private bool TryFindNextPossibleStartingPosition(ReadOnlySpan<char> inputSpan)
{
    int pos = base.runtextpos;

    // Any possible match is at least 5 characters.
    if (pos <= inputSpan.Length - 5)
    {
        // The pattern has the literal "hello" at the beginning of the pattern. Find the next occurrence.
        // If it can't be found, there's no match.
        int i = inputSpan.Slice(pos).IndexOfAny(Utilities.s_indexOfString_hello_Ordinal);
        if (i >= 0)
        {
            base.runtextpos = pos + i;
            return true;
        }
    }

    // No match found.
    base.runtextpos = inputSpan.Length;
    return false;
}

The optimizer found the "hello" string in the pattern and is thus searching for that as part of finding the next possible place to do the full match. That would be excellent, if it weren’t for the lookahead that also says any match must happen at the beginning of the input. Now in .NET 10, we get this:

private bool TryFindNextPossibleStartingPosition(ReadOnlySpan<char> inputSpan)
{
    int pos = base.runtextpos;

    // Any possible match is at least 5 characters.
    if (pos <= inputSpan.Length - 5)
    {
        // The pattern leads with a beginning (\A) anchor.
        if (pos == 0)
        {
            return true;
        }
    }

    // No match found.
    base.runtextpos = inputSpan.Length;
    return false;
}

That pos == 0 check is critical, because it means we will only ever attempt the full match in one location, and we avoid the search over the rest of the input that would otherwise happen even though no other location could ever produce a match. Again, any time you eliminate work like this, you can construct tantalizing micro-benchmarks…

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"(?=^)hello", RegexOptions.Compiled);

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
Method Runtime Mean Ratio
Count .NET 9.0 2,383,784.95 ns 1.000
Count .NET 10.0 17.43 ns 0.000

That same PR also improved optimizations over alternations. It’s already the case that the branches of alternations are analyzed looking for common prefixes that can be factored out. For example, given the pattern abc|abd, the optimizer will spot the shared "ab" prefix at the beginning of each branch and factor that out, resulting in ab(?:c|d), and will then see that each branch of the remaining alternation is an individual character, which it can convert into a set, ab[cd]. If, however, the branches began with anchors, these optimizations wouldn’t be applied. Given the pattern ^abc|^abd, the code generators would end up emitting this exactly as it’s written, with an alternation with two branches, the first branch checking for the beginning and then matching "abc", the second branch also checking for the beginning and then matching "abd". Now in .NET 10, the anchor can be factored out, such that ^abc|^abd ends up being rewritten as ^ab[cd].

As a small tweak, dotnet/runtime#112065 also helps improve the source generated code for repeaters by using a more efficient searching routine. Let’s take the pattern [0-9a-f]{32} as an example. This is looking for sequences of 32 lowercase hex digits. In .NET 9, the implementation of that ends up looking like this:

// Match a character in the set [0-9a-f] exactly 32 times.
{
    if ((uint)slice.Length < 32)
    {
        return false; // The input didn't match.
    }

    if (slice.Slice(0, 32).IndexOfAnyExcept(Utilities.s_asciiHexDigitsLower) >= 0)
    {
        return false; // The input didn't match.
    }
}

Simple, clean, fairly concise, and utilizing the vectorized IndexOfAnyExcept to very efficiently validate that all 32 characters are lowercase hex. We can do a tad bit better, though. The IndexOfAnyExcept method not only needs to find whether the span contains something other than one of the provided values, it also needs to report the index at which that found value occurs. That’s only a few instructions, but it’s a few unnecessary instructions, since here that exact index isn’t utilized… the implementation only cares whether it’s >= 0, meaning whether anything was found or not. As such, we can instead use the Contains variant of this method, which doesn’t need to spend extra cycles determining the exact index. Now in .NET 10, this is generated:

// Match a character in the set [0-9a-f] exactly 32 times.
if ((uint)slice.Length < 32 || slice.Slice(0, 32).ContainsAnyExcept(Utilities.s_asciiHexDigitsLower))
{
    return false; // The input didn't match.
}

Finally, the .NET 10 SDK includes a new analyzer related to Regex. It’s oddly common to see code that determines whether an input matches a Regex written like this: Regex.Match(...).Success. While functionally correct, that’s much more expensive than Regex.IsMatch(...). For all of the engines, Regex.Match(...) requires allocating a new Match object and supporting data structures (except when there isn’t a match found, in which case it’s able to use an empty singleton); in contrast, IsMatch doesn’t need to allocate such an instance because it doesn’t need to return such an instance (as an implementation detail, it may still use a Match object, but it can reuse one rather than creating a new one each time). It can also avoid other inefficiencies. RegexOptions.NonBacktracking is “pay-for-play” with the information it needs to gather. Determining just whether there’s a match is cheaper than determining exactly where the match begins and ends, which is cheaper still than determining all of the captures that make up that match. IsMatch is thus the cheapest, only needing to determine that there is a match, not exactly where it is or what the exact captures are, whereas Match needs to determine all of that. Regex.Matches(...).Count is similar; it’s having to gather all of the relevant details and allocate a whole bunch of objects, whereas Regex.Count(...) can do so in a much more efficient manner. dotnet/roslyn-analyzers#7547 adds CA1874 and CA1875, which flag these cases and recommend use of IsMatch and Count, respectively.

Analyzer and fixer for CA1874

Analyzer and fixer for CA1875
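
For illustration, here’s the shape of code these analyzers flag and the cheaper equivalents they suggest (the input text and variable names here are mine):

// dotnet run -c Release -f net10.0

using System.Text.RegularExpressions;

string text = "It was 80 degrees in Hannibal.";

bool hasDigit = Regex.Match(text, @"\d").Success;  // flagged by CA1874: allocates a Match just to check success
bool hasDigitCheaper = Regex.IsMatch(text, @"\d"); // preferred

int wordCount = Regex.Matches(text, @"\b\w+\b").Count; // flagged by CA1875: materializes every Match just to count
int wordCountCheaper = Regex.Count(text, @"\b\w+\b");  // preferred

Console.WriteLine((hasDigit, hasDigitCheaper, wordCount, wordCountCheaper));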

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_input = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly Regex s_regex = new Regex(@"\b\w+\b", RegexOptions.NonBacktracking);

    [Benchmark(Baseline = true)]
    public int MatchesCount() => s_regex.Matches(s_input).Count;

    [Benchmark]
    public int Count() => s_regex.Count(s_input);
}
Method Mean Ratio Allocated Alloc Ratio
MatchesCount 680.4 ms 1.00 665530176 B 1.00
Count 219.0 ms 0.32 0.00

Regex is one form of searching, but there are other primitives and helpers throughout .NET for various forms of searching, and they’ve seen meaningful improvements in .NET 10, as well.

SearchValues

When discussing performance improvements in .NET 8, I called out two changes that were my favorites. The first was dynamic PGO. The second was SearchValues.

SearchValues provides a mechanism for precomputing optimal strategies for searching. .NET 8 introduced overloads of SearchValues.Create that produce SearchValues<byte> and SearchValues<char>, and corresponding overloads of IndexOfAny and friends that accept such instances. If there’s a set of values you’ll be searching for over and over and over, you can create one of these instances once, cache it, and then use it for all subsequent searches for those values, e.g.

private static readonly SearchValues<char> s_validBase64Chars = SearchValues.Create("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/");

internal static bool IsValidBase64(ReadOnlySpan<char> input) =>
    !input.ContainsAnyExcept(s_validBase64Chars);

There are a plethora of different implementations used by SearchValues<T> behind the scenes, each of which is selected and configured based on the T and the exact nature of the target values for which we’re searching. dotnet/runtime#106900 adds another, which both helps to shave off several instructions in the core vectorized search loop, and helps to highlight just how nuanced these different algorithms can be. Previously, if four target byte values were provided, and they weren’t in a contiguous range, SearchValues.Create would choose an implementation that just uses four vectors, one per target byte, and does four comparisons (one against each target vector) for each input vector being tested. However, there’s already a specialization that’s used for more than five target bytes when all of the target bytes are ASCII. This PR allows that specialization to be used for four or five targets as well when the lower nibble (the bottom four bits) of each of the targets is unique, and in doing so, it becomes several instructions cheaper: rather than doing four comparisons, it can do a single shuffle and equality check.
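
To make the trick concrete, here's a small sketch of the idea outside of the actual SearchValues implementation (the helper and its table layout are illustrative, not the runtime's code): each input byte's low nibble is used as a shuffle index into a 16-entry table holding the one target byte with that nibble (or a filler whose low nibble differs, so it can never match), and a single equality comparison then flags the matching lanes.

using System.Runtime.Intrinsics;

static Vector128<byte> MatchByLowNibble(Vector128<byte> input, Vector128<byte> table)
{
    // Each byte's low nibble selects the table entry for the only target byte
    // with that nibble (non-target slots hold a filler that can never match).
    Vector128<byte> lowNibbles = input & Vector128.Create((byte)0x0F);
    Vector128<byte> expected = Vector128.Shuffle(table, lowNibbles);

    // A lane is all-ones exactly when the corresponding input byte is a target.
    return Vector128.Equals(input, expected);
}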

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly byte[] s_haystack = new HttpClient().GetByteArrayAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;
    private static readonly SearchValues<byte> s_needle = SearchValues.Create("\0\r&<"u8);

    [Benchmark]
    public int Count()
    {
        int count = 0;

        ReadOnlySpan<byte> haystack = s_haystack.AsSpan();
        int pos;
        while ((pos = haystack.IndexOfAny(s_needle)) >= 0)
        {
            count++;
            haystack = haystack.Slice(pos + 1);
        }

        return count;
    }
}
Method Runtime Mean Ratio
Count .NET 9.0 3.704 ms 1.00
Count .NET 10.0 2.668 ms 0.72

dotnet/runtime#107798 improves another such algorithm, used when AVX512 is available. One of the fallback strategies employed by SearchValues<char> is a vectorized “probabilistic map”, basically a Bloom filter. It maintains a bitmap with a bit for each possible byte value; for every char in the target set, the bits corresponding to that char's low byte and high byte are set. When testing whether an input char is in the set, the implementation checks whether the bits for both of that char's bytes are set: if at least one isn't, the char definitely isn't in the target set, and if both are, more validation is needed to determine actual membership. This can make it very efficient to rule out large amounts of input that definitely are not in the set and then only spend more effort on input that might be. The implementation involves various shuffle, shift, and permute operations, and this change is able to use a better set of instructions that reduces the number needed.
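
Conceptually, the fast-path check looks something like the following sketch (the bool[]-style bitmap here is purely illustrative; the real implementation packs the bits and does this with vector instructions):

// A char can only possibly be in the target set if the bits for both of its
// bytes are set; a single clear bit definitively rules it out.
static bool MightContain(ReadOnlySpan<bool> bitmap, char c) // 256-entry bitmap
{
    byte low = (byte)c;
    byte high = (byte)(c >> 8);
    return bitmap[low] && bitmap[high];
}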

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly SearchValues<char> s_searchValues = SearchValues.Create("ßäöüÄÖÜ");
    private string _input = new string('\n', 10_000);

    [Benchmark]
    public int IndexOfAny() => _input.AsSpan().IndexOfAny(s_searchValues);
}
Method Runtime Mean Ratio
IndexOfAny .NET 9.0 437.7 ns 1.00
IndexOfAny .NET 10.0 404.7 ns 0.92

While .NET 8 introduced support for SearchValues<byte> and SearchValues<char>, .NET 9 introduced support for SearchValues<string>. SearchValues<string> is used a bit differently from SearchValues<byte> and SearchValues<char>; whereas SearchValues<byte> is used to search for target bytes within a collection of bytes and SearchValues<char> is used to search for target chars within a collection of chars, SearchValues<string> is used to search for target strings within a single string (or span of chars). In other words, it’s a multi-substring search. Let’s say you have the regular expression (?i)hello|world, which specifies that it should look for either “hello” or “world” in a case-insensitive manner; the SearchValues equivalent of that is SearchValues.Create(["hello", "world"], StringComparison.OrdinalIgnoreCase) (in fact, if you specify that pattern, the Regex compiler and source generator will use such a SearchValues.Create call under the covers in order to optimize the search).
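
Used directly, that looks something like this (a small sketch; the member names are illustrative):

private static readonly SearchValues<string> s_helloOrWorld =
    SearchValues.Create(["hello", "world"], StringComparison.OrdinalIgnoreCase);

internal static bool ContainsHelloOrWorld(ReadOnlySpan<char> text) =>
    text.ContainsAny(s_helloOrWorld);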

SearchValues<string> also gets better in .NET 10. A key algorithm used by SearchValues<string> whenever possible and relevant is called “Teddy,” and enables performing a vectorized search for multiple substrings. In its core processing loop, when using AVX512, there are two instructions, a PermuteVar8x64x2 and an AlignRight; dotnet/runtime#107819 recognizes that those can be replaced by a single PermuteVar64x8x2. Similarly, when on Arm64, dotnet/runtime#118110 plays the instructions game and replaces a use of ExtractNarrowingSaturateUpper with the slightly cheaper UnzipEven.

SearchValues<string> is also able to optimize searching for a single string, spending more time to come up with optimal search parameters than does a simpler IndexOf(string, StringComparison) call. Similar to the approach with the probabilistic maps employed earlier, the vectorized search can yield false positives that then need to be weeded out. In some cases by construction, however, we know that false positives aren’t possible; dotnet/runtime#108368 extends an existing optimization that was case-sensitive only to also apply in some case-insensitive uses, such that we can avoid doing the extra validation step in more cases. For the candidate verification that remains, dotnet/runtime#108365 also significantly reduces overhead in a variety of cases, including adding specialized handling for needles (the things being searched for) of up to 16 characters (previously it was only up to 8), and precomputing more information to make the verification faster.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_haystack = new HttpClient().GetStringAsync(@"https://www.gutenberg.org/cache/epub/3200/pg3200.txt").Result;

    private static readonly Regex s_the = new("the", RegexOptions.IgnoreCase | RegexOptions.Compiled);
    private static readonly Regex s_something = new("something", RegexOptions.IgnoreCase | RegexOptions.Compiled);

    [Benchmark]
    public int CountThe() => s_the.Count(s_haystack);

    [Benchmark]
    public int CountSomething() => s_something.Count(s_haystack);
}
Method Runtime Mean Ratio
CountThe .NET 9.0 9.881 ms 1.00
CountThe .NET 10.0 7.799 ms 0.79
CountSomething .NET 9.0 2.466 ms 1.00
CountSomething .NET 10.0 2.027 ms 0.82

dotnet/runtime#118108 also adds a “packed” variant of the single-string implementation, meaning it’s able to handle common cases like ASCII more efficiently by ignoring a character’s upper zero byte in order to fit twice as much into a vector.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;
using System.Text.RegularExpressions;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private static readonly string s_haystack = string.Concat(Enumerable.Repeat("Sherlock Holm_s", 8_000));
    private static readonly SearchValues<string> s_needles = SearchValues.Create(["Sherlock Holmes"], StringComparison.OrdinalIgnoreCase);

    [Benchmark] 
    public bool ContainsAny() => s_haystack.AsSpan().ContainsAny(s_needles);
}
Method Runtime Mean Ratio
ContainsAny .NET 9.0 58.41 us 1.00
ContainsAny .NET 10.0 16.32 us 0.28

MemoryExtensions

The searching improvements continue beyond SearchValues, of course. Prior to .NET 10, the MemoryExtensions class already had a wealth of support for searching and manipulating spans, with extension methods like IndexOf, IndexOfAnyExceptInRange, ContainsAny, Count, Replace, SequenceCompareTo, and more (the set was further extended as well by dotnet/runtime#112951, which added CountAny and ReplaceAny), but the vast majority of these were limited to work with T types constrained to be IEquatable<T>. And in practice, many of the types you want to search do in fact implement IEquatable<T>. However, you might be in a generic context with an unconstrained T, such that even if the T used to instantiate the generic type or method is equatable, it’s not evident in the type system and thus the MemoryExtensions method couldn’t be used. And of course there are scenarios where you want to be able to supply a different comparison routine. Both of these scenarios show up, for example, in the implementation of LINQ’s Enumerable.Contains; if the source IEnumerable<TSource> is actually something we could treat as a span, like TSource[] or List<TSource>, it’d be nice to be able to just delegate to the optimized MemoryExtensions.Contains<T>, but a) Enumerable.Contains doesn’t constrain its TSource : IEquatable<TSource>, and b) Enumerable.Contains accepts an optional comparer.

To address this, dotnet/runtime#110197 adds ~30 new overloads to the MemoryExtensions class. These overloads all parallel existing methods, but remove the IEquatable<T> (or IComparable<T>) constraint on the generic method parameter and accept an optional IEqualityComparer<T>? (or IComparer<T>). When no comparer or a default comparer is supplied, they can fall back to using the same vectorized logic for relevant types, and otherwise can provide as optimal an implementation as they can muster, based on the nature of T and the supplied comparer.
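
For example, assuming the new comparer-accepting overloads described above, searching a span of strings case-insensitively no longer requires hand-rolling a loop (the data here is illustrative):

ReadOnlySpan<string> names = ["Anna", "Elsa", "Olaf", "Sven"];

bool hasElsa = names.Contains("ELSA", StringComparer.OrdinalIgnoreCase);
int olafIndex = names.IndexOf("olaf", StringComparer.OrdinalIgnoreCase);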

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private IEnumerable<int> _data = Enumerable.Range(0, 1_000_000).ToArray();

    [Benchmark]
    public bool Contains() => _data.Contains(int.MaxValue, EqualityComparer<int>.Default);
}
Method Runtime Mean Ratio
Contains .NET 9.0 213.94 us 1.00
Contains .NET 10.0 67.86 us 0.32

(It’s also worth highlighting that with the “first-class” span support in C# 14, many of these extensions from MemoryExtensions now naturally show up directly on types like string.)
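
For instance, a span-based extension like ContainsAny can now be invoked directly on a string receiver, with the compiler applying the string-to-ReadOnlySpan<char> conversion (a sketch; the helper and values are illustrative):

private static readonly SearchValues<char> s_vowels = SearchValues.Create("aeiouAEIOU");

internal static bool ContainsVowel(string text) =>
    text.ContainsAny(s_vowels); // no explicit AsSpan() needed with C# 14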

This kind of searching often shows up as part of other APIs. For example, encoding APIs often need to first find something to be encoded, and that searching can be accelerated by using one of these efficiently implemented search APIs. There are dozens and dozens of existing examples of that throughout the core libraries, many of the places using SearchValues or these various MemoryExtensions methods. dotnet/runtime#110574 adds another, speeding up string.Normalize‘s argument validation. The previous implementation walked character by character looking for the first surrogate; the new implementation gives that search a jump start by using the vectorized IndexOfAnyInRange.
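
The idea is roughly the following (a sketch of the concept rather than the exact code):

// Vectorized scan for the first UTF-16 surrogate code unit; -1 means there are none.
static int IndexOfFirstSurrogate(ReadOnlySpan<char> text) =>
    text.IndexOfAnyInRange('\ud800', '\udfff');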

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private string _input = "This is a test. This is only a test. Nothing to see here. \u263A\uFE0F";

    [Benchmark]
    public string Normalize() => _input.Normalize();
}
Method Runtime Mean Ratio
Normalize .NET 9.0 104.93 ns 1.00
Normalize .NET 10.0 88.94 ns 0.85

dotnet/runtime#110478 similarly updates HttpUtility.UrlDecode to use the vectorized IndexOfAnyInRange. It also avoids allocating the resulting string if nothing needs to be decoded.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Web;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public string UrlDecode() => HttpUtility.UrlDecode("aaaaabbbbb%e2%98%ba%ef%b8%8f");
}
Method Runtime Mean Ratio
UrlDecode .NET 9.0 59.42 ns 1.00
UrlDecode .NET 10.0 54.26 ns 0.91

Similarly, dotnet/runtime#114494 employs SearchValues in OptimizedInboxTextEncoder, which is the core implementation that backs the various encoders like JavaScriptEncoder and HtmlEncoder in the System.Text.Encodings.Web library.

JSON

JSON is at the heart of many different domains, having become the lingua franca of data interchange on the web. With System.Text.Json as the recommended library for working with JSON in .NET, it is constantly evolving to meet additional performance requirements. .NET 10 sees it updated with both improvements to the performance of existing methods as well as new methods specifically geared towards helping with performance.

The JsonSerializer type is layered on top of the lower-level Utf8JsonReader and Utf8JsonWriter types. When serializing, JsonSerializer needs an instance of Utf8JsonWriter, which is a class, and any associated objects, such as an IBufferWriter instance. For any temporary buffers it requires, it’ll use rented buffers from ArrayPool<byte>, but for these helper objects, it maintains its own cache, to avoid needing to recreate them at very high frequencies. That cache was being used for all asynchronous streaming serialization operations, but as it turns out, it wasn’t being used for synchronous streaming serialization operations. dotnet/runtime#112745 fixes that to make the use of the cache consistent, avoiding these intermediate allocations.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Data _data = new();
    private MemoryStream _stream = new();

    [Benchmark]
    public void Serialize()
    {
        _stream.Position = 0;
        JsonSerializer.Serialize(_stream, _data);
    }

    public class Data
    {
        public int Value1 { get; set; }
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
Serialize .NET 9.0 115.36 ns 1.00 176 B 1.00
Serialize .NET 10.0 77.73 ns 0.67 0.00

Earlier when discussing collections, it was noted that OrderedDictionary<TKey, TValue> now exposes overloads of methods like TryAdd that return the relevant item’s index, which then allows subsequent access to avoid the more costly key-based lookup. As it turns out, JsonObject‘s indexer needs to do exactly that, first indexing into the dictionary by key, doing some checks, and then indexing again. It’s now been updated to use these new overloads. As those lookups typically dominate the cost of using the setter, this can upwards of double the throughput of JsonObject‘s indexer:

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json.Nodes;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private JsonObject _obj = new();

    [Benchmark]
    public void Set() => _obj["key"] = "value";
}
Method Runtime Mean Ratio
Set .NET 9.0 40.56 ns 1.00
Set .NET 10.0 16.96 ns 0.42

Most of the improvements in System.Text.Json, however, are actually via new APIs. This same “avoid a double lookup” issue shows up in other places, for example wanting to add a property to a JsonObject but only if it doesn’t yet exist. With dotnet/runtime#111229 from @Flu, that’s addressed with a new TryAdd method (as well as a TryAdd overload and an overload of the existing TryGetPropertyValue that, as with OrderedDictionary<>, returns the index of the property).

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json.Nodes;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private JsonObject _obj = new();
    private JsonNode _value = JsonValue.Create("value");

    [Benchmark(Baseline = true)]
    public void NonOverwritingSet_Manual()
    {
        _obj.Remove("key");
        if (!_obj.ContainsKey("key"))
        {
            _obj.Add("key", _value);
        }
    }

    [Benchmark]
    public void NonOverwritingSet_TryAdd()
    {
        _obj.Remove("key");
        _obj.TryAdd("key", _value);
    }
}
Method Mean Ratio
NonOverwritingSet_Manual 16.59 ns 1.00
NonOverwritingSet_TryAdd 14.31 ns 0.86

dotnet/runtime#109472 from @karakasa also imbues JsonArray with new RemoveAll and RemoveRange methods. In addition to the usability benefits these can provide, they have the same performance benefits they have on List<T> (which is not a coincidence, given that JsonArray is, as an implementation detail, a wrapper for a List<JsonNode?>). Removing “incorrectly” from a List<T> can end up being an O(N^2) endeavor, e.g. when I run this:

// dotnet run -c Release -f net10.0

using System.Diagnostics;

for (int i = 100_000; i < 700_000; i += 100_000)
{
    List<int> items = Enumerable.Range(0, i).ToList();

    Stopwatch sw = Stopwatch.StartNew();
    while (items.Count > 0)
    {
        items.RemoveAt(0); // uh oh
    }
    Console.WriteLine($"{i} => {sw.Elapsed}");
}

I get output like this:

100000 => 00:00:00.2271798
200000 => 00:00:00.8328727
300000 => 00:00:01.9820088
400000 => 00:00:03.9242008
500000 => 00:00:06.9549009
600000 => 00:00:11.1104903

Note how as the list length grows linearly, the elapsed time is growing non-linearly. That’s primarily because each RemoveAt(0) is requiring the entire remainder of the list to shift down, which is O(N) in the length of the list. That means we get N + (N-1) + (N-2) + ... + 1 operations, which is N(N+1)/2, which is O(N^2). Both RemoveRange and RemoveAll are able to avoid those costs by doing the shifting only once per element. Of course, even without such methods, I could have written my previous removal loop in a way that keeps it linear, namely by repeatedly removing the last element rather than the first (and, of course, if I really intended on removing everything, I could have just used Clear). Typical use, however, ends up removing a smattering of elements, and being able to just delegate and not worry about accidentally incurring a non-linear overhead is helpful.

// dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Text.Json.Nodes;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private JsonArray _arr;

    [IterationSetup]
    public void Setup() =>
        _arr = new JsonArray(Enumerable.Range(0, 100_000).Select(i => (JsonNode)i).ToArray());

    [Benchmark]
    public void Manual()
    {
        int i = 0;
        while (i < _arr.Count)
        {
            if (_arr[i]!.GetValue<int>() % 2 == 0)
            {
                _arr.RemoveAt(i);
            }
            else
            {
                i++;
            }
        }
    }

    [Benchmark]
    public void RemoveAll() => _arr.RemoveAll(static n => n!.GetValue<int>() % 2 == 0);
}
Method Mean Allocated
Manual 355.230 ms
RemoveAll 2.022 ms 24 B

(Note that while RemoveAll in this micro-benchmark is more than 150x faster, it does have that small allocation that the manual implementation doesn’t. That’s due to a closure in the implementation while delegating to List<T>.RemoveAll. This could be avoided in the future if necessary.)

Another frequently-requested new method is from dotnet/runtime#116363, which adds new Parse methods to JsonElement. If a developer wants a JsonElement and only needs it temporarily, the most efficient mechanism available today is still the right answer: Parse a JsonDocument, use its RootElement, and then only when done with the JsonElement, dispose of the JsonDocument, e.g.

using (JsonDocument doc = JsonDocument.Parse(json))
{
    DoSomething(doc.RootElement);
}

That, however, is really only viable when the JsonElement is used in a scoped manner. If a developer needs to hand out the JsonElement, they’re left with three options:

  1. Parse into a JsonDocument, clone its RootElement, dispose of the JsonDocument, hand out the clone. While using JsonDocument is good for the temporary case, making a clone like this entails a fair bit of overhead:
    JsonElement clone;
    using (JsonDocument doc = JsonDocument.Parse(json))
    {
        clone = doc.RootElement.Clone();
    }
    return clone;
  2. Parse into a JsonDocument and just hand out its RootElement. Please do not do this! JsonDocument.Parse creates a JsonDocument that’s backed by an array from the ArrayPool<>. If you don’t Dispose of the JsonDocument in this case, an array will be rented and then never returned to the pool. That’s not the end of the world; if someone else requests an array from the pool and the pool doesn’t have one cached to give them, it’ll just manufacture one, so eventually the pool’s arrays will be replenished. But the arrays in the pool are generally “more valuable” than others, because they’ve generally been around longer, and are thus more likely to be in higher generations. By using an ArrayPool array rather than a new array for a shorter-lived JsonDocument, you’re more likely throwing away an array that’ll have net more impact on the overall system. The impact of that is not easily seen in a micro-benchmark.
    return JsonDocument.Parse(json).RootElement; // please don't do this
  3. Use JsonSerializer to deserialize a JsonElement. This is a simple and reasonable one-liner, but it does invoke the JsonSerializer machinery, which brings in more overhead.
    return JsonSerializer.Deserialize<JsonElement>(json);

Now in .NET 10, there’s a fourth option:

  • Use JsonElement.Parse. This is the right answer. Use this instead of (1), (2), or (3).
    //  dotnet run -c Release -f net10.0 --filter "*"
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Text.Json;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [MemoryDiagnoser(displayGenColumns: false)]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private const string JsonString = """{ "name": "John", "age": 30, "city": "New York" }""";
    
        [Benchmark]
        public JsonElement WithClone()
        {
            using JsonDocument d = JsonDocument.Parse(JsonString);
            return d.RootElement.Clone();
        }
    
        [Benchmark]
        public JsonElement WithoutClone() =>
            JsonDocument.Parse(JsonString).RootElement; // please don't do this in production code
    
        [Benchmark]
        public JsonElement WithDeserialize() =>
            JsonSerializer.Deserialize<JsonElement>(JsonString);
    
        [Benchmark]
        public JsonElement WithParse() =>
            JsonElement.Parse(JsonString);
    }
    Method Mean Allocated
    WithClone 303.7 ns 344 B
    WithoutClone 249.6 ns 312 B
    WithDeserialize 397.3 ns 272 B
    WithParse 261.9 ns 272 B

With JSON being used as an encoding for many modern protocols, streaming large JSON payloads has become very common. And for most use cases, it’s already possible to stream JSON well with System.Text.Json. However, in previous releases there hasn't been a good way to stream partial string properties; string properties had to have their values written in one operation. If you’ve got small strings, that’s fine. If you’ve got really, really large strings, and those strings are lazily-produced in chunks, however, you ideally want the ability to write those chunks of the property as you have them, rather than needing to buffer up the value in its entirety. dotnet/runtime#101356 augmented Utf8JsonWriter with a WriteStringValueSegment method, which enables such partial writes. That addresses the majority case; however, there’s a very common case where additional encoding of the value is desirable, and an API that automatically handles that encoding makes things both efficient and easy. These modern protocols often transmit large blobs of binary data within the JSON payloads. Typically, these blobs end up being Base64 strings as properties on some JSON object. Previously, outputting such blobs required Base64-encoding the whole input and then writing the resulting bytes or chars in their entirety into the Utf8JsonWriter. To address that, dotnet/runtime#111041 adds a WriteBase64StringSegment method to Utf8JsonWriter. For those sufficiently motivated to reduce memory overheads, and to enable the streaming of such payloads, WriteBase64StringSegment enables passing in a span of bytes, which the implementation will Base64-encode and write to the JSON property; it can be called multiple times with isFinalSegment=false, such that the writer will continue appending the resulting Base64 data to the property, until it’s called with a final segment that ends the property. (Utf8JsonWriter has long had a WriteBase64String method; this new WriteBase64StringSegment simply enables the value to be written in pieces.) The primary benefit of such a method is reduced latency and working set, as the entirety of the data payload needn’t be buffered before being written out, but we can still come up with a throughput benchmark that shows benefits:

//  dotnet run -c Release -f net10.0 --filter "*"

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Buffers;
using System.Text.Json;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private Utf8JsonWriter _writer = new(Stream.Null);
    private Stream _source = new MemoryStream(Enumerable.Range(0, 10_000_000).Select(i => (byte)i).ToArray());

    [Benchmark]
    public async Task Buffered()
    {
        _source.Position = 0;
        _writer.Reset();

        byte[] buffer = ArrayPool<byte>.Shared.Rent(0x1000);

        int totalBytes = 0;
        int read;
        while ((read = await _source.ReadAsync(buffer.AsMemory(totalBytes))) > 0)
        {
            totalBytes += read;
            if (totalBytes == buffer.Length)
            {
                byte[] newBuffer = ArrayPool<byte>.Shared.Rent(buffer.Length * 2);
                Array.Copy(buffer, newBuffer, totalBytes);
                ArrayPool<byte>.Shared.Return(buffer);
                buffer = newBuffer;
            }
        }

        _writer.WriteStartObject();
        _writer.WriteBase64String("data", buffer.AsSpan(0, totalBytes));
        _writer.WriteEndObject();
        await _writer.FlushAsync();

        ArrayPool<byte>.Shared.Return(buffer);
    }

    [Benchmark]
    public async Task Streaming()
    {
        _source.Position = 0;
        _writer.Reset();

        byte[] buffer = ArrayPool<byte>.Shared.Rent(0x1000);

        _writer.WriteStartObject();
        _writer.WritePropertyName("data");
        int read;
        while ((read = await _source.ReadAsync(buffer)) > 0)
        {
            _writer.WriteBase64StringSegment(buffer.AsSpan(0, read), isFinalSegment: false);
        }
        _writer.WriteBase64StringSegment(default, isFinalSegment: true);
        _writer.WriteEndObject();
        await _writer.FlushAsync();

        ArrayPool<byte>.Shared.Return(buffer);
    }
}
Method Mean
Buffered 3.925 ms
Streaming 1.555 ms
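
For the plain-string case mentioned above, usage of WriteStringValueSegment looks roughly like this (a minimal sketch; the chunk source is illustrative and the parameter name follows the description above):

using System.Text;
using System.Text.Json;

using MemoryStream stream = new();
using Utf8JsonWriter writer = new(stream);

writer.WriteStartObject();
writer.WritePropertyName("description");

// Imagine these chunks being produced lazily rather than all buffered up front.
string[] chunks = ["first part, ", "second part, ", "third part"];
foreach (string chunk in chunks)
{
    writer.WriteStringValueSegment(chunk, isFinalSegment: false);
}
writer.WriteStringValueSegment(ReadOnlySpan<char>.Empty, isFinalSegment: true);

writer.WriteEndObject();
writer.Flush();

Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));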

.NET 9 saw the introduction of the JsonMarshal class and the GetRawUtf8Value method, which provides raw access to the underlying bytes of property values fronted by a JsonElement. For situations where the name of the property is also needed, dotnet/runtime#107784 from @mwadams provides a corresponding JsonMarshal.GetRawUtf8PropertyName method.

Diagnostics

Over the years, I’ve seen a fair number of codebases introduce a struct-based ValueStopwatch; I think there are even a few still floating around the Microsoft.Extensions libraries. The premise behind these is that System.Diagnostics.Stopwatch is a class, but it simply wraps a long (a timestamp), so rather than writing code like the following that allocates:

Stopwatch sw = Stopwatch.StartNew();
... // something being measured
sw.Stop();
TimeSpan elapsed = sw.Elapsed;

you could write:

ValueStopwatch sw = ValueStopwatch.StartNew();
... // something being measured
sw.Stop();
TimeSpan elapsed = sw.Elapsed;

and avoid the allocation. Stopwatch subsequently gained helpers that make such a ValueStopwatch less appealing, since as of .NET 7, I can write it instead like this:

long start = Stopwatch.GetTimestamp();
... // something being measured
long end = Stopwatch.GetTimestamp();
TimeSpan elapsed = Stopwatch.GetElapsedTime(start, end);

However, that’s not quite as natural as the original example, which just uses Stopwatch. Wouldn’t it be nice if you could write the original example and have it execute as if it were the latter? With all the investments in .NET 9 and .NET 10 around escape analysis and stack allocation, you now can. dotnet/runtime#111834 streamlines the Stopwatch implementation so that StartNew, Elapsed, and Stop are fully inlineable. At that point, the JIT can see that the allocated Stopwatch instance never escapes the frame, and it can be stack allocated.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Diagnostics;
using System.Runtime.CompilerServices;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[DisassemblyDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    [Benchmark]
    public TimeSpan WithGetTimestamp()
    {
        long start = Stopwatch.GetTimestamp();
        Nop();
        long end = Stopwatch.GetTimestamp();

        return Stopwatch.GetElapsedTime(start, end);
    }

    [Benchmark]
    public TimeSpan WithStartNew()
    {
        Stopwatch sw = Stopwatch.StartNew();
        Nop();
        sw.Stop();

        return sw.Elapsed;
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void Nop() { }
}
Method Runtime Mean Ratio Code Size Allocated Alloc Ratio
WithGetTimestamp .NET 9.0 28.95 ns 1.00 148 B NA
WithGetTimestamp .NET 10.0 28.32 ns 0.98 130 B NA
WithStartNew .NET 9.0 38.62 ns 1.00 341 B 40 B 1.00
WithStartNew .NET 10.0 28.21 ns 0.73 130 B 0.00

dotnet/runtime#117031 is a nice improvement that helps reduce working set for anyone using an EventSource and that has events with really large IDs. For efficiency purposes, EventSource was using an array to map event ID to the data for that ID; lookup needs to be really fast, since the lookup is performed on every event write in order to look up the metadata for the event being written. In many EventSources, the developer authors events with a small, contiguous range of IDs, and the array ends up being very dense. But if a developer authors any event with a really large ID (which we’ve seen happen in multiple real-world projects, due to splitting events into multiple partial class definitions shared between different projects and selecting IDs for each file unlikely to conflict with each other), an array is still created with a length to accommodate that large ID, which can result in a really big allocation that persists for the lifetime of the event source, and a lot of that allocation ends up just being wasted space. Thankfully, since EventSource was written years ago, the performance of Dictionary<TKey, TValue> has increased significantly, to the point where it’s able to efficiently handle the lookups without needing the event IDs to be dense. Note that there should really only ever be one instance of a given EventSource-derived type; the recommended pattern is to store one into a static readonly field and just use that one. So the overheads incurred as part of this are primarily about the impact that single large allocation has on working set for the duration of the process. To make it easier to demonstrate, though, I’m doing something you’d never, ever do, and creating a new instance per event. Don’t try this at home, or at least not in production.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Diagnostics;
using System.Diagnostics.Tracing;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private MyListener _listener = new();

    [Benchmark]
    public void Oops()
    {
        using OopsEventSource oops = new();
        oops.Oops();
    }

    [EventSource(Name = "MyTestEventSource")]
    public sealed class OopsEventSource : EventSource
    {
        [Event(12_345_678, Level = EventLevel.Error)]
        public void Oops() => WriteEvent(12_345_678);
    }

    private sealed class MyListener : EventListener
    {
        protected override void OnEventSourceCreated(EventSource eventSource) => 
            EnableEvents(eventSource, EventLevel.Error);
    }
}
Method Runtime Mean Ratio Allocated Alloc Ratio
Oops .NET 9.0 1,876.21 us 1.00 1157428.01 KB 1.000
Oops .NET 10.0 22.06 us 0.01 19.21 KB 0.000

dotnet/runtime#107333 from @AlgorithmsAreCool reduces thread contention involved in starting and stopping an Activity. ActivitySource maintains a thread-safe list of listeners, which only changes on the rare occasion that a listener is registered or unregistered. Any time an Activity is created or destroyed (which can happen at very high frequency), each listener gets notified, which requires walking through the list of listeners. The previous code used a lock to protect that listeners list, and to avoid notifying the listener while holding the lock, the implementation would take the lock, determine the next listener, release the lock, notify the listener, and rinse and repeat until it had notified all listeners. This could result in significant contention, as multiple threads started and stopped Activity instances. Now with this PR, the list switches to be an immutable array. Each time the list changes, a new array is created with the modified set of listeners. This makes the act of changing the listeners list much more expensive, but, as noted, that’s generally a rarity. And in exchange, notifying listeners becomes much cheaper.
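
The shape of that trade-off looks roughly like the following simplified sketch of the copy-on-write pattern (this is illustrative, not the actual ActivitySource code):

internal sealed class CopyOnWriteListeners<T>
{
    private readonly object _lock = new();
    private T[] _listeners = [];

    public void Add(T listener)
    {
        // Rare path: take the lock, copy, and publish a new snapshot.
        // (Production code would also want volatile publication semantics.)
        lock (_lock)
        {
            T[] updated = new T[_listeners.Length + 1];
            Array.Copy(_listeners, updated, _listeners.Length);
            updated[^1] = listener;
            _listeners = updated;
        }
    }

    public void NotifyAll(Action<T> notify)
    {
        // Hot path: no locking at all; iterate whatever snapshot is currently published.
        foreach (T listener in _listeners)
        {
            notify(listener);
        }
    }
}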

dotnet/runtime#117334 from @petrroll avoids the overheads of callers needing to interact with null loggers by excluding them in LoggerFactory.CreateLoggers, while dotnet/runtime#117342 seals the NullLogger type so that type checks against NullLogger (e.g. if (logger is NullLogger)) can be made more efficient by the JIT. And dotnet/roslyn-analyzers# from @mpidash will help developers realize that their logging operations aren’t as cheap as they thought they might be. Consider this code:

[LoggerMessage(Level = LogLevel.Information, Message = "This happened: {Value}")]
private static partial void Oops(ILogger logger, string value);

public static void UnexpectedlyExpensive()
{
    Oops(NullLogger.Instance, $"{Guid.NewGuid()} {DateTimeOffset.UtcNow}");
}

It’s using the logger source generator, which will emit an implementation dedicated to this log method, including a log level check so that it doesn’t pay the bulk of the costs associated with logging unless the associated level is enabled:

[global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.Extensions.Logging.Generators", "6.0.5.2210")]
private static partial void Oops(global::Microsoft.Extensions.Logging.ILogger logger, global::System.String value)
{
    if (logger.IsEnabled(global::Microsoft.Extensions.Logging.LogLevel.Information))
    {
        __OopsCallback(logger, value, null);
    }
}

Except, the call site is doing non-trivial work, creating a new Guid, fetching the current time, and allocating a string via string interpolation, even though it might all be wasted work if LogLevel.Information isn't enabled. The new CA1873 analyzer flags exactly that:

Analyzer for expensive logging sites
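
A typical way to address the warning is to guard the expensive argument construction explicitly (a sketch based on the example above):

public static void NotSoExpensive(ILogger logger)
{
    if (logger.IsEnabled(LogLevel.Information))
    {
        // Only pay for the Guid, the timestamp, and the interpolated string
        // when the message will actually be logged.
        Oops(logger, $"{Guid.NewGuid()} {DateTimeOffset.UtcNow}");
    }
}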

Cryptography

A ton of effort went into cryptography in .NET 10, almost entirely focused on post‑quantum cryptography (PQC). PQC refers to a class of cryptographic algorithms designed to resist attacks from quantum computers, machines that could one day render classic cryptographic algorithms like Rivest–Shamir–Adleman (RSA) or Elliptic Curve Cryptography (ECC) insecure by efficiently solving problems such as integer factorization and discrete logarithms. With the looming threat of “harvest now, decrypt later” attacks (where a well-funded attacker idly captures encrypted internet traffic, expecting that they’ll be able to decrypt and read it later) and the multi-year process required to migrate critical infrastructure, the transition to quantum‑safe cryptographic standards has become an urgent priority. In this light, .NET 10 adds support for ML-DSA (a National Institute of Standards and Technology PQC digital signature algorithm), Composite ML-DSA (a draft Internet Engineering Task Force specification for creating signatures that combine ML-DSA with a classical crypto algorithm like RSA), SLH-DSA (another NIST PQC signature algorithm), and ML-KEM (a NIST PQC key encapsulation algorithm). This is an important step towards quantum-resistant security, enabling developers to begin experimenting with and planning for post-quantum identity and authenticity scenarios. While this PQC effort is not about performance, the design of these new types is very much focused on more modern sensibilities that have performance as a key motivator. While older types, like those that derive from AsymmetricAlgorithm, are designed around arrays, with support for spans tacked on later, the new types are designed with spans at the center, and with array-based APIs available only for convenience.

There are, however, some cryptography-related changes in .NET 10 that are focused squarely on performance. One is around improving OpenSSL “digest” performance. .NET’s cryptography stack is built on top of the underlying platform’s native cryptographic libraries; on Linux, that means using OpenSSL, making it a hot path for common operations like hashing, signing, and TLS. “Digest algorithms” are the family of cryptographic hash functions (for example, SHA‑256, SHA‑512, SHA‑3) that turn arbitrary input into fixed‑size fingerprints; they’re used all over the place, from verifying packages to TLS handshakes to content de-duplication. While .NET can use OpenSSL 1.x if that’s what’s offered by the OS, since .NET 6 it’s been focusing more and more on optimizing for and lighting up with OpenSSL 3 (the previously-discussed PQC support requires OpenSSL 3.5 or later). With OpenSSL 1.x, OpenSSL exposed getter functions like EVP_sha256(), which were cheap functions that just returned a direct pointer to the EVP_MD for the relevant hash implementation. OpenSSL 3.x introduced a provider model, with a fetch function (EVP_MD_fetch) for retrieving the provider-backed implementation. To keep source compatibility, the 1.x-era getter functions were changed to return pointers to compatibility shims: when you pass one of these legacy EVP_MD pointers into operations like EVP_DigestInit_ex, OpenSSL performs an “implicit fetch” under the covers to resolve the actual implementation. That implicit fetch path adds extra work, on each use. Instead, OpenSSL recommends consumers do an explicit fetch and then cache the result for reuse. That’s what dotnet/runtime#118613 does. The result is leaner and faster cryptographic hash operations on OpenSSL‑based platforms.

// dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Security.Cryptography;

BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);

[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
public partial class Tests
{
    private byte[] _src = new byte[1024];
    private byte[] _dst = new byte[SHA256.HashSizeInBytes];

    [GlobalSetup]
    public void Setup() => new Random(42).NextBytes(_src);

    [Benchmark]
    public void Hash() => SHA256.HashData(_src, _dst);
}
Method Runtime Mean Ratio
Hash .NET 9.0 1,206.8 ns 1.00
Hash .NET 10.0 960.6 ns 0.80

A few other performance niceties have also found their way in.

  • AsnWriter.Encode. dotnet/runtime#106728 and dotnet/runtime#112638 add and then use throughout the crypto stack a callback-based mechanism to AsnWriter that enables encoding without forced allocation for the temporary encoded state.
  • SafeHandle singleton. dotnet/runtime#109391 employs a singleton SafeHandle in more places in X509Certificate to avoid temporary handle allocation.
  • Span-based ProtectedData. dotnet/runtime#109529 from @ChadNedzlek adds Span<byte>-based overloads to the ProtectedData class that enable protecting data without requiring the source or destinations to be in allocated arrays.
  • PemEncoding UTF-8. dotnet/runtime#109438 adds UTF-8 support to PemEncoding. PemEncoding, a utility class for parsing and formatting PEM (Privacy-Enhanced Mail)-encoded data such as that used in certificates and keys, previously worked only with chars. This change makes it possible to parse UTF-8 data directly without first needing to transcode to UTF-16, which dotnet/runtime#109564 then took advantage of.
  • FindByThumbprint. dotnet/runtime#109130 adds an X509Certificate2Collection.FindByThumbprint method. The implementation uses a stack-based buffer for the thumbprint value for each candidate certificate, eliminating the arrays that would otherwise be created in a naive manual implementation. dotnet/runtime#113606 then utilized this in SslStream.
  • SetKey. dotnet/runtime#113146 adds a span-based SymmetricAlgorithm.SetKey method, which can then be used to avoid creating unnecessary arrays.

Peanut Butter

As in every .NET release, there are a large number of PRs that help with performance in some fashion. The more of these that are addressed, the more the overall overhead for applications and services is lowered. Here are a smattering from this release:

  • GC. DATAS (Dynamic Adaptation To Application Sizes) was introduced in .NET 8 and enabled by default in .NET 9. Now in .NET 10, dotnet/runtime#105545 tuned DATAS to improve its overall behavior, cutting unnecessary work, smoothing out pauses (especially under high allocation rates), correcting fragmentation accounting that could cause extra short collections (gen1), and other such tweaks. The net result is fewer unnecessary collections, steadier throughput, and more predictable latency for allocation-heavy workloads. dotnet/runtime#118762 also adds several knobs for configuring how DATAS behaves, and in particular settings to fine-tune how Gen0 grows.
  • GCHandle. The GC supports various types of “handles” that allow for explicit management of resources in relation to GC operation. For example, you can create a “pinning handle,” which ensures that the GC will not move the object in question. Historically, these handles were surfaced to developers via the GCHandle type, but it has a variety of issues, including that it’s really easy to misuse due to lack of strong typing. To help address that, dotnet/runtime#111307 introduces a few new strongly-typed flavors of handles, with GCHandle<T>, PinnedGCHandle<T>, and WeakGCHandle<T>. These should not only address some of the usability issues, they’re also able to shave off a bit of the overheads incurred by the old design.
    // dotnet run -c Release -f net10.0 --filter "*"
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using System.Runtime.InteropServices;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private byte[] _array = new byte[16];
    
        [Benchmark(Baseline = true)]
        public void Old() => GCHandle.Alloc(_array, GCHandleType.Pinned).Free();
    
        [Benchmark]
        public void New() => new PinnedGCHandle<byte[]>(_array).Dispose();
    }
    Method Mean Ratio
    Old 27.80 ns 1.00
    New 22.73 ns 0.82
  • Mono interpreter. The Mono interpreter gained optimized support for several opcodes, including switches (dotnet/runtime#107423), new arrays (dotnet/runtime#107430), and memory barriers (dotnet/runtime#107325). But arguably more impactful was a series of more than a dozen PRs that enabled the interpreter to vectorize more operations with WebAssembly (Wasm). This included contributions like dotnet/runtime#114669, which enabled vectorization of shift operations, and dotnet/runtime#113743, which enabled vectorization of a plethora of operations like Abs, Divide, and Truncate. Other PRs used the Wasm-specific intrinsic APIs in more places, in order to accelerate on Wasm routines that were already accelerated on other architectures using architecture-specific intrinsics, e.g. dotnet/runtime#115062 used PackedSimd in the workhorse methods behind the hex conversion routines on Convert, like Convert.FromHexString.
  • FCALLs. There are many places in the lower-layers of System.Private.CoreLib where managed code needs to call into native code in the runtime. There are two primary ways this transition from managed to native has happened, historically. One method is through what’s called a “QCALL”, essentially just a DllImport (P/Invoke) into native functions exposed by the runtime. The other, which historically was the dominant mechanism, is an “FCALL,” which is a more complex and specialized pathway that allows direct access to managed objects from native code. FCALLs were once the standard, but over time, more of them were converted to QCALLs. This shift improves reliability (since FCALLs are notoriously tricky to implement correctly) and can also boost performance, as FCALLs require helper method frames, which QCALLs can often avoid. A ton of PRs in .NET 10 went into removing FCALLs, like dotnet/runtime#107218 for helper method frames in Exception, GC, and Thread, dotnet/runtime#106497 for helper method frames in object, dotnet/runtime#107152 for those used in connecting to profilers, dotnet/runtime#108415 and dotnet/runtime#108535 for ones in reflection, and over a dozen others. In the end, all FCALLs that touched managed memory or threw exceptions were removed.
  • Converting hex. Recent .NET releases added methods to Convert like FromHexString and TryToHexStringLower, but such methods all used UTF16. dotnet/runtime#117965 adds overloads of these that work with UTF8 bytes.
  • Formatting. String interpolation is backed by “interpolated string handlers.” When you interpolate with a string target type, by default you get the DefaultInterpolatedStringHandler that comes from System.Runtime.CompilerServices. That implementation is able to use stack-allocated memory and the ArrayPool<> for reduced allocation overheads as it’s buffering up text formatted to it. Though it's an advanced scenario, other code, including other interpolated string handlers, can use DefaultInterpolatedStringHandler as an implementation detail. However, when doing so, such code could only get access to the final output as a string; the underlying buffer wasn’t exposed. dotnet/runtime#112171 adds a Text property to DefaultInterpolatedStringHandler for code that wants access to the already formatted text in a ReadOnlySpan<char>.
  • Enumeration-related allocations. dotnet/runtime#118288 removes a handful of allocations related to enumeration, for example removing a string.Split call in EnumConverter and replacing it with a MemoryExtensions.Split call that doesn’t need to allocate either the string[] or the individual string instances.
  • NRBF decoding. dotnet/runtime#107797 from @teo-tsirpanis removes an array allocation used in a decimal constructor call, replacing it instead with a collection expression targeting a span, which will result in the state being stack allocated.
  • TypeConverter allocations. dotnet/runtime#111349 from @AlexRadch reduces some parsing overheads in the TypeConverters for Size, SizeF, Point, and Rectangle by using more modern APIs and constructs, such as the span-based Split method and string interpolation.
  • Generic math conversions. Most of the TryConvertXx methods using the various primitive’s implementations of the generic math interfaces are marked as MethodImplOptions.AggressiveInlining, to help the JIT realize they should always be inlined, but a few stragglers were left out. dotnet/runtime#112061 from @hez2010 fixes that.
  • ThrowIfNull. C# 14 now supports the ability to write extension static methods. This is a huge boon for libraries that need to support downlevel targeting, as it means static methods can be polyfilled just as instance methods can be. There are many libraries in .NET that build not only for the latest runtimes but also for .NET Standard 2.0 and .NET Framework, and those libraries have been unable to use helper static methods like ArgumentNullException.ThrowIfNull, which can help to streamline call sites and make methods more inlineable (in addition, of course, to tidying up the code). Now that the dotnet/runtime repo builds with a C# 14 compiler, dotnet/runtime#114644 replaced ~2500 call sites in such libraries with use of a ThrowIfNull polyfill.
  • FileProvider Change Tokens. dotnet/runtime#116175 reduces allocation in PollingWildCardChangeToken by using allocation-free mechanisms for computing hashes, while dotnet/runtime#115684 from @rameel reduces allocation in CompositeFileProvider by avoiding taking up space for nop NullChangeTokens.
  • String interpolation. dotnet/runtime#114497 removes a variety of null checks when dealing with nullable inputs, shaving off some overheads of the interpolation operation.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        private string _value = " ";
    
        [Benchmark]
        public string Interpolate() => $"{_value} {_value} {_value} {_value}";
    }
    Method Runtime Mean Ratio
    Interpolate .NET 9.0 34.21 ns 1.00
    Interpolate .NET 10.0 29.47 ns 0.86
  • AssemblyQualifiedName. Type.AssemblyQualifiedName previously recomputed the result on every access. As of dotnet/runtime#118389, it’s now cached.
    // dotnet run -c Release -f net9.0 --filter "*" --runtimes net9.0 net10.0
    
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    
    BenchmarkSwitcher.FromAssembly(typeof(Tests).Assembly).Run(args);
    
    [MemoryDiagnoser(displayGenColumns: false)]
    [HideColumns("Job", "Error", "StdDev", "Median", "RatioSD")]
    public partial class Tests
    {
        [Benchmark]
        public string AQN() => typeof(Dictionary<int, string>).AssemblyQualifiedName!;
    }
    Method Runtime Mean Ratio Allocated Alloc Ratio
    AQN .NET 9.0 132.345 ns 1.007 712 B 1.00
    AQN .NET 10.0 1.218 ns 0.009 0.00

What’s Next?

Whew! After all of that, I hope you’re as excited as I am about .NET 10, and more generally, about the future of .NET.

As you’ve seen in this tour (and in those for previous releases), the story of .NET performance is one of relentless iteration, systemic thinking, and the compounding effect of many targeted improvements. While I’ve highlighted micro-benchmarks to show specific gains, the real story isn’t about these benchmarks… it’s about making real-world applications more responsive, more scalable, more sustainable, more economical, and ultimately, more enjoyable to build and use. Whether you’re shipping high-throughput services, interactive desktop apps, or resource-constrained mobile experiences, .NET 10 offers tangible performance benefits to you and your users.

The best way to appreciate these improvements is to try .NET 10 RC1 yourself. Download it, run your workloads, measure the impact, and share your experiences. See awesome gains? Find a regression that needs fixing? Spot an opportunity for further improvement? Shout it out, open an issue, even send a PR. Every bit of feedback helps make .NET better, and we look forward to continuing to build with you.

Happy coding!

