I would have expected ARM64 to be the obvious choice, since its execution cost is 20% lower. Instead, we have a 50/50 split, with x86-64 winning ever so slightly. Also interesting is that the cheapest execution cost always uses the PreJIT option. That makes intuitive sense, since this option shifts some cost from the first INVOKE phase to the free INIT phase and only has a small overhead penalty otherwise. Similarly, Tiered Compilation is disabled for all of them, because it introduces additional overhead during the warm INVOKE phases. Most fascinating to me is that ARM64 is cheaper with 512 MB of memory, while x86-64 is cheaper with 256 MB. This is probably just an oddity, but it serves to highlight that nothing is ever obvious, and why benchmarking the actual code is so important!

New in .NET 6 is the ability to generate the JSON serialization code during compilation instead of relying on reflection at runtime. Personally, as someone who cares a lot about performance, I find source generators a really exciting addition to our developer toolbox. However, I don't consider this iteration to be production ready, because it is missing some features I rely on. In particular, the lack of custom type converters to override the default JSON serialization behavior is a blocker for me. That said, for some smaller projects, it might be viable. My biggest recommendation here is to thoroughly validate the output to ensure any behavior changes are caught during development.

Here are our observed lower bounds for the JSON serialization libraries, as well as the baseline performance. I'm not including .NET Core 3.1, since I don't consider it a viable target runtime anymore. However, you can explore the full result set in the interactive Google spreadsheet.

It shouldn't be a surprise that Json.NET, which has been around for a long time, has accumulated a lot of cruft. Json.NET is truly a Swiss army knife for serialization, and this flexibility comes at a cost: it adds at least 210 ms to our cold start duration, and it's also the most expensive JSON library to run. The newer System.Text.Json library has a compelling performance and value benefit over Json.NET. It only adds 100 ms to our cold start duration and is 9% cheaper to run. However, the clear winner is the new JSON source generator, with only 70 ms of cold start overhead compared to our baseline. That said, the lack of features may not make it a good choice just yet.

When it comes to minimizing cold start duration, the more memory, the better. These benchmarks used 1,769 MB, which unlocks most of the available vCPU performance, but not all of it.
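For reference, the PreJIT and Tiered Compilation options are controlled through environment variables on the function. A rough sketch of the cheapest configuration described above, assuming the variable names respected by the managed .NET Lambda runtime and the .NET runtime (verify them against the AWS and .NET documentation for your runtime version):

```shell
# PreJIT: let the .NET Lambda runtime warm up common code paths during the
# free INIT phase instead of paying for it in the first billed INVOKE phase.
export AWS_LAMBDA_DOTNET_PREJIT=Always

# Disable Tiered Compilation to avoid re-JIT overhead during warm invokes.
export DOTNET_TieredCompilation=0
```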
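To make the compile-time serialization concrete, here is a minimal sketch of the .NET 6 JSON source generator; `Person` and `AppJsonContext` are illustrative names, not taken from the benchmark code:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

// The payload type we want to (de)serialize; a stand-in for your own models.
public record Person(string Name, int Age);

// A partial context class annotated with [JsonSerializable] tells the source
// generator to emit the serialization code at compile time, so no
// reflection-based setup happens during the cold start.
[JsonSerializable(typeof(Person))]
public partial class AppJsonContext : JsonSerializerContext
{
}

public static class Example
{
    public static string Serialize(Person person) =>
        // Passing the generated type info dispatches to the generated code
        // path instead of the reflection-based one.
        JsonSerializer.Serialize(person, AppJsonContext.Default.Person);
}
```

Calling `Example.Serialize(new Person("Ada", 36))` produces the same JSON as the reflection-based serializer would, which is exactly what you should verify when migrating.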
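For context on the missing feature, this is the kind of custom type converter the reflection-based System.Text.Json serializer supports to override default behavior; `DateOnlyConverter` is an illustrative example, not from the benchmark:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// A custom converter that overrides the default serialization behavior,
// here forcing DateOnly values into a fixed yyyy-MM-dd wire format.
public class DateOnlyConverter : JsonConverter<DateOnly>
{
    public override DateOnly Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) =>
        DateOnly.ParseExact(reader.GetString()!, "yyyy-MM-dd");

    public override void Write(Utf8JsonWriter writer, DateOnly value, JsonSerializerOptions options) =>
        writer.WriteStringValue(value.ToString("yyyy-MM-dd"));
}
```

It is registered via `options.Converters.Add(new DateOnlyConverter())` and then passed to `JsonSerializer.Serialize`; not being able to plug in converters like this is what makes the source-generator iteration a blocker for some codebases.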