Posted by iliketrains 14 hours ago
So in C++ terms, it's really just benchmarking "-O2" instead of "-O2 -DNDEBUG". This seems fine.
Setting up Release benchmarks is much more complex, and we develop the game in Debug mode, so it's very natural to get the first results there and, if they look promising, validate them in Release.
Also, since our team works in Debug mode, even gains that only speed things up in Debug mode are valuable to us, but I haven't encountered a case where a 20%+ perf gain in Debug mode did not translate to Release mode.
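To make the Debug/Release distinction concrete: like -DNDEBUG in C++, the DEBUG define in C# controls whether assertion-style code is compiled in at all. A rough sketch (the names are made up for illustration, not from our codebase):

    using System.Diagnostics;

    public static class MapChecks
    {
        // Debug.Assert and [Conditional("DEBUG")] methods are stripped by the
        // compiler when DEBUG isn't defined, so this loop only runs (and only
        // costs anything) in Debug builds.
        [Conditional("DEBUG")]
        public static void ValidateTiles(int[] tiles)
        {
            for (int i = 0; i < tiles.Length; i++)
            {
                Debug.Assert(tiles[i] >= 0, "negative tile id at index " + i);
            }
        }
    }

Gains that come purely from code guarded like this wouldn't carry over to Release, whereas algorithmic wins (fewer allocations, less I/O) generally do.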
But also, as far as this article goes, it's noting a more specific use case that is fairly 'real world': reading a file (I/O), doing some form of deserialization (likely with a library, unless the format is proprietary), and whatever 'generating a map' means.
Again, this all feels pretty realistic as a use case, so it's good food for thought.
> Can someone explain what benchmarks were actually used?
This honestly would be useful in the article itself, as well as, per above, some 'deep dives' into where the performance issues were. Was it in file I/O (possibly interop related)? Was it due to some pattern in the serialization library? Was it the object allocation pattern (when I think of C# code friendly to Mono, I think of Cysharp libraries, which sometimes do curious things)? Not diving deeper into the profiling doesn't help anyone know where the focus needs to be (unless it's a more general thing, in which case I'd hope for a better deep dive on that aspect).
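Even without a full profiler, a coarse per-phase timing like the sketch below would narrow the question down considerably (Deserialize/GenerateMap are placeholders for whatever the article's code actually does):

    using System;
    using System.Diagnostics;
    using System.IO;

    public static class PhaseTiming
    {
        public static void Run(string path)
        {
            var sw = Stopwatch.StartNew();
            byte[] raw = File.ReadAllBytes(path);   // file I/O
            Console.WriteLine($"read:        {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            var data = Deserialize(raw);            // serialization library or hand-rolled format
            Console.WriteLine($"deserialize: {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            GenerateMap(data);                      // whatever 'generating a map' means
            Console.WriteLine($"generate:    {sw.ElapsedMilliseconds} ms");
        }

        // Placeholders standing in for the article's actual steps.
        static object Deserialize(byte[] raw) => raw;
        static void GenerateMap(object data) { }
    }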
Edited to add:
Reading your article again, I wonder whether your compiler is just not doing the right things to take advantage of the performance boosts available via CoreCLR.
E.g., can you do things like stackalloc temp buffers to avoid allocation, and does the stdlib do those things where it's advantageous?
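To make the stackalloc point concrete, here's a rough sketch (my own toy example, not from the article) of the kind of temp-buffer pattern where stackalloc plus Span-based APIs can avoid heap allocation entirely:

    using System;

    public static class ChecksumExample
    {
        // Heap-allocating version: every call hands a short-lived byte[] to the GC.
        public static int SumHeap(ReadOnlySpan<byte> input)
        {
            byte[] tmp = new byte[256];
            input.Slice(0, Math.Min(input.Length, tmp.Length)).CopyTo(tmp);
            int sum = 0;
            foreach (byte b in tmp) sum += b;
            return sum;
        }

        // stackalloc version: the temp buffer lives on the stack, so there is
        // nothing for the GC to track. Only safe because the size is small and fixed.
        public static int SumStack(ReadOnlySpan<byte> input)
        {
            Span<byte> tmp = stackalloc byte[256];
            input.Slice(0, Math.Min(input.Length, tmp.Length)).CopyTo(tmp);
            int sum = 0;
            foreach (byte b in tmp) sum += b;
            return sum;
        }
    }

Even on an older runtime this mainly helps by keeping short-lived buffers away from the GC.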
Also, I know I vaguely hit on this above, but I'm also wondering whether the generated IL is just 'not hitting the pattern': a lot of CoreCLR's best magic kicks in when things are arranged a specific way in the IL, based on how Roslyn outputs it, and even for the 'expected' C# case, deviations can break the optimization.
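One concrete, well-known example of that pattern sensitivity: CoreCLR's JIT elides array bounds checks when a loop is indexed in the shape it recognizes, and a seemingly harmless deviation keeps the checks in. A rough sketch (my names, not from the article):

    public static class LoopShapes
    {
        // The shape the JIT recognizes: i is bounded by data.Length and indexes
        // data itself, so the per-iteration bounds check can typically be removed.
        public static int SumRecognized(int[] data)
        {
            int sum = 0;
            for (int i = 0; i < data.Length; i++)
                sum += data[i];
            return sum;
        }

        // Same loop bound, but indexing a different array than the one that
        // provided the bound; the JIT generally can't prove other[i] is in
        // range, so the bounds check stays.
        public static int SumDeviating(int[] data, int[] other)
        {
            int sum = 0;
            for (int i = 0; i < data.Length; i++)
                sum += other[i];
            return sum;
        }
    }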
I don't believe there would be such a large regression from .NET Framework to CoreCLR.
> Unity uses the Mono framework to run C# programs and back in 2006 it was one of the only viable multi-platform implementations of .NET. Mono is also open-source, allowing Unity to do some tweaks to better suit game development. [...] An interesting twist happened nearly 10 years later.
Not mentioned is that Mono itself of course improved a lot over the years, and even prior to MS's about-face on open source, it was well known that Unity was hindered by sticking with an old and out-of-date Mono, and they were very successful at deflecting the blame for this by throwing the Mono folks under the bus. Any time complaints about game developers' inability to use newer C# features came up, Mono/Xamarin would invariably receive the ire* because Unity couldn't come to an agreement with them about their license and consulting fees. (Mono was open source under the LGPL instead of MIT-licensed at the time, and Unity had initially bought a commercial license that granted them exemptions from even the soft copyleft provisions in the LGPL, but the exemption was limited and did not, for example, cover any and all future versions indefinitely.) Reportedly, they were trying to charge too much (whatever that means) for Unity's attempts to upgrade to the modern versions.
It's now 10+ years later, and both Mono (after being relicensed under MIT) and .NET's CoreCLR are available to Unity at no cost, but despite this it still took Unity years to upgrade their C# language support and move to a slightly more modern runtime.
Something else to note: although the LGPL isn't inherently incompatible with commercial use, or even with use in closed source software, one sticking point was that some of the platforms Unity wished to deploy to have developer/publisher restrictions that are incompatible with the soft copyleft terms in the LGPL, which require that users (or in this case game developers) be allowed to relink against other versions of the covered software (including, for example, newer releases). Perversely, it's because Unity sought and obtained exemptions to the LGPL that both end users and game developers were hamstrung and kept from being able to upgrade Mono themselves! (It wouldn't have helped on locked-down platforms like Nintendo's, for example, but it certainly would have been viable on platforms without first-party restrictions, like PC gaming or Android.)
By now, Unity has gone on to pull plenty of other shenanigans with their own pricing that seem to have sufficiently pissed off the game development community, but it should not be forgotten that they were willing to pass the blame to an open source project over development/support the company felt entitled to at a price lower than what they were told it would cost, and that they themselves were slow to make progress on even when the price of the exemption literally became $0.
* You can find threads with these sorts of comments from this period right here on HN, too, but it was everywhere.
So much this. According to a 2023 blog article from Unity [0], Unity uses the Boehm GC. But Mono itself introduced a generational GC called SGen [1] more than 10 years ago, and it became the default at some point. Unity is just stuck on old Mono versions, essentially missing out on all the changes and improvements that went into Mono after their fork.
[0]: https://unity.com/blog/engine-platform/porting-unity-to-core...
[1]: https://www.mono-project.com/docs/advanced/garbage-collector...
1. Uses "I"
2. Look how many times it switches verb tenses here:
"One day I was debugging an issue in map generation and it was time-consuming [...]. To make debugging faster, I’ve written a unit test, hoping to cut down on the turn-around time since Unity takes 15+ seconds just to crunch new DLLs [...]. When I ran the test, it finished in 40 seconds."