Posted by ivankahl 6 days ago
The only winning move is not to play. Mapping libraries, even with source generators, produce lots of bugs and surprising behavior. Just write mappers by hand.
So far, there have been no surprises, and the library warns about potential issues very explicitly; I quite like it.
Of course, if it's just a handful of fields that need mapping, then writing them manually is the way to go, especially if said fields require custom mapping that the library would not facilitate.
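For a handful of fields, a hand-written mapper stays explicit and easy to debug. A minimal sketch, with hypothetical UserEntity/UserDto types (not from any specific codebase):

```csharp
// Hypothetical types for illustration only.
public record UserEntity(int Id, string FirstName, string LastName, DateTime CreatedUtc);

public record UserDto(int Id, string DisplayName);

public static class UserMapper
{
    // A hand-written mapping is explicit, easy to step through in a
    // debugger, and custom logic (here, combining two fields into a
    // display name) needs no library configuration hooks.
    public static UserDto ToDto(UserEntity entity) =>
        new(entity.Id, $"{entity.FirstName} {entity.LastName}");
}
```

The compiler also checks this mapping at build time, which is exactly the property reflection-based mappers give up.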
It was meant to enforce a convention. Not to avoid the tedium of writing mapping code by hand (although that is another result).
This describes more than half of .NET community packages and patterns. So much stuff is driven by chasing the "oh, that's clever" high, forgetting that clever code is miserable to support and maintain in prod. That's bad enough when it's your own code, but with third-party libs it's just asking for weekend debugging sessions and all-nighters two months past the initial delivery date. At some point you just get too old for that shit.
In the EF Core and Automapper type of cases, I consider it an anti-pattern that something outside the class is taking a dependency on a private member of the class in the first place, so the compiler is really doing you a favor by hiding away the private backing field more obscurely.
It's another variation of the "parse don't validate" dance. Just because you can do model validation in property setters doesn't always mean it is the best place to do model validation. If you are trying to bypass the setter in a DB Model, then you may have data in your database that doesn't validate, you just want to "parse" it and move on.
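A sketch of that distinction, with illustrative names. A validating setter will throw when materializing a historical row that no longer passes validation, whereas mapping to the backing field just "parses" whatever is stored; the HasField call shown is EF Core's backing-field configuration:

```csharp
public class Order
{
    private decimal _total;

    // Validation in the setter: loading an old row with a negative
    // total from the database would throw here on materialization.
    public decimal Total
    {
        get => _total;
        set => _total = value >= 0
            ? value
            : throw new ArgumentOutOfRangeException(nameof(value));
    }
}

// In EF Core, mapping the property to its backing field lets
// materialization bypass the setter, so existing rows load and any
// cleanup can happen in a dedicated validation layer:
//
//   modelBuilder.Entity<Order>()
//       .Property(o => o.Total)
//       .HasField("_total");
```

(EF Core will also prefer a discovered backing field for materialization by default; the explicit configuration just makes the intent visible.)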
It is similar with auto-mapping scenarios, with the complication that auto-mapping was originally meant to be the validation step in some workflows and code architectures. I think that's personally why AutoMapper and similar libraries have had a code smell to me: the places those tools get used are often "parsing boundaries" more than they should be "validation boundaries", and the coupling between validation logic and AutoMapper logic starts to feel like a big ball of spaghetti, versus a dedicated validation layer that is only concerned with validation rather than also doing a lot of heavy lifting copying data around.
It is the compiler's job to guard encapsulation boundaries in most situations, but it's not necessarily the compiler's job to guard them in all situations. There are a lot of good reasons code may want to marshal/serialize raw data. There are a lot of good reasons cross-cutting is desirable (logging, debugging, meta-programming), which is part of why .NET has such rich runtime reflection tools.
That's longstanding behaviour. Ever since features such as anonymous types and lambdas arrived, classes and methods have needed to be generated for them. And of course these need names, assigned by the compiler. But these names are deliberately not legal in user-written code: the compiler allows itself a wider set of names, including the "<>" characters.
I have heard them referred to as "unspeakable names" because it's not that they're unknown, you literally can't say them in the code.
e.g. by Jon Skeet, here https://codeblog.jonskeet.uk/category/async/ from 2013.
> they’re all "unspeakable" names including angle-brackets, just like all compiler-generated names.
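You can see one of these names via reflection. For an auto-property, the compiler generates a backing field named by convention "<PropertyName>k__BackingField", and the angle brackets make it impossible to write in C# source:

```csharp
using System;
using System.Reflection;

public class Person
{
    public string Name { get; set; } = "";
}

public static class Program
{
    public static void Main()
    {
        foreach (FieldInfo f in typeof(Person).GetFields(
                     BindingFlags.Instance | BindingFlags.NonPublic))
        {
            // Prints "<Name>k__BackingField" -- an "unspeakable" name:
            // reflection can see it, but no C# source can declare or
            // reference an identifier containing '<' or '>'.
            Console.WriteLine(f.Name);
        }
    }
}
```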
field doesn't stop this.
> I also have the benefit of seeing the field in my debugger
The debugger could still show it. The backing field is still there.
The compiler knows what you're doing. A keyword like 'field' inside a function's braces just isn't valid. Putting 'field' after a type name in a variable declaration makes as much sense as 'private int class;'
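That context-sensitivity is the point of the feature: 'field' is only special inside a property accessor, where it names the compiler-generated backing field. A minimal sketch (the 'field' keyword shipped with C# 14, after a preview in C# 13):

```csharp
public class Measurement
{
    // Inside the accessors, 'field' refers to the compiler-generated
    // backing field, so no explicit private field declaration is needed.
    // Outside a property accessor, 'field' remains an ordinary identifier.
    public int Value
    {
        get => field;
        set => field = value < 0 ? 0 : value;
    }
}
```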
Part of why C# has been so successful in introducing new contextual keywords is that they've been there all along. I think C# 1.0 was ahead of the game on that, and it's interesting how much contextual keywords have become a bigger tool in language design since C#. All of ES3 and ES4, and some of ES5, were predicated on keywords always being keywords; ES6/ES2015 is where you first start to see JS shift to a broader contextual-keyword approach, which seems equal parts inspired by C# as not.
That said, the compiler will also emit warnings during build if you use an all-lowercase word with a small number of characters as an identifier. I don't remember the exact trigger or warning, but it says something like "such words may be reserved for future use by the compiler" to disincentivize their use.
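I believe this is warning CS8981, introduced alongside C# 11, and as far as I know the trigger is a type name consisting entirely of lowercase ASCII letters (length doesn't matter). A sketch:

```csharp
// Triggers warning CS8981: "The type name 'data' only contains
// lower-cased ascii characters. Such names may become reserved for
// the language." This keeps the door open for future contextual
// keywords without breaking existing code.
public class data
{
    public int Value { get; set; }
}

// A conventional PascalCase name avoids the warning entirely:
public class Data
{
    public int Value { get; set; }
}
```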