Posted by volatileint 12/9/2025
I remember fretting about these rules when reading Scott Meyers's Effective Modern C++, only to realize later that it's better not to use auto at all. Explicit types are good types.
const auto start = std::chrono::steady_clock::now();
do_some_work(size);
const auto end = std::chrono::steady_clock::now();
const std::chrono::duration<double> diff = end - start;
std::cout << "diff = " << diff << "; size = " << size << '\n';
Looking up the (current standard's) return type of std::chrono::steady_clock::now() and spelling it out would serve no purpose here.

using TP = std::chrono::steady_clock::time_point;
TP start = TP::clock::now();
do_some_work(size);
TP end = TP::clock::now();

Strong agree here. It's not just that it reduces cognitive load; explicit types allow, and require, the compiler to check your work.
Even if this isn't a problem when the code is first written, it's a nice safety belt for when someone does a refactor 6-12 months (or even 5+ years) down the road that changes a type. With auto, in the best case you might end up with 100+ lines of unintelligible error messages. In the worst case the compiler just trudges on and you have some subtle semantic breakage that takes weeks or months to chase down.
The only exceptions I like are iterators (whose types are a pita in C++), and lambda types, where you sometimes don't have any other good options because you can't afford the dynamic dispatch of std::function.
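As a minimal sketch of the refactoring hazard described above (get_timeout() and the type change are hypothetical): suppose a refactor changes a function's return type from std::chrono::milliseconds to std::chrono::seconds.

#include <chrono>

// Hypothetical: this returned std::chrono::milliseconds before the refactor.
std::chrono::seconds get_timeout() { return std::chrono::seconds{5}; }

void example() {
    // Explicit type: the compiler checks the conversion (seconds widen to
    // milliseconds losslessly), so the value stays correct:
    std::chrono::milliseconds t1 = get_timeout(); // 5000 ms

    // auto: the variable silently tracks the new return type, and any code
    // that assumed count() was in milliseconds is now off by a factor of 1000:
    auto t2 = get_timeout();
    long long ms = t2.count(); // 5, not 5000
    (void)t1; (void)ms;
}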
I've lost count of the number of times someone has assigned a return value to an int, triggering an implicit conversion.
I'll take "unintelligible compile errors" any day over subtle runtime bugs.
I have not encountered that many issues with the usual arithmetic conversions on return types, at least not in a way that auto would prevent. Clearly your experience is different.
Perhaps we can both agree that it would be nice to force explicit numerical conversions?
You can _almost_ get this by wrapping the arithmetic types in a union with only one member, but that may incur a performance hit, which is often not a viable trade off.
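A minimal sketch of that single-member-union trick (Meters and twice are made-up names):

// A union with a single member participates in no implicit arithmetic
// conversions, so every construction has to be deliberate.
union Meters {
    double value;
};

double twice(Meters m) { return 2.0 * m.value; }

void example() {
    Meters m{3.0}; // explicit construction required
    twice(m);      // OK
    // twice(3.0); // error: no implicit conversion from double to Meters
}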
> I'll take "unintelligible compile errors" any day over subtle runtime bugs.
As would the rest of us, but that’s not the choice that auto gives you. auto can cause subtle bugs with class types. It doesn’t necessarily protect you from integer narrowing, either, as you eventually have to give the compiler a concrete non-auto type.
I also prefer not to use auto when getting iterators from STL containers. Often I use a typedef for most STL containers that I use. Then one can write MyNiceContainerType::iterator.
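For example, a small sketch with made-up names:

#include <map>
#include <string>
#include <vector>

typedef std::map<std::string, std::vector<int>> MyNiceContainerType;

void walk(MyNiceContainerType& table) {
    // The nested name spares you from spelling out the full iterator type.
    for (MyNiceContainerType::iterator it = table.begin(); it != table.end(); ++it) {
        // use it->first / it->second
    }
}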
auto var = FunctionCall(...);
Then, in the IDE, hover over auto to show what the actual type is, and then replace auto with that type. Useful when the type is complicated, or is in some nested namespace.
Really? I've never experienced this myself.
There's nothing special about auto here. It deduces the same type as a template parameter, with the same collapsing rules.
decltype(auto) is a different beast and it's much more confusing. It means, more or less, "preserve the type of the expression, unless given a simple identifier, in which case use the type of the identifier."
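A minimal sketch of the difference:

int global = 0;
int& get_ref() { return global; }

void example() {
    auto a = get_ref();           // a is int: auto drops the reference
    decltype(auto) b = get_ref(); // b is int&: the expression's type is kept

    int x = 1;
    decltype(auto) c = x;   // c is int: a bare identifier yields its declared type
    decltype(auto) d = (x); // d is int&: (x) is an lvalue *expression*
    (void)a; (void)b; (void)c; (void)d;
}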
So for example I'd write:
var x = new List<Foo>();
Because writing:

List<Foo> x = new List<Foo>();

feels very redundant.

Whereas I'd write:

List<Foo> x = FooBarService.GetMyThings();

Because it's not obvious what the type is otherwise (some IDEs will overlay a type hint there, though).

Although with newer language features you can also write:

List<Foo> x = new();

Which is even better.

With good naming it should be pretty obvious it's a Foo, and then either you know the type by heart, or you will need to look up the definition anyway.
With standard containers, you can assume that everyone knows the type, at least at a high level. So knowing whether it's a list, a vector, a stack, a map or a multimap, ... is pretty useful and avoids a lookup.
I prefer

List<Foo> x = new();

since it gives me better alignment and since it's not confused with dynamic.

Nowadays I only use

var x = new List<Foo>();

in non-merged code as a ghetto TODO if I'm considering base types/interfaces.

auto it = some_container.begin();
Not even once have I wished to know the actual type of the iterator.
IDEs are an invention from the late 1970's, early 1980's.
Having syntax highlighting makes me slightly faster, but I want to still be able to understand things, when looking at a diff or working over SSH and using cat.
For example:
auto a;
will always fail to compile no matter what flags.
int a;
is valid.
Also, it prevents implicit type conversions: the type you get with auto is exactly the type of the expression on the right.
That's good.
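A minimal sketch (scale is a made-up function):

double scale() { return 2.5; }

void example() {
    int a = scale();  // implicit conversion: a becomes 2, the fraction is dropped
    auto b = scale(); // b is double: exactly the type the right-hand side produced
    (void)a; (void)b;
}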
What do you mean it is not a source of bugs?
I think what they mean, and what I also think, is that the bug does not come from the existence of uninitialized variables; it comes from the USE of uninitialized variables. Making the variables initialized does not make the bug go away; at most it silences it. Making the program invalid instead (which is what UB fundamentally is) is way more helpful for making programs have fewer bugs. That the compiler still emits a program is a defect, although an unfixable one.
To my knowledge, C (and derivatives like C++) is the only common language where the question "Is this a program?" has false positives. It is certainly an interesting choice.
So, I see uninitialized variables as a good way to find such logic errors, and therefore the advice to always initialize variables as bad practice.
Of course, if you already have a good value to initialize the variable with, do it. But if you don't, it's better to leave it uninitialized.
Moreover, this will not cause safety issues in production builds, because you can use `-ftrivial-auto-var-init` to initialize automatic variables to e.g. zeroes (`-fhardened` will do this too).
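A minimal sketch of both sides of that argument (max_of is a made-up example):

// 'best' should have been seeded from v[0]; leaving it uninitialized is the bug.
int max_of(const int* v, int n) {
    int best;
    for (int i = 0; i < n; ++i)
        if (v[i] > best) // the first comparison reads an indeterminate value: UB
            best = v[i];
    return best;
}

Built with `-ftrivial-auto-var-init=zero`, this quietly returns 0 for an all-negative array: the UB is gone, but the logic bug is merely hidden. Built normally, `-Wuninitialized`-style warnings or MemorySanitizer have a chance to surface the actual mistake.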
That said, there are some contexts in which “auto” definitely improves the situation.
Regarding the "auto" in C++, and technically in any language, it seems conceptually wrong. The ONLY use-case I can imagine is when the type name is long, and you don't want to type it manually, or the abstractions went beyond your control, which again I don't think is a scalable approach.
In both Rust and C++ we need this because we have unnameable types, so if their type can't be inferred (in C++ deduced) we can't use these types at all.
In both languages all lambdas are unnameable, and in Rust all functions are too (C++ doesn't have a type for functions themselves, only for function pointers, and we can name a function pointer type in either language).
C has this, so I think C++ does as well. You can use a typedef'ed function type to declare a function, not just a function pointer.
typedef void *type(void *args); // 'type' is a function type, not a pointer type
type foo;                       // declares: void *foo(void *args)
a = foo(b);                     // foo can then be called like any function
works?

Per ISO/IEC 9899:TC3: "A function type describes a function with specified return type. A function type is characterized by its return type and the number and types of its parameters. A function type is said to be derived from its return type, and if its return type is T, the function type is sometimes called 'function returning T'. The construction of a function type from a return type is called 'function type derivation'."
> Per ISO/IEC 9899:TC3:
What is it supposed to tell me?
You can read a "draft" of that document here: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf
[If you've ever been under the impression that "real" people use the actual ISO text, disabuse yourself of that notion, ISO takes ages to turn the same exact words into an official branded document, then charges $$$ for a PDF, ain't nobody got time or money for that]
I can't tell you what they intended by TC3. It might be a typo or it might be some way to refer to a specific draft or a section within that draft. I doubt this particular section changes frequently so I wouldn't worry about it.
It sounds like this text is essentially the same in C23, maybe moved around a bit.
Would also be my default go-to version. Reasonably old, so it's supported everywhere, with some quality-of-life improvements like the initialization syntax, and without all the modern fluff.
edit: you literally said this in your original comment. I failed at reading comprehension.
I regularly use that in C, to make sure a function matches an abstract interface. Sure, that often ends up in a function pointer, but not always and when I declare the type signature, it isn't yet a function pointer.
> but not define
I think that is because the type signature only contains the types but no parameter names, which are required for a definition. This is somewhat arbitrary, since for data types the member names are part of the type. It sounds totally fixable, but then you would either have two kinds of function types, one where all parameter names are given and one where they aren't, and could only use the former for function definitions; or you would make names mandatory for function declarations as well.
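A minimal sketch of the declare-but-not-define rule (handler and process are made-up names):

typedef void *handler(void *arg); // 'handler' is a function type, not a pointer

handler process;                  // OK: declares void *process(void *arg)

// handler process { ... }        // error: a typedef'd function type cannot
//                                // be used to *define* a function

void *process(void *arg) {        // the definition must spell out the signature
    return arg;
}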
> It sounds totally fixable, but then you would either have two kinds of function types, one where all parameter names are given and one where they aren't, and could only use the former for function definitions
Making the names part of the type would be a bit weird, although we have seen stranger things. The biggest problem is that it would be a breaking change at least in C++.
Exactly, I believe that to be the case in C as well. In C23 the rules for types to be considered compatible were actually relaxed, so this proposal wouldn't make any sense. It would be a useless breaking change for no gain, other than a bit less typing and maybe feeding some language layers, so there is really no reason to do that. It actually makes the code less clear, since you now need to always lookup the type definition, which would be against C's philosophy.
void func(std::vector<double> vec) {
    for (auto &v : vec) {
        // do stuff
    }
}
Here it's obvious that v is of type double.

I've seen much more perf-murdering things being caused by

std::map<std::string, int> my_map;
for (const std::pair<std::string, int>& v : my_map) {
    ...
}

than with auto, though.

> warning: loop variable 'v' of type 'const std::pair<std::__cxx11::basic_string<char>, int>&' binds to a temporary constructed from type 'std::pair<const std::__cxx11::basic_string<char>, int>' [-Wrange-loop-construct]
>    11 | for (const std::pair<std::string, int>& v: m) {
As they say, the power of names...
Is it that iterating over a map yields something other than the `std::pair` spelled out in the loop, but one which can be converted to it (at nontrivial cost), and that result is bound by the reference?
std::pair<const std::string, int>

vs

std::pair<std::string, int>

There's no such thing as "changing the type" in C++. A function returns an object of type A, your variable is of type B, and the compiler tries to see if there is a conversion from the value of type A to a new value of type B.
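A minimal sketch of the three spellings side by side:

#include <map>
#include <string>

void walk(const std::map<std::string, int>& m) {
    // value_type is std::pair<const std::string, int>, so this binds directly:
    for (const std::pair<const std::string, int>& v : m) { (void)v; }

    // Dropping the 'const' on the key means each element must be *converted*,
    // so the reference binds to a freshly built temporary on every iteration:
    for (const std::pair<std::string, int>& v : m) { (void)v; }

    // auto (or a structured binding) sidesteps the whole question:
    for (const auto& [key, value] : m) { (void)key; (void)value; }
}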
Is it really? I rather think that a missing & is easier to spot with "auto" simply because there is less text to parse for the eye.
> If you see "for (auto v : vec)" looks good right?
For me the missing & sticks out like a sore thumb.
> It's easy to forget (or not notice) that auto will not resolve to a reference in this case
Every feature can be misused if the user forgets how it works. I don't think people suddenly forget how "auto" works, given how ubiquitous it is.
If auto deduced reference types transparently, it would actually be more dangerous.
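For reference, a minimal sketch of the two spellings under discussion:

#include <string>
#include <vector>

void process(std::vector<std::string>& vec) {
    for (auto v : vec)  { v += "!"; } // v is a copy: vec is untouched
    for (auto& v : vec) { v += "!"; } // v binds by reference: vec is modified
}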
So I guess I depart from you there and thus my issue here is not really about auto
Things are different in Rust because of lifetimes and destructive moves. In this context, copying would be a bad default indeed.
> because who said that's even achievable, let alone cheap?
Nobody said that. The thing is that user-defined types can be anything from tiny and cheap to huge and expensive. A language has to pick one default and be consistent. You can complain one way or the other.
Yes, languages like Rust can automatically move variables if the compiler can prove that they will not be used anymore. Unfortunately, this is not possible in C++, so the user has to move explicitly (with std::move).
Honestly, why not? A locally used variable sounds very much like something the compiler can reason about. And a variable declared only in a loop, which is destroyed at the end of each iteration and only read from, should be possible to optimize away. I don't know Rust; I mostly write C.
#include <utility>
#include <vector>

struct Foo { /* ... */ };

bool checkFoo(const Foo&); // must return bool to be usable in an 'if'
Foo getFoo();

void example() {
    std::vector<Foo> vec;
    Foo foo = getFoo();
    if (checkFoo(foo)) {
        // *We* know that checkFoo() does not store a
        // reference to 'foo' but the compiler does not
        // know this. Therefore it cannot automatically
        // move 'foo' into the std::vector.
        vec.push_back(std::move(foo));
    }
}
The fundamental problem is that C++ does not track object lifetimes. You would end up with a system where the compiler would move objects only under certain circumstances, which would be very hard to reason about.

Note that link-time optimization only works within a particular binary. What if the function is implemented in a shared library?
> It is less of a problem with C, since you explicitly tell the compiler, whether you want things to get passed as value or pointer.
It works the exact same way in C++, though.
If it is in the public API/ABI of a shared library, than the calling semantics including lifetime and ownership rules are part of the public interface, so of course the compiler can't just change it. You the programmer are responsible for drawing abstraction boundaries and choosing the interface.
> It works the exact same way in C++, though.
Only if you write C in C++. The issue here is references, where the compiler figures out whether a parameter should work like a value or like a pointer. This doesn't exist in C; there the programmer needs to make up their mind and choose. The whole issue of type conversion by making a copy also doesn't exist there, because either the type matches or the compiler throws an error.
> The issue here is references, where the compiler figures out whether a parameter should work like a value or like a pointer.
I'm not sure I understand. A C++ reference always has reference semantics. Can you give an example?
My LS can infer it anytime.
On top of that:
* Reduced refactoring overhead (a type can be evolved or substituted; if it duck-types the same, the autos don't change)
* If using auto instead of explicit types makes your code unclear, it's not clear enough code to begin with. Improve the names of methods being called or variable names being assigned to.
I can see explicit types when you are shipping a library, as part of the API your library exposes. Then you want types to be explicit and changes to be explicit.
std::pair x {1, 2.0};
auto [v, w] = x;
Why is the second element a float according to the blog post?

std::tuple_element<0, std::pair<int, float>>::type&
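For context, a rough sketch of what the binding expands to, assuming the 2.0 literal as written (so CTAD deduces std::pair<int, double>; the hidden variable e is illustrative):

#include <utility>

void sketch() {
    std::pair x{1, 2.0}; // class template argument deduction: std::pair<int, double>

    // "auto [v, w] = x;" behaves roughly like:
    auto e = x; // a hidden copy of the whole pair
    std::tuple_element<0, decltype(e)>::type& v = std::get<0>(e); // int
    std::tuple_element<1, decltype(e)>::type& w = std::get<1>(e); // double
    (void)v; (void)w;
}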
For any seriously templated or metaprogrammed code nowadays, a concept/requires clause is going to make it a lot more obvious what your code is actually doing and give you actually useful errors in the event someone misuses your code.

1. Consistency across the board (places where it's required for metaprogramming, lambdas, etc.). And as a nicety it forces function/method names to be aligned, instead of having variable character counts for the return type before the names. IMHO it makes skimming code easier.
2. It's required for certain metaprogramming situations and it makes other situations an order of magnitude nicer. Nowadays you can just say `auto foo()` but if you can constrain the type either in that trailing return or in a requires clause, it makes reading code a lot easier.
3. The big one for everyday users is that trailing return type includes a lot of extra name resolution in the scope. So for example if the function is a member function/method, the class scope is automatically included so that you can just write `auto Foo::Bar() -> Baz {}` instead of `Foo::Baz Foo::Bar() {}`.
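A minimal sketch of point 3 (Foo, Bar, and Baz are placeholder names):

struct Foo {
    using Baz = int;
    Baz Bar();
};

// Trailing return type: after 'Foo::', class scope is already in effect,
// so 'Baz' resolves without qualification.
auto Foo::Bar() -> Baz { return 42; }

// The classic spelling needs the qualified name up front:
// Foo::Baz Foo::Bar() { return 42; }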
2. It's incredibly rare for it to be required. It's not like 10% of the time, it's more like < 0.1% of the time. Just look at how many functions are in your code and how many of them actually can't be written without a trailing return type. You don't change habits to fit the tiny minority of your code.
3. This is probably the best reason to use it and the most subjective, but still not a particularly compelling argument for doing this everywhere, given how much it diverges from existing practice. And the downside is the scope also includes function parameters, which means people will refer to parameters in the return type much more than warranted, which is decidedly not always a good thing.
I have been programming in C++ for 25 years, so I'm so used to the original syntax that I don't default to auto ... ->, but I will definitely use it when it helps simplify some complex signatures.
And lifetime specifiers can be jarring. You have to think about how the function will be used at the declaration site. For example, a function which takes two string views usually requires different lifetimes, but maybe at a given call site you would only need one. It is just more verbose.
C++ has a host of complexities that come with header/source splits, janky stdlib improvements like lock_guard vs scoped_lock, quirky old syntax like virtual = 0, a lack of build systems and package managers.
Anything in any language can be very verbose and confusing if you one-line it or obfuscate it or otherwise write it in a deliberately confusing manner. That's not a meaningful point imo. What you have to do is compare what idiomatic code looks like between the two languages.
C++ has dozens of pages of dense standardese to specify how to initialise an object, full of such text as
> Only (possibly cv-qualified) non-POD class types (or arrays thereof) with automatic storage duration were considered to be default-initialized when no initializer is used. Each direct non-variant non-static data member M of T has a default member initializer or, if M is of class type X (or array thereof), X is const-default-constructible, if T is a union with at least one non-static data member, exactly one variant member has a default member initializer, if T is not a union, for each anonymous union member with at least one non-static data member (if any), exactly one non-static data member has a default member initializer, and each potentially constructed base class of T is const-default-constructible.
For me, it's all about inherent complexity vs incidental complexity. Having to pay attention to lifetimes is just Rust making explicit the inherent complexity of managing values and pointers thereof while making sure there is no concurrent mutation, no values moving while pointers to them exist, and no data races. This is just tough in itself. The aforementioned C++ example is just the language being byzantine and giving you 10,000 footguns when you just want to initialise a class.
That's just a list of very simple rules for each kind of type. As a C++-phob person, C++ has a lot of footguns, but this isn't one of them.
There are absolutely places where that is required and in Rust those situations become voodoo to write.
C++ by default has more complexity, but it has the same complexity regardless of domain.
Rust by default has much less complexity, but in obscure situations outside of the beaten path the complexity dramatically ramps up far above C++.
This is not an argument for or against either language, it's a compromise on language design, you can choose to dislike the compromise but that doesn't mean it was the wrong one, it just means you don't like it.
A relatively simple-to-state but complex example: I want variable X to be kept in a register in this function and only written to memory at the end of the function.
That is complex in C/C++ but you can look at decompilation and attempt to coerce the compiler into that.
In rust everything is so abstracted I wouldn't know where to begin looking to coerce the compiler into generating that machine code and might just decide to implement it in ASM, which defeats the point of using a high level language.
Granted, you might go the FFmpeg route and just choose to do that regardless, but Rust makes it much harder.
You don't always need that level of control but when you do it seems absurdly complex.
> That is complex in C/C++ but you can look at decompilation and attempt to coerce the compiler into that.
> In rust everything is so abstracted I wouldn't know where to begin looking
I don't know if I fully understand what you want to do, but (1) controlling register allocation is the realm of inline asm, be it in C, C++, or Rust. And (2) if "nudging" the compiler is what you want, then it's literally the same thing in Rust as in C++, it's a matter of inspecting the asm yourself or plonking your function onto godbolt.
I agree that you will probably just end up writing ASM but it was a trivial example, there are non-trivial examples involving jump tables and unrolling loops etc.
Effectively, weird optimisations that rely on the gap between the abstract machine the compiler is building for and reality. There are just more abstractions in Rust than in C++ by virtue of the safety mechanism; it's just plain not possible to have the one without the other.
The hardware can legally do things that Rust either cannot allow, or can allow only with extremely convoluted code; C/C++ is closer to the metal in that regard.
Don't get me wrong I am all for the right abstractions, it allows insane optimisations that humans couldn't dream of, but there is a flip side.
Rust basically takes the opposite approach of making false positives go to zero, which makes the false negatives go up, which you need to work around with unsafe or type gymnastics.
The third approach is to make both false positives and negatives be zero, by restricting the set of programs, which is what non systems languages do.
That type was not intentionally obfuscated or complex, it is actually pretty common to see such things. YMMV.