Posted by volatileint 6 days ago
I remember fretting about these rules when reading Scott Meyers's Effective Modern C++, only to realize later that it's better not to use auto at all. Explicit types are good types.
const auto start = std::chrono::steady_clock::now();
do_some_work(size);
const auto end = std::chrono::steady_clock::now();
const std::chrono::duration<double> diff = end - start;
std::cout << "diff = " << diff << "; size = " << size << '\n';
Looking up the (current standard's) return type of std::chrono::steady_clock::now() and spelling it out would serve no purpose here. (TP here is an alias for steady_clock::time_point.)
TP start = TP::clock::now();
do_some_work(size);
TP end = TP::clock::now();
Strong agree here. It's not just that it reduces cognitive load; it's that explicit types allow and require the compiler to check your work.
Even if this isn't a problem when the code is first written, it's a nice safety belt for when someone does a refactor 6-12 months (or even 5+ years) down the road that changes a type. With auto, in the best case you might end up with 100+ lines of unintelligible error messages. In the worst case the compiler just trudges on and you have some subtle semantic breakage that takes weeks or months to chase down.
The only exceptions I like are iterators (whose types are a pita in C++), and lambda types, where you sometimes don't have any other good options because you can't afford the dynamic dispatch of std::function.
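To make the safety-belt point concrete, here is a minimal sketch (hypothetical function and types, not from the thread): after a refactor changes a return type, the explicit declaration surfaces the narrowing right at the call site, while auto silently adopts the new type.
#include <cstdint>

// Before a refactor this returned std::int32_t; after the refactor it returns std::int64_t.
std::int64_t compute_offset() { return 5'000'000'000LL; }

void use() {
    std::int32_t a = compute_offset(); // explicit type: the narrowing is visible right here,
                                       // and -Wconversion / -Werror turns it into a hard stop
    auto b = compute_offset();         // auto: silently becomes std::int64_t, and every
                                       // downstream use of 'b' changes meaning without a diagnostic
    (void)a;
    (void)b;
}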
I also prefer not to use auto when getting iterators from STL containers. Often I use a typedef for the STL containers I use, so one can write MyNiceContainerType::iterator.
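For instance, a minimal sketch of that pattern (the alias and function names are hypothetical):
#include <map>
#include <string>

typedef std::map<std::string, int> MyNiceContainerType;

void bump_all(MyNiceContainerType& m) {
    // The typedef keeps the element type visible without spelling out
    // std::map<std::string, int>::iterator at every use site.
    for (MyNiceContainerType::iterator it = m.begin(); it != m.end(); ++it) {
        it->second += 1;
    }
}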
auto var = FunctionCall(...);
Then, in the IDE, hover over auto to show what the actual type is, and then replace auto with that type. Useful when the type is complicated, or is in some nested namespace.
There's nothing special about auto here. It deduces the same type as a template parameter, with the same collapsing rules.
decltype(auto) is a different beast and it's much more confusing. It means, more or less, "preserve the type of the expression, unless given a simple identifier, in which case use the type of the identifier."
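A small sketch of the difference (made-up names, just to illustrate the deduction rules):
#include <type_traits>

int global = 42;
int& ref() { return global; }

void demo() {
    auto a = ref();            // deduced as int, exactly like a by-value template parameter: the reference is dropped
    decltype(auto) b = ref();  // deduced as int&, preserving the expression's type and value category

    int x = 0;
    decltype(auto) c = x;      // plain identifier: the identifier's type, int
    decltype(auto) d = (x);    // parenthesized: treated as an expression, so int&

    static_assert(std::is_same_v<decltype(a), int>);
    static_assert(std::is_same_v<decltype(b), int&>);
    static_assert(std::is_same_v<decltype(c), int>);
    static_assert(std::is_same_v<decltype(d), int&>);
}
The parenthesized case is the one that tends to surprise people.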
So for example I'd write:
var x = new List<Foo>();
Because writing: List<Foo> x = new List<Foo>();
Feels very redundant.
Whereas I'd write:
List<Foo> x = FooBarService.GetMyThings();
Because it's not obvious what the type is otherwise (some IDEs will overlay a hint with the type there, though).
Although with newer language features you can also write:
List<Foo> x = new();
Which is even better.
With good naming it should be pretty obvious it's a Foo, and then either you know the type by heart or you'll need to look up the definition anyway.
With standard containers, you can assume that everyone knows the type, at least at a high level. So knowing whether it's a list, a vector, a stack, a map or a multimap, ... is pretty useful and avoids a lookup.
List<Foo> x = new();
since it gives me better alignment and since it's not confused with dynamic.
Nowadays I only use
var x = new List<Foo>();
in non-merged code as a ghetto TODO if I'm considering base types/interfaces.
auto it = some_container.begin();
Not even once have I wished to know the actual type of the iterator.
IDEs are an invention from the late 1970s, early 1980s.
Having syntax highlighting makes me slightly faster, but I want to still be able to understand things, when looking at a diff or working over SSH and using cat.
For example:
auto a;
will always fail to compile, no matter what flags.
int a;
is valid.
Also, it prevents implicit type conversions: the type auto deduces is exactly the type of the expression on the right.
That's good.
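A tiny sketch of that conversion point (made-up values, just to illustrate):
void conversions() {
    int  a = 7.9;  // compiles: the double is silently truncated to 7
    auto b = 7.9;  // b is double; the type is exactly what was written on the right
    (void)a;
    (void)b;
}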
What do you mean it is not a source of bugs?
I think what they mean, and what I also think, is that the bug does not come from the existence of uninitialized variables. It comes from the USE of uninitialized variables. Making the variables initialized does not make the bug go away; at most it silences it. Making the program invalid instead (which is what UB fundamentally is) is way more helpful for making programs have fewer bugs. That the compiler still emits a program is a defect, although an unfixable one.
To my knowledge, C (and derivatives like C++) is the only common language where the question "Is this a program?" has false positives. It is certainly an interesting choice.
So I see uninitialized variables as a good way to find such logic errors, and therefore consider the advice to always initialize variables bad practice.
Of course, if you already have a good value to initialize the variable with, do it. But if you don't, it's better to leave it uninitialized.
Moreover, this will not cause safety issues in production builds, because you can use `-ftrivial-auto-var-init=zero` to initialize automatic variables to zeroes (`-fhardened` enables this too).
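A minimal sketch of that argument (hypothetical function; the bug is the forgotten key <= 0 branch):
#include <cstdio>

int lookup(int key) {
    int result;              // deliberately left uninitialized
    if (key > 0) {
        result = key * 2;
    }
    // Bug: the key <= 0 path never sets 'result'. Left uninitialized,
    // -Wmaybe-uninitialized, MemorySanitizer, or -ftrivial-auto-var-init=pattern
    // can surface it; writing 'int result = 0;' would quietly return 0 and hide it.
    return result;
}

int main() {
    std::printf("%d\n", lookup(-1)); // reads an indeterminate value: the bug the tools can catch
    return 0;
}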
Regarding the "auto" in C++, and technically in any language, it seems conceptually wrong. The ONLY use-case I can imagine is when the type name is long, and you don't want to type it manually, or the abstractions went beyond your control, which again I don't think is a scalable approach.
In both Rust and C++ we need this because we have unnameable types, so if their type can't be inferred (in C++ deduced) we can't use these types at all.
In both languages all lambdas are unnameable, and in Rust all functions are too (C++ doesn't have a type for functions themselves, only for function pointers, and we can name a function pointer type in either language).
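For example, a minimal sketch (made-up function name): the closure type below cannot be written out, so auto is the natural way to hold it.
#include <algorithm>
#include <vector>

void sort_descending(std::vector<int>& v) {
    // The closure type below has no name we could write down; auto (or a template
    // parameter) is the only way to hold it without the type erasure of std::function.
    auto by_descending = [](int a, int b) { return a > b; };
    std::sort(v.begin(), v.end(), by_descending);
}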
C has this, so I think C++ does as well. You can use a typedef'd function type to declare a function, not just a function pointer.
typedef void * (type) (void * args);
type foo;
a = foo (b);
works?
Per ISO/IEC 9899:TC3: A function type describes a function with specified return type. A function type is characterized by its return type and the number and types of its parameters. A function type is said to be derived from its return type, and if its return type is T, the function type is sometimes called "function returning T". The construction of a function type from a return type is called "function type derivation".
> Per ISO/IEC 9899:TC3:
What is it supposed to tell me?
You can read a "draft" of that document here: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf
[If you've ever been under the impression that "real" people use the actual ISO text, disabuse yourself of that notion, ISO takes ages to turn the same exact words into an official branded document, then charges $$$ for a PDF, ain't nobody got time or money for that]
I can't tell you what they intended by TC3. It might be a typo or it might be some way to refer to a specific draft or a section within that draft. I doubt this particular section changes frequently so I wouldn't worry about it.
It sounds like this text is essentially the same in C23, maybe moved around a bit.
Would also be my default go-to version. Reasonably old, so it's supported everywhere, with some quality-of-life improvements like the initialization syntax, without all the modern fluff.
edit: you literally said this in your original comment. I failed at reading comprehension.
I regularly use that in C, to make sure a function matches an abstract interface. Sure, that often ends up in a function pointer, but not always, and at the point where I declare the type signature, it isn't yet a function pointer.
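A sketch of that pattern (hypothetical names; this compiles as both C and C++): declaring the function through the typedef pins its signature to the interface.
#include <stddef.h>

/* The abstract interface: a handler takes a buffer and its length. */
typedef int handler_fn(const char *buf, size_t len);

/* Declaring the function through the typedef forces its signature to
   match the interface; any mismatch is a compile error at this declaration. */
handler_fn log_handler;

/* The definition itself still has to be spelled out in full. */
int log_handler(const char *buf, size_t len) {
    (void)buf;
    return (int)len;
}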
> but not define
I think that is because the type signature only contains the types but no parameter names, which are required for a definition. This is arbitrary, since for data types the member names are part of the type. It sounds totally fixable, but then you'd either have two kinds of function types, one where all parameter names are given and one where they aren't, and only the former could be used for function definitions. Or you would make names mandatory for function declarations as well.
> It sounds totally fixable, but then you either have two types of function types, one where all parameter names are qualified and one where they aren't and only could use the former for function definitions
Making the names part of the type would be a bit weird, although we have seen stranger things. The biggest problem is that it would be a breaking change at least in C++.
Exactly, I believe that to be the case in C as well. In C23 the rules for types to be considered compatible were actually relaxed, so this proposal wouldn't make any sense. It would be a useless breaking change for no gain, other than a bit less typing and maybe feeding some language layers, so there is really no reason to do that. It actually makes the code less clear, since you would now always need to look up the type definition, which would be against C's philosophy.
That said, there are some contexts in which “auto” definitely improves the situation.
void func(std::vector<double> vec) {
    for (auto &v : vec) {
        // do stuff
    }
}
Here it's obvious that v is of type double.
I've seen much more perf-murdering things caused by
std::map<std::string, int> my_map;
for (const std::pair<std::string, int>& v: my_map) {
    ...
}
than with auto though.
> warning: loop variable 'v' of type 'const std::pair<std::__cxx11::basic_string<char>, int>&' binds to a temporary constructed from type 'std::pair<const std::__cxx11::basic_string<char>, int>' [-Wrange-loop-construct] 11 | for (const std::pair<std::string, int>& v: m) {
As they say, the power of names...
Is it that iterating over map yields something other than `std::pair`, but which can be converted to `std::pair` (with nontrivial cost) and that result is bound by reference?
std::pair<const std::string, int>
vs std::pair<std::string, int>
Is it really? I rather think that a missing & is easier to spot with "auto", simply because there is less text for the eye to parse.
> If you see "for (auto v : vec)" looks good right?
For me the missing & sticks out like a sore thumb.
> It's easy to forget (or not notice) that auto will not resolve to a reference in this case
Every feature can be misused if the user forgets how it works. I don't think people suddenly forget how "auto" works, given how ubiquitous it is.
If auto deduced reference types transparently, it would actually be more dangerous.
So I guess I depart from you there, and thus my issue here is not really about auto.
Things are different in Rust because of lifetimes and destructive moves. In this context, copying would be a bad default indeed.
> because who said that's even achievable, let alone cheap?
Nobody said that. The thing is that user-defined types can be anything from tiny and cheap to huge and expensive. A language has to pick one default and be consistent. You can complain one way or the other.
Yes, languages like Rust can automatically move variables if the compiler can prove that they will not be used anymore. Unfortunately, this is not possible in C++, so the user has to move explicitly (with std::move).
Honestly, why not? A locally used variable sounds very much like something the compiler can reason about. And a variable declared only inside a loop, which is destroyed at the end of each iteration and only read from, should be possible to optimize away. I don't know Rust; I mostly write C.
bool checkFoo(const Foo&);
Foo getFoo();
void example() {
    std::vector<Foo> vec;
    Foo foo = getFoo();
    if (checkFoo(foo)) {
        // *We* know that checkFoo() does not store a
        // reference to 'foo' but the compiler does not
        // know this. Therefore it cannot automatically
        // move 'foo' into the std::vector.
        vec.push_back(std::move(foo));
    }
}
The fundamental problem is that C++ does not track object lifetimes. You would end up with a system where the compiler would move objects only under certain circumstances, which would be very hard to reason about.
My LS can infer it anytime.
On top of that:
* Reduces refactoring overhead: a type can be evolved or substituted, and if it duck-types the same, the autos don't change (see the sketch below this list).
* If using auto instead of explicit types makes your code unclear, the code wasn't clear enough to begin with. Improve the names of the methods being called or of the variables being assigned to.
I can see explicit types when you are shipping a library, as part of the API your library exposes. Then you want types to be explicit and changes to be explicit.
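A sketch of the refactoring point above (hypothetical names): the return type changes from one container to another with the same interface, the auto call site compiles unchanged, whereas a spelled-out std::vector<std::string> declaration would have to be edited.
#include <deque>
#include <string>

// Originally: std::vector<std::string> load_names();
// After a refactor it returns a different container with the same interface:
std::deque<std::string> load_names() { return {"alice", "bob"}; }

void print_count() {
    auto names = load_names(); // unchanged by the refactor; still duck-types the same
    (void)names.size();
}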
```
struct dummy{};
dummy d = ANYTHING_YOU_WANT_GO_GET_THE_TYPE;
```
Compile it with g++, and get the type info from the compilation error :-)
For any seriously templated or metaprogrammed code nowadays, a concept/requires clause is going to make it a lot more obvious what your code is actually doing and give you genuinely useful errors when someone misuses your code.
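For instance, a minimal sketch (made-up function, not from the thread): the constraint states the requirement up front, and a bad call fails with a short "constraints not satisfied" diagnostic instead of pages of instantiation errors.
#include <concepts>
#include <vector>

// The constraint documents the intent and names it in the diagnostic.
template <std::integral T>
T sum(const std::vector<T>& v) {
    T total{};
    for (T x : v) {
        total += x;
    }
    return total;
}

// sum(std::vector<int>{1, 2, 3});    // fine
// sum(std::vector<double>{1.0});     // rejected: std::integral<double> is not satisfied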
1. Consistency across the board (places where it's required for metaprogramming, lambdas, etc). And as a nicety it forces function/method names to be aligned instead of having variable character counts for the return type before the names. IMHO it makes skimming code easier.
2. It's required for certain metaprogramming situations and it makes other situations an order of magnitude nicer. Nowadays you can just say `auto foo()` but if you can constrain the type either in that trailing return or in a requires clause, it makes reading code a lot easier.
3. The big one for everyday users is that the trailing return type is resolved with extra names in scope. For example, if the function is a member function/method, the class scope is automatically included, so you can just write `auto Foo::Bar() -> Baz {}` instead of `Foo::Baz Foo::Bar() {}`.
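A sketch of that point (hypothetical class): with the trailing form, the nested name is found via the class's scope.
#include <vector>

struct Widget {
    using iterator = std::vector<int>::iterator;
    std::vector<int> items;
    auto begin() -> iterator;
};

// Trailing form: 'iterator' is looked up in Widget's scope automatically.
auto Widget::begin() -> iterator { return items.begin(); }

// The classic form needs the qualification spelled out up front:
// Widget::iterator Widget::begin() { return items.begin(); }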
2. It's incredibly rare for it to be required. It's not like 10% of the time, it's more like < 0.1% of the time. Just look at how many functions are in your code and how many of them actually can't be written without a trailing return type. You don't change habits to fit the tiny minority of your code.
3. This is probably the best reason to use it and the most subjective, but still not a particularly compelling argument for doing this everywhere, given how much it diverges from existing practice. And the downside is the scope also includes function parameters, which means people will refer to parameters in the return type much more than warranted, which is decidedly not always a good thing.
I have been programming in C++ for 25 years, so I'm so used to the original syntax that I don't default to auto ... ->, but I will definitely use it when it helps simplify some complex signatures.