They're right that trait objects are dynamically sized types, which means they can't be passed by value to functions, but wrong that they need to be boxed; they can instead be put behind a reference. Both of the following are valid types.
type DynFutureBox = Pin<Box<dyn Future<Output = ()>>>;
type DynFutureRef<'f> = Pin<&'f dyn Future<Output = ()>>;
You can see this in the Rust Playground here: https://play.rust-lang.org/?version=stable&mode=debug&editio...

To me traits are like a definition of capabilities. A way to duck type things.
See https://doc.rust-lang.org/reference/types/trait-object.html (`dyn Trait`, runtime dynamic dispatch) vs https://doc.rust-lang.org/reference/types/impl-trait.html (`impl Trait`, compile-time monomorphization)
Traits conceptually are kind of like definitions of capabilities, so you're not really wrong about that; that understanding may even help you.
I read the 3 parts of this website and 'Wowsa'... I'm definitely not going in that direction with Rust. I'll stick to dumb Go code, Swift or Python if I do async heavy stuff.
It's hard enough to write good code, I don't see the point of finding walls to smash my head into.
Think about it: if you write a lot of async code, chances are you have a ton of latency, waiting on I/O, disk, network, etc. Using Rust for this in the first place isn't the wisest choice, since performance isn't as important there; most of your time is spent 'waiting' on things anyway. Besides, Rust wants purity, and I/O is gritty little dirt.
Sorry my comment turned into a bit of a hot take, feel free to disagree. This async stuff, doesn't look fun at all.
It is a shame that the dominance of the "async/await" paradigm has made us think in terms of "synchronous" or "async/await".
> Think about it, if you write a lot of async code, chances are you have a ton of latency, waiting on I/O, disk, network etc
Yes. For almost all code anyone writes, blocking code and threads are perfectly OK.
Asynchronous programming is more trouble, but when you are dealing with a lot of access to those high-latency resources, asynchronous code really shines.
The trouble is that "async/await" is a really bad fit for Rust. Every time you say `await`, invisible magic starts happening. (A state machine starts spinning, I believe, in Rust; I may be mistaken.)
"No invisible magic" was a promise that Rust made to us. What you say is what you mean, and what you mean is what you get.
No more, if you use async/await in Rust.
I really do not understand why people who are comfortable with "invisible magic" are not using a language with a garbage collector - that *really* useful invisible magic.
Asynchronous programming is the bee's knees. It lets you get so much more from your hardware. I learnt to do it implementing telephone switching systems on MS-DOS. We could run seven telephone lines on a 486, with DOS, in (?) about 1991.
Async/await has so poisoned the well in Rust that many Rust people do not understand there is more to asynchronous programming than that
- Multiple cores
- DMA or other dedicated hardware
- GPU programming
- Distributed systems (e.g. the CAN network in your car)
- Threads
- Interrupts
- Event loops
- Coroutines
It's more that an async function in Rust is compiled completely differently: it's turned into a state machine at that point, with the code between 'awaits' being the transitions. In and of itself, it's not actually particularly difficult to grok (I'd say you have about as much an idea of what the resulting machine code looks like as with an optimized non-async function), the headaches are all in the edges of what the language can currently support when compiling under this model.
Honest question, where did you get that promise from?
The 1.0 release didn't really emphasize that: https://blog.rust-lang.org/2015/05/15/Rust-1.0.html
The current rhetoric is more about empowering more people to have confidence in systems programming: https://doc.rust-lang.org/book/foreword.html
Some of graydon's ideas starting almost a decade before 1.0 might have included that https://github.com/graydon/rust-prehistory/blob/df8cc964772b...
but his recent posts on what he would have done differently if he was BDFL include a bunch of stuff that's arguably more magical, not less: https://graydon2.dreamwidth.org/307291.html
I believe the best you can do in other languages is using continuations as the state.
Can you elaborate on that? What about green threads?
If Rust manages to solve the coloring problem of async (e.g. by adopting effect systems [2] or alternatives), then stackful and stackless coroutines syntactic sugar could conceivably exist within the std language (perhaps leaving out stackless on nostd).
The reason you don’t see both stackless and stackful coroutines in a single language like Rust is the coloring problem is made 50% worse.
[1] https://crates.io/crates/may
[2] https://blog.yoshuawuyts.com/extending-rusts-effect-system/
I wasn't trying to recommend may specifically of course. Or are you saying that stackful coroutines must have soundness issues due to missing language features to make it safe?
I am unsure if it's inherent to stackful coroutines or not, it's been a minute since I've dug into that.
To be fair though, I think people generally just avoid TLS when running with green thread systems.
I also don’t think it’s hard to reason about in practice. Tutorials tend to get much deeper into the weeds than you typically need to go.
In the end, very helpful (and hardcore -- like the main author of Tokio) people unblocked me. I am not sure I was left very enlightened though; but I likely didn't stay for long enough for the whole thing to stick firmly into my memory. It's likely that.
I also think you've really got to be willing to be pragmatic when writing async code. If you like to do functional chains, you've got to be willing to let go of that and go for simple imperative code with match statements.
Where I find it gets complicated is dealing with streams and stuff, but for most application authoring use-cases, you can just await stuff that other people have written, or throw it into a `join_all` or whatever.
This slogan sucks. If it compiles, it type checks. Yes, Rust has a more sophisticated type system than Python with annotations, so it catches more type errors.
No, the type system cannot prevent logic bugs or design bugs. Stop pretending it does.
Obviously a type system cannot catch all your logic errors, but you can write code such that more of the logic is encapsulated in types, which _does_ help to catch a lot of logic errors.
There's a strong qualitative difference working with the Rust compiler versus working with Python or C++. Do you have a better suggestion for how to express that?
Also, no, the Rust compiler will happily pass code which will crash your program, all it takes is an out of bounds array access. That's the kind of puffery many of us are tired of. The "if it compiles, it works" slogan is, bluntly, wrong.
The only two languages I’ve worked with that gave me the feeling that I could generally trust a compiling program to be approximately correct are Rust and Haskell. That difference relative to other languages is meaningful enough in practice that it seems to me to be worth a slogan. I believe it’s meant to be more of a “works, relative to what you might expect from other languages” kind of thing versus, “is a completely perfect program.”
And, if you care about maximizing the “if it compiles it works” feeling, it’s possible to use .get() for array access, to put as much logic in the type system as is feasible, etc. This is probably more idiomatic and is generally how I write code, so it does often feel that way to me, regardless of whether it is completely, objectively, literally true.
It's not tautological at all, because the type system in Rust and Haskell is not a trivial condition of the language.
> not particularly effective as a saying or slogan
Neither is "if it compiles it runs", rather less so in fact, everyone is sick of hearing it, and rolls their eyes so hard it's actually audible.
Every one of these 764 bugs compiled and passed type checks:
https://github.com/tokio-rs/tokio/labels/C-bug
Not picking on tokio in particular, mind you, finding and fixing bugs is a sign of quality in a library or program.
> I believe it’s meant to be more of a “works, relative to what you might expect from other languages” kind of thing versus, “is a completely perfect program.”
Which is why I describe it as meaningless puffery. What you're saying here is that you know full well it isn't true, but want to keep saying it anyway. My reply is find a way to express yourself which is true, rather than false. I bet you can figure one out.
^ Your words, and that statement is false. The type system _can_ prevent logic bugs and design bugs; exhaustive pattern matching is an obvious example.
I bet you can find a way to express yourself which is true, e.g. "the type system cannot prevent _all_ logic bugs or design bugs"
But isn't most code going to perform some I/O at some time? Whether is calling an API, writing on disk or writing to a DB?
Rust was a legendary pain to untangle when learning to do async, though as I admitted in a comment down-thread this was also because I didn't stay for long enough for everything to cement itself in my head. It was still an absolute hell to get into. I needed help from Tokio's author to have some pieces of code even compile because I couldn't for the life of me understand why they didn't.
...BUT, with that being said, Rust has a much smaller memory footprint and that is an actual and measurable advantage on cloud deployments. It could be painful to make it compile and run but then it'll give you your money's worth and then some. So it's worth even only for that (and "that" is a lot!), if you are optimizing for those values. I plan to do that in the future. In the meantime Golang is an amazing compromise between productivity and machine performance.
The thread-based I/O example with the compute bound poll loop is kind of strange.
"Join" isn't really that useful when you have unrelated threads running finite tasks. Usually, you let the thread do its thing, finish, put its results on a queue, and exit. Then it doesn't matter who finishes first. Rust's join is actually optional: you don't have to join to close out a thread and reap the thread's resources. It's not like zombies in Unix/Linux, where some resources are tied up until the parent process calls wait().
Loops where you join all the threads that are supposedly finished are usually troublesome. If somebody gets stuck, the joiner stalls. Clean exits from programs with lots of channels are troublesome. Channels with multiple senders don't close until all senders exit, which can be hard to arrange when something detects an error.
In Rust, the main thread is special. (I consider this unfortunate, but web people like it, because inside browsers, the main thread is very special.) If the main thread exits, all the other threads are silently killed.
It's more that we don't do anything to prevent it, other than coarse process-wide memory / CPU time limits. IIRC, Rust-spawned threads on Linux use 2MiB of stack space by default, so that seems like a likely cap.
Note that the playground is only 2 cores and you are sharing with everyone else, so you aren't likely to really benefit.
This is amazing, I use it all the time with no performance issues so I expected it to be much beefier to support many simultaneous users.
How many users does it serve? (Monthly or daily user and/or compilation job sent). And what tricks are used to keep it working? (I suspect it can re-use already compiled binaries of all supported dependencies and only need to compile the user's code and link it, but is there other clever strategies?)
I don't really track users, but over the last 24 hours, there were 47.8k meaningful [1] requests taking a total of 28.2 hours. That ~0.5 requests per second number has been relatively consistent.
> re-use already compiled binaries of all supported dependencies and only need to compile the user's code and link it, but is there other clever strategies?
Yes, we pre-compile all the available dependencies [2] and that's about it.
> I use it all the time with no performance issues
That's good to hear! There's been a long-running bug where the playground binary loses track of the child Docker container (maybe?) and then the machine runs out of memory and the OOM killer often does more harm than good [3]. While trying to pin that down, I've recently caused the entire process to get into what appears to be a complete deadlock where no requests can be serviced at all. This tends to happen while I'm asleep so either I have no chance to debug it before it is auto-killed or the playground is unresponsive for 8+ hours.
[1]: compiling / executing code, running clippy/miri/rustfmt, expanding macros
[2]: https://github.com/rust-lang/rust-playground/blob/c4d00b90aa...
[3]: somehow it does something that ends up killing the network stack and then the machine is basically dead in the water. Very similar to what is reported in https://serverfault.com/q/1125634/119136
Beyond the running costs of the machine itself, has the rust playground been any trouble, or has it mostly been smooth sailing after the initial setup?
I was recently surprised to learn that returning from main() with background threads still running is more or less UB in C++, because those threads can race against static destructors: https://www.reddit.com/r/cpp/comments/1fu0y6n/when_a_backgro.... C doesn't have this issue, though, as far as I know?
I wish join-with-timeout was a more common/supported operation.
Rust inherits this from `pthread_detach()`:
The detached attribute merely determines the behavior of the
system when the thread terminates; it does not prevent the thread
from being terminated if the process terminates using exit(3) (or
equivalently, if the main thread returns).
In principle Rust could have defined its environment to not make the main thread special, but then it would need some additional runtime magic on Unix systems, including having the main thread poll for all other threads to exit, which in turn would require it to add a layer of indirection to the system's threading runtime (e.g. wrapping pthreads) to be able to track all threads.
Not to mention they'd have to be very careful with what they do on the main thread after they start up the application's first thread (e.g. allocating memory via malloc() is out), since there are quite a few things that are not safe to do (like fork() that's not immediately followed by exec()) in a multi-threaded program. So even a "single-threaded" Rust program would become multi-threaded, and assume all those problems.
That makes sense if the main thread is actually doing useful work, but having the main thread do nothing except spawn threads and wait for them to finish before exiting is a pretty common idiom.