The most convincing description of this idea imo is this video by Tsoding: https://www.youtube.com/watch?v=fCoQb-zqYDI
When I was first learning Haskell a million years ago, I was completely confused by the concept of a monad; I could, after enough fighting with the compiler, usually get something working, but it was a stochastic guess-and-check process trying to figure out what `IO` actually meant. Even `Maybe` was confusing to me, because I couldn't really figure out how the hell you relate something like "checking for null" to "writing to the disk".
I can't remember where I saw it, probably on the Haskell wiki somewhere, but when someone pointed out that List is a monad, and after I saw an example of how it worked, I suddenly got it: in a hand-wavey way, a monad is basically just a value with a wrapper context [1], and from a practical perspective that's all it is. In the case of a List, its wrapper context is that there might be 0 or many of those things in there; in the case of a Maybe, its wrapper context is that it might exist or it might not; in the case of IO, its wrapper context is that it's interfacing with the outside world. Once you abstract away the entire idea of context, you suddenly open up an entire world of reusability.
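To make that concrete, here's a minimal sketch of my own (toy examples, not from any particular tutorial) where the exact same do-notation runs over three different wrapper contexts:

```haskell
-- The same abstraction over three different wrapper contexts.

pairEach :: [(Int, Int)]
pairEach = do
  x <- [1, 2]               -- context: zero or many possible values
  y <- [10, 20]
  return (x, y)             -- [(1,10),(1,20),(2,10),(2,20)]

addMaybes :: Maybe Int
addMaybes = do
  x <- Just 1               -- context: the value might not exist
  y <- Nothing
  return (x + y)            -- Nothing, because one step was missing

greet :: IO ()
greet = do
  name <- getLine           -- context: the outside world
  putStrLn ("hello " ++ name)
```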
This is a good tutorial, I will probably be linking it to people if they ever make the mistake of asking about monads.
[1] I don't need a lecture on the minutiae of this; I know that there's a lot more to it in the theory world. I went to graduate school specifically to study functional language verification. I'm keeping it simple.
But it's not a very robust one: it's never true of fast programs on realistic hardware, for example (and hasn't been for a long time now). And all the rule bending (`-fstrict-aliasing`, a bunch of other stuff) exists in this tension between the grade-school natural language paradigm and the reality of computers. I say grade school not to be pejorative, but because that's roughly the boundary where written natural languages begin to have interesting tensions around past and future and simultaneity, changing and not changing.
Functors and applicatives and monads and other type classes like these are the source of endless analogies because there isn't an accepted, broadly understood terminology for "well, it's roughly what would happen if you had a piece of paper, wrote things on it at every statement boundary, and scratched off the old ones" (though Turing and von Neumann did formalize this in useful ways, they just don't generalize well to realistic computers anymore).
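If you want the "piece of paper" picture written down, the standard State monad (from transformers/mtl; my choice of illustration, not something the analogy demands) is pretty much exactly that:

```haskell
import Control.Monad.State

-- Each statement reads what's currently on the paper, writes a new value,
-- and the old value is gone for everything that runs afterwards.
step :: State Int Int
step = do
  x <- get          -- read the current scribble
  put (x + 1)       -- scratch it off and write a new one
  pure x

threeSteps :: (Int, Int)
threeSteps = runState (step >> step >> step) 0   -- evaluates to (2, 3)
```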
Monads are the mathematical object that is forced on you if you want a rigorous way to describe the semantics of program execution in the vicinity of this "common sense" notion. That's really what everyone is dancing around: your program is only well defined with either:
- a big rulebook full of exceptions and edge cases
- a compositional rule strict enough to give some useful predictability but lax enough to admit most useful programs.
It is this rigor/laxity tension, as it plays out between text on a page and gates on a semiconductor, that gives monads a privileged place in the towers of categories. When I worked on Sigma we were among the earlier adopters of ApplicativeDo, for example, because we wanted a slightly different rigor/laxity tradeoff for performance reasons.
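Roughly the shape of that tradeoff, as a sketch (`fetchA` and `fetchB` are hypothetical stand-ins for remote fetches, not anything from Sigma):

```haskell
{-# LANGUAGE ApplicativeDo #-}

-- When the two binds don't depend on each other, GHC's ApplicativeDo can
-- desugar this block to (,) <$> fetchA <*> fetchB, which an
-- Applicative-aware runtime can batch or run concurrently.
fetchA :: IO Int
fetchA = pure 1            -- hypothetical stand-in for a remote fetch

fetchB :: IO Int
fetchB = pure 2            -- hypothetical stand-in for a remote fetch

both :: IO (Int, Int)
both = do
  a <- fetchA              -- does not use b
  b <- fetchB              -- does not use a
  return (a, b)
```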
Monads are what happens when you do shift the giant pile of "back of the book" compiler details that describe program execution semantics into a much simpler set of rules, but at the cost of increasing the barrier to entry because you need to know the rules before you can print "hello world".
The statement as-is breaks pretty much immediately because, while there is a canonical list monad, there isn't just one list monad; there are in fact several[1].
There are several more correct ways of phrasing the idea, among them:
"List can be given a monad instance"
"List forms a monad with pure and bind as defined"
"List is the underlying functor of a monad"
The point is that picking any old list implementation likely doesn't give you a monad without the supporting structure.
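For concreteness, here's a sketch of what "with the supporting structure" means for the canonical choice (a toy newtype with made-up names, mirroring the instance in base rather than quoting it):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A bare list type plus the pure/bind structure that, together,
-- form the canonical list monad. Other lawful choices exist [1].
newtype List a = List { unList :: [a] }
  deriving (Functor, Show)

instance Applicative List where
  pure x = List [x]
  List fs <*> List xs = List [f x | f <- fs, x <- xs]

instance Monad List where
  List xs >>= f = List (concatMap (unList . f) xs)
```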
Will any of these help you learn what a monad is? Likely not. Monadology is a Mary's Room[2] problem; there is a quale, a subjective sensation, when one understands monads having experienced them first hand. Consequently, monad tutorials are the best case against physicalism[3] yet devised.
1. https://hackage.haskell.org/package/exotic-list-monads-1.1.0...
The monad interface only requires ways to construct objects using callbacks. The ‘bind’ operation takes a callback as an argument, but says nothing about when it’s actually called; it could be immediately, deferred, multiple times, or even never. It’s up to the implementation of the monad, and also to the language if it’s a lazy language.
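A quick sketch of that, using a few stock Haskell monads (my examples, nothing special about them):

```haskell
-- How different Monad instances treat the callback passed to (>>=):

neverCalled :: Maybe Int
neverCalled = Nothing >>= \x -> Just (x + 1)      -- callback is never run

calledPerElement :: [Int]
calledPerElement = [1, 2, 3] >>= \x -> [x * 10]   -- callback runs three times

calledLater :: IO Int
calledLater = readLn >>= \x -> pure (x + 1)       -- callback runs whenever the
                                                  -- action is eventually executed
```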
This is basically a framework. Like with other frameworks, the principle is “don’t call us; we’ll call you.” Arbitrary computation can happen between callbacks. The framework can do whatever control flow it wants, and this is what often makes frameworks opaque. Hiding control flow is what frameworks do, for better or worse.
So far, none of this is specific to a Monad. The Monad part comes from the type signature of the callback function passed to flatMap(), which allows ‘bind’ operations to be nested.
Once you know what kind of thing you’re dealing with (frameworks) then you can go into why some frameworks qualify as a monad.
Meaning that every collection is a set of possible inputs to the computation that is provided as the argument to a `flatMap` operation. Each `flatMap`, by definition, returns a new collection of possible outputs for each of the inputs, and each of those collections gets concatenated. Every item in the final output collection represents the result of following some path through the computations, selecting a single item at each step. Importantly, the type of the output of each `flatMap` operation can differ from the input.
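In Haskell terms it's the same picture, just with `>>=` in place of `flatMap` (a toy example of mine):

```haskell
-- Each (>>=) fans out over every possible input, and the element type
-- is allowed to change at each step. Four paths in, four results out.
paths :: [(Int, String)]
paths =
  [1, 2]                  >>= \n ->   -- two possible starting numbers
  [show n, show (n * 10)] >>= \s ->   -- two possible strings per number
  [(n, s)]                            -- one tuple per path through the steps
```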
You can imagine extending this by assigning probabilities, or making the domain continuous (I think...). These extensions would still be monads, just without being simple collections.
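The probability version looks something like this sketch of mine (real probability-monad libraries are fancier, and the names here are made up):

```haskell
-- The "assign probabilities" extension: a weighted list.
-- Still a monad, just no longer a plain collection.
newtype Prob a = Prob { runProb :: [(a, Double)] }
  deriving Show

instance Functor Prob where
  fmap f (Prob xs) = Prob [(f x, p) | (x, p) <- xs]

instance Applicative Prob where
  pure x = Prob [(x, 1)]
  Prob fs <*> Prob xs = Prob [(f x, p * q) | (f, p) <- fs, (x, q) <- xs]

instance Monad Prob where
  Prob xs >>= f = Prob [(y, p * q) | (x, p) <- xs, (y, q) <- runProb (f x)]
```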
It's kind of like how multiplication over whole numbers is repeated addition, but that metaphor becomes less useful for other domains of numbers.
https://hackage.haskell.org/package/Agda-2.6.4.2/docs/Agda-S...
Like, why would you be embarrassed about not understanding the details of MPLS networking if you're not a network engineer who works with MPLS?