Posted by mathgenius 10/28/2024
By all means, you should be familiar with the evaluation semantics of your runtime environment. If you don't know the search depth of your input, or can't otherwise constrain stack height beforehand, beware the specter of symbolic recursion. But recursion is so much more.
Functional reactive programs do not suffer from stack overflow on recursion (implementation details notwithstanding). By Church-Turing, every symbolic-recursive function can be translated to a looped iteration over some memoization of intermediate results. The stack is just a special case of such memoization. Popular functional programming patterns provide other bottled-up memoization strategies: fold/reduce, map/zip/join, either/amb/cond, and the list goes on (heh). Your iterating loop is still traversing the solution space in a recursive manner, provided you dot your 'i's and carry your remainders correctly.
Heck, any feedback circuit is essentially recursive, and I have never seen an IIR filter blow the stack (by itself, mind you).
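To make "the stack is just a special case of such memoization" concrete, here is a small OCaml sketch (illustrative names of my own, not from any library): the same tree sum written once with symbolic recursion and once as a loop over an explicit work list.

(* The type and both functions are illustrative. *)
type tree = Leaf of int | Node of tree * tree

(* Symbolic recursion: the pending branches live on the call stack. *)
let rec sum = function
  | Leaf n -> n
  | Node (l, r) -> sum l + sum r

(* The same traversal as a loop: the pending branches live in an
   ordinary list (the memoized "stack") instead. *)
let sum_iter t =
  let rec go acc = function
    | [] -> acc
    | Leaf n :: rest -> go (acc + n) rest
    | Node (l, r) :: rest -> go acc (l :: r :: rest)
  in
  go 0 [t]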
Ah, so functional reactive programs don’t suffer from stack overflow on recursion, they just suffer from memoisation overflow? ;)
Electronic circuits with feedback could be thought of as tail end recursion. :)
"go back to 0x00, then jump GOTO return register bits ahead"
You still have a variable: the GOTO return register.
It holds your nesting depth, and the nesting limit is wherever that register overflows.
Please explain, I'm genuinely confused.
I came to a similar conclusion: my implementation uses recursion but also leans on shared mutable variables, which is something I usually avoid, especially for more "mathy" code. https://jasonhpriestley.com/short-simple-unification
It is easy to dismiss, but this may be an occasion to learn where it is actually useful.
- Recursion allows you to show correctness in both programming and mathematics.
- Plenty of compilers will reuse the current stack frame in a tail call. Also, by a similar argument, the O(log n) by matrix exponentiation is really O(n log(n)^2 log(log(n))) by Schönhage-Strassen, I think. (The sequence is exponential in n, so the number of digits is O(n).) I may have miscounted the logs; see the count sketched below.
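Counting it through (my own back-of-the-envelope, writing M(b) for the Schönhage-Strassen cost of multiplying two b-digit numbers):

    M(b) = O(b \log b \log\log b)

Fast exponentiation does \Theta(\log n) big-integer multiplications, and the n-th term has \Theta(n) digits, so charging every multiplication at full size gives

    O(\log n \cdot M(n)) = O(n \log^2 n \log\log n),

which matches the count above. But the operand sizes roughly double at each squaring, so the costs form a geometric series, \sum_k M(n / 2^k) = O(M(n)), and the tighter bound is O(n \log n \log\log n): one log fewer.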
If exponentiation is done using floating-point arithmetic, this is O(1)
This part I agree with.
Isn’t this why we have the State monad, or even just fold-like functions? There are multiple ways to tame this problem without resorting to mutable state and side-effects. We know how that deal with the devil usually turns out.
(And I have no idea what "spooky action at a distance" means in this context. If anything, functional programming eliminates such shenanigans by expressing behavior clearly in the type system.)
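For concreteness, a minimal state-monad sketch in OCaml (toy definitions of my own, not from any library): a computation is a function from the old state to a pair of result and new state, so the classic mutable counter is threaded rather than mutated.

(* A state computation maps an old state to (result, new state). *)
let return x s = (x, s)
let ( >>= ) m f s = let (a, s') = m s in f a s'
let get s = (s, s)
let put s' _ = ((), s')

(* Fresh-name generation, the usual mutable-counter case, with no
   mutation: the counter is threaded through invisibly by (>>=). *)
let fresh =
  get >>= fun n ->
  put (n + 1) >>= fun () ->
  return ("x" ^ string_of_int n)

(* Running from counter 0 yields (["x0"; "x1"], 2). *)
let demo = (fresh >>= fun a -> fresh >>= fun b -> return [a; b]) 0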
Thirty years ago, before this old monk had begun to study Chan, I saw mountains as mountains and waters as waters. Later, after I had personally met a wise teacher and gained a point of entry, I saw that mountains are not mountains and waters are not waters. But now that I have found a place of rest, I once again see mountains simply as mountains and waters simply as waters. (Qingyuan Xingsi)
https://www.youtube.com/watch?v=XcMM5-zBCEc
Pride can be the greatest of all obstacles to knowledge. For you, mountains are not yet mountains again.
AIM-443: Lambda: The Ultimate Goto (1977)
https://dspace.mit.edu/bitstream/handle/1721.1/5753/AIM-443....
Lagniappe: https://www.99-bottles-of-beer.net
This is the Curry–Howard correspondence.
While it seems everyone wants to force individuals into either the formalist or the constructivist camp, it is best to have both in your toolbox.
Context, personal preference, and ability all apply here, but in general it is easier for us mortals to use the constructivist approach when programming.
This is because, simply by choosing not to admit the principle of the excluded middle (PEM) a priori, programs become far more like constructive proofs.
If you look at the similarities between how Church approached the Entscheidungsproblem, the equivalence of Turing machines and the lambda calculus, and the implications of Post's theorem under an assumption of PEM, it becomes clear that the formalist approach demands much more of the programmer.
Functional programming grew out of those proofs-as-programs concepts, so if you prefer functional styles, the proofs are the programs you write.
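As a minimal illustration of that proofs-as-programs reading, an OCaml sketch with encodings of my own:

(* A /\ B -> A: the type is the proposition, the program is its proof. *)
let proj1 : ('a * 'b) -> 'a = fun (a, _) -> a

(* Negation encoded as "implies falsity", using an empty type: *)
type empty = |

(* A -> not (not A) is constructively provable: *)
let dni : 'a -> ('a -> empty) -> empty = fun a k -> k a

(* The converse, not (not A) -> A, has no program: that missing
   inhabitant is exactly the classical step (PEM) we chose not to
   admit a priori. *)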
Obviously the above is an absurd oversimplification, but I am curious what would have been different if one had started from more intuitionistic formulations of unification.
To be honest, I would probably have just moved to the formalist model too, like the author.
I just wanted to point out that there is a real reason many people find it natural to write algorithms along a functional/constructivist path.
There's absolutely a constructivist form of unification: the description of a particular unification algorithm is typically a rewrite system (OP has one here: https://www.philipzucker.com/assets/traat/unify_rules.png), which qualifies as constructive imho (it builds an explicit solution, i.e., a substitution). But the imperative implementation of such a rewrite system can be readable too (especially if you still have pattern matching as in, say, Rust or OCaml).
I don't think there's any link to be had with the Curry Howard correspondence here. No types, no lambda calculus, nothing of the sort.
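For what it's worth, those rewrite rules (delete, eliminate, decompose, clash) translate almost line for line into a function. A sketch in OCaml, with a term representation and names of my own choosing rather than OP's:

type term = Var of string | Fn of string * term list

(* Substitutions as association lists from variable names to terms. *)
let rec walk subst = function
  | Var v as t ->
      (match List.assoc_opt v subst with
       | Some t' -> walk subst t'
       | None -> t)
  | t -> t

let rec occurs subst v t =
  match walk subst t with
  | Var v' -> v = v'
  | Fn (_, args) -> List.exists (occurs subst v) args

let rec unify subst t1 t2 =
  match walk subst t1, walk subst t2 with
  | Var v, Var v' when v = v' -> Some subst          (* delete *)
  | Var v, t | t, Var v ->
      if occurs subst v t then None                  (* occurs check *)
      else Some ((v, t) :: subst)                    (* eliminate *)
  | Fn (f, a1), Fn (g, a2) when f = g && List.length a1 = List.length a2 ->
      (* decompose: unify arguments pairwise, threading the substitution *)
      List.fold_left2
        (fun acc x y ->
           match acc with
           | None -> None
           | Some s -> unify s x y)
        (Some subst) a1 a2
  | _ -> None                                        (* clash *)

(* Example: unifying f(X, g(Y)) with f(h(Z), g(Z)) yields
   Some [("Y", Var "Z"); ("X", Fn ("h", [Var "Z"]))]. *)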
The Curry-Howard correspondence gives an isomorphism between programs (e.g., Haskell functions) and intuitionistic proof systems; if you assume the proofs are classical internally, you will have a bad time.
What is important is what we lose relative to what we often assume under the typical conventions.
Note that this has been a popular resource for introducing the concepts for a while, and I was encouraging you mostly to learn the trade-offs of call/cc to help on that path and avoid globals, not claiming that it is the only or even a good solution.
> This course is an introduction to the research trying to connect the proof theory of classical logic and computer science. We omit important and standard topics, among them the connection between the computational interpretation of classical logic and the programming operator callcc.
Grounding and negation as failure are also popular, especially with ML, which under the VC-dimension interpretation of learnability is still a choice function due to set shattering.
https://cdn.aaai.org/ojs/7590/7590-13-11120-1-2-20201228.pdf
That said, I do retract my suggestion since you are in JavaScript: the obviousness, in Scheme, that call/cc is external is not there.
Using tail recursion instead of loops forces you to name the procedure, which is helpful documentation. Also, unlike a loop, you have to explicitly call back into it. For me, the edge cases are usually clearer with tail recursion and I find it harder to write infinite loops using tail recursion.
Regular non-tail recursion can be fine in many cases so long as the recursion depth is limited. I have tested code that made at least 50,000 recursive calls before blowing the stack. In many cases, you can just assume that won't happen, or you can rewrite the function in a tail-recursive way by adding additional parameters, with a slight loss of elegance to the code.
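A standard instance of that rewrite, sketched in OCaml with illustrative names:

(* Naive version: the multiply happens after the call returns, so
   every call keeps a stack frame alive. *)
let rec fact n =
  if n = 0 then 1 else n * fact (n - 1)

(* The extra parameter carries the pending work as an accumulator;
   the recursive call is now in tail position, at some cost in
   elegance. *)
let fact' n =
  let rec go acc n =
    if n = 0 then acc else go (acc * n) (n - 1)
  in
  go 1 n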
Fixed-point combinators can be made to work with tail recursion, so I don't think tail recursion forces anyone to name anything. Certainly, how easy it is to go with this particular approach depends a lot on the semantics of your chosen (or imposed) programming language.
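For instance, a strict fixed-point combinator keeps the recursive call in tail position while leaving the looping function itself anonymous. A small OCaml sketch (only fix gets a name):

(* fix f passes the function to itself, so the body can recur
   through its "self" parameter without being named. *)
let fix f = let rec g x = f g x in g

(* The self call is in tail position, so with tail-call elimination
   the stack stays flat even for large n. *)
let countdown = fix (fun self n -> if n = 0 then () else self (n - 1))

(* countdown 10_000_000 completes without growing the stack. *)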
To cover all cases, I feel like there needs to be a cross-platform drop-in library to properly blow the stack in languages which feature tail call recursion. :)
There are also languages where the standard guarantees code will use tail calls in certain cases. An example is Scheme (https://people.csail.mit.edu/jaffer/r5rs/Proper-tail-recursi...)
There are also languages where you can write recursive functions in a way that makes the compiler error out if it cannot make the calls tail-recursive. An example is Scala (https://www.scala-lang.org/api/2.12.1/scala/annotation/tailr...)
That removes the possibility that a programmer mistakenly believes the compiler will use tail recursion.
(defstruct leaf value)
(defstruct tree value left right)

;; Walk a binary tree using only tail calls: the pending branches
;; are carried in an explicit list rather than on the call stack.
(defun walk-tree (tree fn &optional remaining-branches)
  (etypecase tree
    (leaf
     (funcall fn (leaf-value tree))
     (when remaining-branches
       (walk-tree (first remaining-branches) fn (rest remaining-branches))))
    (tree
     (funcall fn (tree-value tree))
     (walk-tree (tree-left tree) fn
                (cons (tree-right tree) remaining-branches)))))

tmtvl's comment at https://news.ycombinator.com/item?id=41983916 shows an algorithm that requires unbounded space.
Traversing a mutable binary tree can be done in fixed space but is not easier to do with generalized forms of recursion than with tail recursion or imperative iteration.
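For example, Morris traversal visits a mutable binary tree in order using only constant extra space, by temporarily threading right pointers back to the current node. A sketch in OCaml, assuming this mutable-node representation (names mine):

type 'a node = {
  value : 'a;
  mutable left : 'a node option;
  mutable right : 'a node option;
}

(* In-order visit in O(1) extra space; all loop calls are tail calls. *)
let morris_inorder visit root =
  let rec loop cur =
    match cur with
    | None -> ()
    | Some n ->
        (match n.left with
         | None ->
             visit n.value;
             loop n.right
         | Some l ->
             (* rightmost node of the left subtree, stopping at a thread *)
             let rec pred p =
               match p.right with
               | Some r when r != n -> pred r
               | _ -> p
             in
             let p = pred l in
             (match p.right with
              | None ->
                  p.right <- Some n;   (* install a thread back to n *)
                  loop n.left
              | Some _ ->
                  p.right <- None;     (* thread found: second visit *)
                  visit n.value;
                  loop n.right))
  in
  loop root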
You young'uns have hidden behind APIs for so long you don't even know how things work.
Recursion seems really clever if you live in a universe with clever coworkers maintaining your code a few years from now.