enord an hour ago

I find the prevailing use of “recursion”, i.e. β-reduction all the way to ground terms, to be an impoverished sense of the term.

By all means, you should be familiar with the evaluation semantics of your runtime environment. If you don't know the search depth of your input or can't otherwise constrain stack height beforehand, beware the specter of symbolic recursion. But recursion is so much more.

Functional reactive programs do not suffer from stack-overflow on recursion (implementation details notwithstanding). By Church–Turing, every symbolic-recursive function can be translated to a looped iteration over some memoization of intermediate results. The stack is just a special case of such memoization. Popular functional programming patterns provide other bottled-up memoization strategies: fold/reduce, map/zip/join, either/amb/cond; the list goes on (heh). Your iterating loop is still traversing the solution space in a recursive manner, provided you dot your ‘i’s and carry your remainders correctly.
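
A minimal sketch of that translation (Scala; the Term type and functions are made up for illustration): the call stack becomes an explicit worklist on the heap, i.e. just one memoization strategy among many.

  sealed trait Term
  case class Leaf(n: Int) extends Term
  case class Node(kids: List[Term]) extends Term

  // Symbolic recursion: depth bounded by the runtime stack.
  def depthRec(t: Term): Int = t match {
    case Leaf(_)    => 1
    case Node(kids) => 1 + kids.map(depthRec).foldLeft(0)(math.max)
  }

  // The same traversal as a loop: the "stack" is now an explicit
  // worklist of pending (term, depth) pairs held on the heap.
  def depthLoop(root: Term): Int = {
    var best = 0
    var work = List((root, 1))
    while (work.nonEmpty) {
      val (t, d) = work.head
      work = work.tail
      best = math.max(best, d)
      t match {
        case Node(kids) => work = kids.map(k => (k, d + 1)) ++ work
        case _          => ()
      }
    }
    best
  }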

Heck, any feedback circuit is essentially recursive and I have never seen an IIR filter blow the stack (by itself, mind you).
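
To make that concrete, a sketch in Scala: a first-order IIR filter y(n) = a*y(n-1) + x(n) is defined recursively, yet evaluates as a constant-space fold.

  // Recursive definition, iterative evaluation: scanLeft threads
  // y(n-1) forward, so nothing accumulates on the stack.
  def iir(a: Double, xs: List[Double]): List[Double] =
    xs.scanLeft(0.0)((yPrev, x) => a * yPrev + x).tail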

jhp123 2 hours ago

> It is somewhat surprising that unification is cleaner and easier to implement in a loopy imperative mutational style than in a recursive functional pure style.

I came to a similar conclusion: my implementation uses recursion but also leans on shared mutable variables, which is something I usually avoid, especially for more "mathy" code. https://jasonhpriestley.com/short-simple-unification

  • nyrikki an hour ago

    I think your choice is more readable and maintainable, but did you consider call-with-current-continuation?

    It is easy to dismiss, but this may be one of the situations where it is actually useful.

nyrikki 2 hours ago

>It is somewhat surprising that unification is cleaner and easier to implement in a loopy imperative mutational style than in a recursive functional pure style. Typically theorem proving / mathematical algorithms are much cleaner in the second style in my subjective opinion.

This is the Curry–Howard correspondence.

While it seems everyone wants to force individuals into either the formalist or the constructivist camp, it is best to have both in your toolbox.

Context, personal preference, and ability all apply here, but in general it is easier for us mortals to use the constructivist approach when programming.

This is because, simply by choosing not to admit PEM (the principle of the excluded middle) a priori, programs become far more like constructive proofs.
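
As a small sketch in Scala of what that buys you: a total program inhabiting a type is a proof of the corresponding proposition, and PEM is precisely what you cannot inhabit.

  // Proofs as programs: this term proves (A and (A -> B)) -> B.
  def modusPonens[A, B](a: A, ab: A => B): B = ab(a)

  // PEM would be a total, pure value of type "A or not-A" for
  // every A. No such program exists constructively:
  // def excludedMiddle[A]: Either[A, A => Nothing] = ???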

If you look at the similarities between how Church approached the Entscheidungsproblem, the equivalence of Turing machines and the lambda calculus, the implications of Post's theorem, and an assumption of PEM, it will be clear that the formalist approach demands much more of the programmer.

Functional programming grew out of those proofs-as-programs concepts, so if you prefer functional styles, the proofs are the programs you write.

Obviously the above is an absurd oversimplification, but I am curious what would have been different if one had started from more intuitionistic forms of unification.

For me, to be honest, I would probably have just moved to the formalist model, like the author.

I just wanted to point out that there is a real reason many people find it natural to write algorithms in a functional/constructivist style.

munchler an hour ago

> Unification has too much spooky action at a distance, a threaded state

Isn’t this why we have the State monad, or even just fold-like functions? There are multiple ways to tame this problem without resorting to mutable state and side-effects. We know how that deal with the devil usually turns out.
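
As a sketch (in Scala, and emphatically not the article's actual code), here is a substitution threaded through unification with a fold instead of being mutated:

  sealed trait Term
  case class Var(name: String) extends Term
  case class Fn(sym: String, args: List[Term]) extends Term

  type Subst = Map[String, Term]

  // Follow a variable's binding, if any.
  def walk(t: Term, s: Subst): Term = t match {
    case Var(v) if s.contains(v) => walk(s(v), s)
    case _                       => t
  }

  // Pure unification: the substitution is threaded, never mutated.
  // (Occurs check omitted for brevity.)
  def unify(a: Term, b: Term, s: Subst): Option[Subst] =
    (walk(a, s), walk(b, s)) match {
      case (Var(x), Var(y)) if x == y => Some(s)
      case (Var(x), t)                => Some(s + (x -> t))
      case (t, Var(x))                => Some(s + (x -> t))
      case (Fn(f, as), Fn(g, bs)) if f == g && as.length == bs.length =>
        as.zip(bs).foldLeft(Option(s)) { case (acc, (x, y)) =>
          acc.flatMap(unify(x, y, _)) // each step feeds the next
        }
      case _ => None
    }

The foldLeft is the State monad's bind done by hand: each step's output substitution becomes the next step's input.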

(And I have no idea what "spooky action at a distance" means in this context. If anything, functional programming eliminates such shenanigans by expressing behavior clearly in the type system.)

  • anon291 28 minutes ago

    Spooky action at a distance I presume would refer to the sudden reification of a type variable that is established as an existential based on its use in a non-obvious part of the code base. For example, a common one is writing a monadic expression using `do` notation with an existential monad (but not an explicitly typed one), and then one of your monadic actions happens to have a particular type. Suddenly, you'll get a slew of error messages regarding type mismatches, perhaps missing constraints, etc.

anon291 26 minutes ago

Unification, etc., is one of the good use cases for the so-called Tardis monad.

PaulHoule 4 hours ago

When you study the old AI programming you eventually realize that non-determinism rules and recursion drools. Recursion is great for certain things that would be hard otherwise (drawing fractals) but it shouldn’t be the first tool you reach for. It’s known to be dangerous in embedded systems because it can blow out the stack, and it turns very simple O(N) problems into O(N^2) ones. But unfortunately a generation of computer science pedagogues taught people that recursion > iteration, because BASIC was the teaching language of the 1970s and first half of the 1980s and they couldn’t get over that Lisp, Forth, UCSD Pascal, and other contenders didn’t make it to the finish line.

  • toolslive 3 hours ago

    Ok, I'll bite.

      - Recursion allows you to show correctness both in programming and mathematics.
      - Plenty of compilers will reuse the current stack frame in a tail call.
    • PaulHoule 3 hours ago

      Tail call recursion doesn't turn the O(N^2) version of Fibonacci into O(N), but at least memoization turns it into O(N log N) or something. It's true that recursive algorithms can be easy to analyze. If you really knew your math you could do Fibonacci in O(1).

      • xyzzyz 3 hours ago

        You cannot do Fibonacci in O(1). You can reasonably do it in O(log n), though. Recursive Fibonacci is not O(n^2) either; it’s exponential.
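
        For the curious, a sketch of the O(log n) version (fast doubling, which is matrix exponentiation in disguise; counting bigint ops as O(1)):

          // F(2k)   = F(k) * (2*F(k+1) - F(k))
          // F(2k+1) = F(k)^2 + F(k+1)^2
          def fib(n: Int): BigInt = {
            def go(n: Int): (BigInt, BigInt) = // returns (F(n), F(n+1))
              if (n == 0) (BigInt(0), BigInt(1))
              else {
                val (a, b) = go(n / 2)
                val c = a * (b * 2 - a)
                val d = a * a + b * b
                if (n % 2 == 0) (c, d) else (d, c + d)
              }
            go(n)._1
          }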

        • tomsmeding 2 hours ago

          If you have arbitrary-precision floating-point operations in O(1), you can do Fibonacci in O(1) by Binet's formula. But you don't, so you can't.
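
          For reference, Binet's formula:

            F(n) = (phi^n - psi^n) / sqrt(5),  phi = (1 + sqrt(5))/2,  psi = (1 - sqrt(5))/2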

          Also, by a similar argument, the O(log n) version by matrix exponentiation is really O(n log(n)^2 log(log(n))) by Schönhage-Strassen, I think. (The sequence is exponential in n, so the number of digits is O(n).) I may have miscounted the logs.

      • norir 2 hours ago

        There is a simple tail-recursive implementation of Fibonacci that is O(N) and doesn't use memoization. I'll leave it as an exercise for the reader.

      • bodhiandphysics 2 hours ago

        No... you can do Fibonacci in O(log n)... you cannot represent (1 + sqrt(5))/2 on a computer.

        • Smaug123 2 hours ago

          You literally just did! The problem is not representing the number; it's computing its digits.

  • peterfirefly 3 hours ago

    > When you study the old AI programming you eventually realize that non-determinism rules and recursion drools.

    This part I agree with.

CodeWriter23 3 hours ago

I live by "don't recurse, iterate". Don't get me wrong, recursion seems really clever, if you live in a universe with an infinite stack.

  • norir 2 hours ago

    I personally take the opposite approach. Tail recursion fixes the stack problem, and just about any problem that can be solved recursively can be reworked to be tail recursive.

    Using tail recursion instead of loops forces you to name the procedure, which is helpful documentation. Also, unlike a loop, you have to explicitly call back into it. For me, the edge cases are usually clearer with tail recursion and I find it harder to write infinite loops using tail recursion.

    Regular non-tail recursion can be fine in many cases so long as the recursion depth is limited. I have tested code that made at least 50000 recursive calls before blowing the stack. In many cases you can just assume that won't happen, or you can rewrite the function in a tail-recursive way by adding additional parameters, with a slight loss of elegance to the code. See the sketch below.
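
    A sketch of that accumulator rewrite (Scala):

      // Natural recursion: the + happens after the call returns,
      // so each call needs a stack frame.
      def sumRec(xs: List[Int]): Int = xs match {
        case Nil    => 0
        case h :: t => h + sumRec(t)
      }

      // Accumulator version: the recursive call is in tail position,
      // so the frame can be reused and the stack stays flat.
      def sumTail(xs: List[Int], acc: Int = 0): Int = xs match {
        case Nil    => acc
        case h :: t => sumTail(t, acc + h)
      }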

  • Someone an hour ago

    As others indicated, tail recursion exists.

    There are also languages where the standard guarantees code will use tail recursion in certain cases. An example is Scheme (https://people.csail.mit.edu/jaffer/r5rs/Proper-tail-recursi...)

    There also are languages where you can write recursive functions in a way that makes the compiler err out if it cannot make the calls in a tail-recursive way. An example is Scala (https://www.scala-lang.org/api/2.12.1/scala/annotation/tailr...)

    That removes the possibility that a programmer mistakenly believes the compiler will use tail recursion.
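
    In Scala that looks roughly like this (a sketch):

      import scala.annotation.tailrec

      // The annotation makes the compiler verify the tail call.
      @tailrec
      def gcd(a: Int, b: Int): Int =
        if (b == 0) a else gcd(b, a % b) // tail position: compiles

      // Annotating a non-tail call is rejected at compile time:
      //   @tailrec def fact(n: Int): Int =
      //     if (n <= 1) 1 else n * fact(n - 1)
      // fails with "could not optimize @tailrec annotated method".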

    • CodeWriter23 25 minutes ago

      Try doing a binary tree with tail recursion.

  • lioeters 15 minutes ago

    It depends on the language and problem space, but I agree in general. Rewriting a recursive algorithm to be iterative is better done at an early stage of a program (or just start with iteration), because it gets increasingly difficult as the program grows in size and complexity. It's too late to think about it when the stack blows, so better to think about it before, or as, you write the logic.

  • philzook 2 hours ago

    I like what I find clearest (a moving target for all sorts of reasons). I typically find recursion clearest for things dealing with terms/ASTs. My coding style usually leans towards comprehensions or fold/map etc. rather than loops. That's why I find it surprising that the loop is clearer for this algorithm.

    • CodeWriter23 31 minutes ago

      Calling this out: in terms of style, a loop calling a function is roughly the same as a function calling itself.

    • tines an hour ago

      Same here; the answer is easy for me: recursive algorithms are for recursive data structures.

  • readthenotes1 2 hours ago

    "recursion seems really clever, if you live in a universe with an infinite stack."

    Recursion seems really clever if you live in a universe with clever coworkers maintaining your code a few years from now.