enord 8 months ago

I find the prevailing use of “recursion”, i.e. β-reduction all the way to ground terms, to be an impoverished sense of the term.

By all means, you should be familiar with the evaluation semantics of your runtime environment. If you don’t know the search depth of your input, or can’t otherwise constrain stack height beforehand, beware the specter of symbolic recursion. But recursion is so much more.

Functional reactive programs do not suffer from stack overflow on recursion (implementation details notwithstanding). By Church-Turing, every symbolic-recursive function can be translated to a looped iteration over some memoization of intermediate results. The stack is just a special case of such memoization. Popular functional programming patterns provide other bottled-up memoization strategies: fold/reduce, map/zip/join, either/amb/cond; the list goes on (heh). Your iterating loop is still traversing the solution space in a recursive manner, provided you dot your ‘i’s and carry your remainders correctly.
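For what it's worth, the translation described here can be sketched in Python (the `Node` type and all names are mine, purely for illustration): the explicit list in the loop plays exactly the role the call stack plays in the recursive version.

```python
from dataclasses import dataclass
from typing import Optional

# `Node` is a hypothetical binary-tree type, here purely for illustration.
@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def total_recursive(node: Optional[Node]) -> int:
    # Symbolic recursion: the runtime's call stack remembers what is left to do.
    if node is None:
        return 0
    return node.value + total_recursive(node.left) + total_recursive(node.right)

def total_iterative(node: Optional[Node]) -> int:
    # Same traversal, but the "memoization" is an explicit list of pending nodes.
    pending, acc = [node], 0
    while pending:
        n = pending.pop()
        if n is not None:
            acc += n.value
            pending.append(n.left)
            pending.append(n.right)
    return acc
```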

Heck, any feedback circuit is essentially recursive and I have never seen an IIR-filter blow the stack (by itself, mind you).
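The IIR point can be made concrete with a few lines of Python (the coefficient and names are arbitrary choices of mine): the filter is "recursive" in that each output feeds back into the next step, yet it runs as a flat loop with O(1) state.

```python
# A first-order IIR low-pass filter: recursive in the feedback sense,
# but iterative in execution. No stack growth, just one carried value.
def iir_lowpass(samples, a=0.9):
    y, out = 0.0, []
    for x in samples:
        y = a * y + (1 - a) * x  # feedback: y depends on the previous y
        out.append(y)
    return out
```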

  • taneq 8 months ago

    > By Church-Turing, every symbolic-recursive function can be translated to a looped iteration over some memoization of intermediate results. The stack is just a special case of such memoization.

    Ah, so functional reactive programs don’t suffer from stack overflow on recursion, they just suffer from memoisation overflow? ;)

    Electronic circuits with feedback could be thought of as tail end recursion. :)

    • enord 8 months ago

      Yes indeed! Except you can also do no memoization, such as the goto statement!

      • rad_gruchalski 8 months ago

        And in imperative languages one can get away with no recursion. For example setTimeout(f, 0) in js or go routine in go. Of course one needs an accumulator in that case.
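The same trick can be sketched in Python rather than js (the names are mine): instead of calling itself, each step returns a thunk carrying the accumulator, and a flat driver loop plays the role the event loop plays for setTimeout(f, 0).

```python
# Trampoline sketch: the "recursive" step never calls itself directly,
# so the call stack stays one frame deep no matter how large n is.
def sum_to(n, acc=0):
    if n == 0:
        return acc                         # done: hand back the accumulator
    return lambda: sum_to(n - 1, acc + n)  # schedule the next step

def trampoline(step):
    while callable(step):
        step = step()
    return step
```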

        • enord 8 months ago

          Ah! That’s actually recursion again.

      • Jerrrrrrry 8 months ago

        explain to me, exactly, how it knows what line to go to, and how to get there.

        "go back to 0x00, then jump GOTO return register bits ahead"

        you still have a variable, the goto return register.

        it is your nested depth, the nesting limit til that register overflows.

        please explain to me cuz im confused fr

        • enord 8 months ago

          Memoization is the word here. It can be done in different ways, of which storing the stack frame is one.

jhp123 8 months ago

> It is somewhat surprising that unification is cleaner and easier to implement in a loopy imperative mutational style than in a recursive functional pure style.

I came to a similar conclusion, my implementation uses recursion but also leans on shared mutable variables, which is something I usually avoid especially for more "mathy" code. https://jasonhpriestley.com/short-simple-unification
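For comparison, here is a minimal sketch of the loopy style under discussion; the conventions (variables as strings starting with "?", compound terms as (functor, *args) tuples, no occurs check) are mine, not the article's or the linked post's.

```python
def walk(t, subst):
    # Follow variable bindings to the current representative term.
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(a, b):
    # Loop over a worklist of pairs, threading one mutable substitution.
    subst, todo = {}, [(a, b)]
    while todo:
        x, y = todo.pop()
        x, y = walk(x, subst), walk(y, subst)
        if x == y:
            continue
        if isinstance(x, str) and x.startswith("?"):
            subst[x] = y      # bind a variable (no occurs check in this sketch)
        elif isinstance(y, str) and y.startswith("?"):
            subst[y] = x
        elif (isinstance(x, tuple) and isinstance(y, tuple)
              and len(x) == len(y) and x[0] == y[0]):
            todo.extend(zip(x[1:], y[1:]))  # decompose matching compound terms
        else:
            return None       # clash: distinct functors or arities
    return subst
```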

  • nyrikki 8 months ago

    I think your choice is more readable and maintainable, but did you consider call-with-current-continuation?

    It is easy to dismiss, but this may be a situation to learn where it is actually useful.

    • convolvatron 8 months ago

      The author used the phrase "some wacky continuation thing" to describe that alternative. I agree, but I have also used recursion here, and struggle to see, given the text, why the author finds it so problematic. The simple mapping binds the evaluation order at compile time; maybe that's the real problem they are identifying.

c-cube 8 months ago

I want to disagree, but honestly the recursive method I tend to use is almost isomorphic to a loop + stack/deque. Eager elimination of `x=t` pairs, or eager normalization of substitutions sounds potentially inefficient, imho it's better to check that on the fly upon selecting a pair.

PaulHoule 8 months ago

When you study the old AI programming you eventually realize that non-determinism rules and recursion drools. Recursion is great for certain things that would be hard otherwise (drawing fractals), but it shouldn’t be the first tool you reach for. It’s known to be dangerous in embedded systems because it can blow out the stack. It turns very simple O(N) problems into O(N^2) ones. But unfortunately a generation of computer science pedagogues taught people that recursion > iteration, because BASIC was the teaching language of the 1970s and first half of the 1980s, and they couldn’t get over that Lisp, Forth, UCSD Pascal and other contenders didn’t make it to the finish line.

  • toolslive 8 months ago

    Ok, I'll bite.

      - Recursion allows you to show correctness both in programming and mathematics.
      - Plenty of compilers will reuse the current stack frame in a tail call.
    • PaulHoule 8 months ago

      Tail call recursion doesn't turn the O(N^2) version of Fibonacci into O(N) but at least memoization turns it to O(N log N) or something. It's true that recursive algorithms can be easy to analyze. If you really knew your math you could do Fibonacci as O(1).

      • norir 8 months ago

        There is a simple tail recursive implementation of fibonacci that is O(N) and doesn't use memoization. I'll leave it as an exercise for the reader.
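A sketch of one such implementation, for readers who want to check their answer against the exercise (note that CPython itself does not eliminate tail calls, so the shape of the recursion, not the stack behavior, is the point here): carry the last two Fibonacci numbers as accumulator arguments.

```python
# Tail-recursive Fibonacci in O(n), no memoization table: the pair
# (a, b) holds (F(i), F(i+1)) and the recursive call is in tail position.
def fib(n: int, a: int = 0, b: int = 1) -> int:
    if n == 0:
        return a
    return fib(n - 1, b, a + b)
```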

      • xyzzyz 8 months ago

        You cannot do Fibonacci in O(1). You can reasonably do it in O(log n), though. Recursive Fibonacci is not O(n^2) either, it’s exponential.
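The O(log n) claim (counting big-integer operations as O(1)) can be sketched via the standard fast-doubling identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)²; the code below is my illustration, not anything from the thread.

```python
# Fast-doubling Fibonacci: O(log n) arithmetic operations.
# The helper recurses to depth log2(n), returning the pair (F(k), F(k+1)).
def fib(n: int) -> int:
    def step(k):
        if k == 0:
            return (0, 1)
        a, b = step(k // 2)
        c = a * (2 * b - a)   # F(2m)     where m = k // 2
        d = a * a + b * b     # F(2m + 1)
        return (d, c + d) if k % 2 else (c, d)
    return step(n)[0]
```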

        • tomsmeding 8 months ago

          If you have arbitrary-precision floating-point operations in O(1), you can do Fibonacci in O(1) by Binet's formula. But you don't, so you can't.

          Also, by a similar argument, the O(log n) by matrix exponentiation is really O(n log(n)^2 log(log(n))) by Schönhage-Strassen, I think. (The sequence is exponential in n, so the number of digits is O(n).) I may have miscounted the logs.

          • xyzzyz 8 months ago

            You’re exactly right; I omitted the Schönhage-Strassen just to simplify the point.

        • WolfeReader 8 months ago

          https://en.m.wikipedia.org/wiki/Fibonacci_sequence#Closed-fo...

          If exponentiation is done using floating-point arithmetic, this is O(1).

          • chowells 8 months ago

            If you're fine getting wrong answers after the first 73, sure. Some people have higher standards than that.

          • xyzzyz 8 months ago

            Exponentiation is not a constant time operation, and floating points have finite precision.

            • WolfeReader 8 months ago

              Exponentiation is constant-time on floats. But as noted by sibling comments, the precision is a problem.

              • xyzzyz 8 months ago

                Everything is constant time if your precision is limited. Even the naive recursive algorithm on integers is constant time if your integer size is limited (the constant will just be exponentially big). To be able to meaningfully talk about complexity, you need to assume that something is not limited, and once you do that for floats, you’ll find that their exponentiation is no longer constant time.

          • xigoi 8 months ago

            Floats only have a finite precision.

      • atn34 8 months ago

        The natural recursive implementation is exponential, and the memoized version is O(n).

  • peterfirefly 8 months ago

    > When you study the old AI programming you eventually realize that non-determinism rules and recursion drools.

    This part I agree with.

anon291 8 months ago

Unification, etc, is one of the good use cases of the so-called Tardis monad.

munchler 8 months ago

> Unification has too much spooky action at a distance, a threaded state

Isn’t this why we have the State monad, or even just fold-like functions? There are multiple ways to tame this problem without resorting to mutable state and side-effects. We know how that deal with the devil usually turns out.

(And I have no idea what "spooky action at a distance" means in this context. If anything, functional programming eliminates such shenanigans by expressing behavior clearly in the type system.)

  • anon291 8 months ago

    Spooky action at a distance I presume would refer to the sudden reification of a type variable that is established as an existential based on its use in a non-obvious part of the code base. For example, a common one is writing a monadic expression using `do` notation with an existential monad (but not an explicitly typed one) and then one of your monadic actions happens to have a particular type. Suddenly, you'll get a slew of error messages regarding type mismatches, perhaps missing constraints, etc.

rurban 8 months ago

I get that some mediocre programmers don't understand recursion. But telling other programmers to work with inferior methods, which are harder to understand and maintain, is a sin. He is wrong, of course. Do implement unification with recursion.

  • kragen 8 months ago

    Before enlightenment: chop wood, carry water. After enlightenment: chop wood, carry water. (Wu Li)

    Thirty years ago, before this old monk had begun to practice Chan, I saw mountains as mountains and waters as waters. Later, when I had met teachers and gained some entry, I saw that mountains are not mountains and waters are not waters. But now that I have found a place of rest, I once again see mountains simply as mountains and waters simply as waters. (Qingyuan Xingsi)

    https://www.youtube.com/watch?v=XcMM5-zBCEc

    Pride can be the greatest of all of the obstacles to knowledge. For you, mountains are not yet mountains again.

nyrikki 8 months ago

> It is somewhat surprising that unification is cleaner and easier to implement in a loopy imperative mutational style than in a recursive functional pure style. Typically theorem proving / mathematical algorithms are much cleaner in the second style in my subjective opinion.

This is the Curry–Howard correspondence.

While it seems everyone wants to force individuals into either the formalist or the constructivist camp, it is best to have both in your toolbox.

Context, personal preference, and ability all apply here, but in general it is easier for us mortals to use the constructivist approach when programming.

This is because, by simply choosing not to admit PEM (the principle of the excluded middle) a priori, programs become far more like constructive proofs.

If you look at the similarities between how Church approached the Entscheidungsproblem, the equivalence of Turing machines and the lambda calculus, and the implications of Post's theorem under an assumption of PEM, it will be clear that the formalist approach demands much more of the programmer.

Functional programming grew out of those proofs-as-programs concepts, so if you prefer functional styles, the proofs are the program you write.

Obviously the above is an absurd oversimplification, but I am curious what would have been different if one had started with more intuitionistic forms of unification.

For me, I would have probably just moved to the formalist model to be honest like the author.

I just wanted to point out that there is a real reason many people find themselves writing algorithms along a functional/constructivist path.

  • c-cube 8 months ago

    It's not really clear that functional style is easier/closer to proofs, tbh: https://hillelwayne.com/post/theorem-prover-showdown/ :-).

    There's absolutely a constructivist form of unification, the description of a particular unification algorithm is typically a rewrite system (OP has one here: https://www.philipzucker.com/assets/traat/unify_rules.png) which qualifies as constructive imho (it builds an explicit solution, ie a substitution). But the imperative implementation of such a rewrite system can be readable too (especially if you still have pattern matching as in, say, Rust or OCaml).

    I don't think there's any link to be had with the Curry Howard correspondence here. No types, no lambda calculus, nothing of the sort.

    • nyrikki 8 months ago

      When I said 'easier': if we choose to do logic inside a theory via the Curry-Howard correspondence, we find that it is intuitionistic, and, since you mentioned 'proofs', that may matter to you in a way it doesn't to someone using the lambda calculus as just a programming language.

      The Curry-Howard correspondence gives an isomorphism between programs (Haskell functions) and intuitionistic proof systems; if you assume the proofs are classical internally, you will have a bad time.

      What is important is what we lose in respect to what we often assume due to the typical conventions.

      Note that this has been a popular resource for introducing the concepts for a while, and I was encouraging you to learn the trade-offs of call/cc mostly to help on that path and avoid globals, not because it is the only, or even a good, solution.

      > This course is an introduction to the research trying to connect the proof theory of classical logic and computer science. We omit important and standard topics, among them the connection between the computational interpretation of classical logic and the programming operator callcc.

      Grounding and negation-as-failure are also popular, especially with ML, which under the VC-dimensionality interpretation of learnability is still a choice function due to set shattering.

      https://cdn.aaai.org/ojs/7590/7590-13-11120-1-2-20201228.pdf

      That said, I do retract my suggestion, as you are working in JavaScript; the obviousness, present in Scheme, that call/cc is external is not there.

CodeWriter23 8 months ago

I live by "don't recurse, iterate". Don't get me wrong, recursion seems really clever, if you live in a universe with an infinite stack.

  • norir 8 months ago

    I personally take the opposite approach. Tail recursion fixes the stack problem and just about any problem that can be solved recursively can be reworked to be made tail recursive.

    Using tail recursion instead of loops forces you to name the procedure, which is helpful documentation. Also, unlike a loop, you have to explicitly call back into it. For me, the edge cases are usually clearer with tail recursion and I find it harder to write infinite loops using tail recursion.

    Regular non tail recursion can be fine in many cases so long as the recursion depth is limited. I have tested code that made at least 50000 recursive calls before blowing the stack. In many cases, you can just assume that won't happen or you can rewrite the function in a tail recursive way by adding additional parameters to the function with a slight loss of elegance to the code.
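The rewrite described here ("adding additional parameters") can be sketched in Python (names mine; CPython does not actually eliminate the tail call, so the shape of the code, not its stack behavior, is what the sketch shows):

```python
def factorial(n: int) -> int:
    # Not tail recursive: the multiply happens after the recursive call
    # returns, so each call must keep a stack frame alive.
    if n == 0:
        return 1
    return n * factorial(n - 1)

def factorial_tail(n: int, acc: int = 1) -> int:
    # Tail recursive: the pending work moved into the accumulator, so
    # nothing remains to do after the recursive call.
    if n == 0:
        return acc
    return factorial_tail(n - 1, acc * n)
```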

    • evujumenuk 8 months ago

      In my younger years, I used to employ the Y combinator as a way to avoid naming recursive functions because I didn't want to necessarily dignify single-use code by abstracting it into a named function that I'd never call anywhere else. Instead of calling itself in the non-base case, the anonymous lambda simply calls a first-class function passed into it, with the necessary plumbing abstracted into the generic combinator.

      Fixed-point combinators can be made to work with tail recursion, so I don't think tail recursion forces anyone to name anything. Certainly, how easy it is to go with this particular approach depends a lot on the semantics of your chosen (or imposed) programming language.
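The trick can be sketched in Python with the Z combinator (the applicative-order cousin of Y, needed in an eager language); the example function and names are mine.

```python
# Z combinator: lets an anonymous function recurse by receiving itself,
# so the recursive body never needs a name at its definition site.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Single-use recursive code with no named self-reference: sum of digits.
digit_sum = Z(lambda self: lambda n: n if n < 10 else n % 10 + self(n // 10))
```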

    • jancsika 8 months ago

      > Tail recursion fixes the stack problem and just about any problem that can be solved recursively can be reworked to be made tail recursive.

      To cover all cases, I feel like there needs to be a cross-platform drop-in library to properly blow the stack in languages which feature tail call recursion. :)

  • Someone 8 months ago

    As others indicated, tail recursion exists.

    There are also languages where the standard guarantees code will use tail recursion in certain cases. An example is Scheme (https://people.csail.mit.edu/jaffer/r5rs/Proper-tail-recursi...)

    There also are languages where you can write recursive functions in a way that makes the compiler err out if it cannot make the calls in a tail-recursive way. An example is Scala (https://www.scala-lang.org/api/2.12.1/scala/annotation/tailr...)

    That removes the possibility that a programmer mistakenly believes the compiler will use tail recursion.

    • CodeWriter23 8 months ago

      Try doing a binary tree with tail recursion.

      • tmtvl 8 months ago

        'Doing a binary tree'? You mean visiting every node?

          define walk-tree (tree fn &optional remaining-branches)
            match tree
              (Leaf l) ->
                funcall fn l
                if remaining-branches
                  walk-tree (first remaining-branches) fn (rest remaining-branches)
              (Tree t r l) ->
                funcall fn t
                walk-tree r fn (push l remaining-branches)
      • tylerhou 8 months ago

        It’s not as difficult as you might think; take the recursive code and turn it into CPS style.
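The CPS rewrite mentioned here can be sketched in Python for a tree given as nested (value, left, right) tuples (conventions mine; Python still grows the stack on these calls since it lacks TCO, so the sketch only shows the shape of the transformation):

```python
# CPS tree sum: every recursive call is in tail position once the
# continuation `k` carries all the pending work.
def tree_sum(node, k=lambda v: v):
    if node is None:
        return k(0)
    value, left, right = node
    # Sum the left subtree, then the right, then hand both results to k.
    return tree_sum(left, lambda ls:
           tree_sum(right, lambda rs:
           k(value + ls + rs)))
```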

      • Someone 8 months ago

        A binary tree is a data structure, tail recursion is used in code ⇒ what algorithm are you thinking of that can be implemented in fixed space but is hard or impossible to do with tail recursion?

        • CodeWriter23 8 months ago

          I’m not going to apologize for being harsh. But I will link a resource to help you understand. http://cslibrary.stanford.edu/110/BinaryTrees.html

          • kragen 8 months ago

            This page does not identify an algorithm that can be implemented in fixed space but is hard or impossible to do with tail recursion. No such algorithm exists. It is unclear which of the dozens of beautiful algorithms it presents you mistakenly believed to be one.

            tmtvl's comment at https://news.ycombinator.com/item?id=41983916 shows an algorithm that requires unbounded space.

            Traversing a mutable binary tree can be done in fixed space but is not easier to do with generalized forms of recursion than with tail recursion or imperative iteration.

          • saagarjha 8 months ago

            You should, because not only are you being rude you are also wrong.

            • CodeWriter23 8 months ago

              By saying code is used to access a binary tree? lol

              • saagarjha 8 months ago

                For saying that you cannot write algorithms to deal with binary trees using tail recursion. Saying code is used to access a binary tree is a trivial statement with no value to it. Nobody is going to even bother responding to that because of how obvious it is.

        • CodeWriter23 8 months ago

          are you kidding me right now? How do you think the data structure is accessed? Hint: code

          You young'uns have hidden behind APIs for so long you don't even know how things work.

  • philzook 8 months ago

    I like what I find clearest (a moving target for all sorts of reasons). I typically find recursion clearest for things dealing with terms/ASTs. My coding style usually leans towards comprehensions or fold/map etc rather than loops. That's why I find the loop being clearer for this algorithm surprising.

    • tines 8 months ago

      Same here, the answer is easy for me: recursive algorithms are for recursive data structures.

    • CodeWriter23 8 months ago

      Calling this out: a loop calling a function is, in terms of style, roughly the same as a function calling itself.

  • readthenotes1 8 months ago

    "recursion seems really clever, if you live in a universe with an infinite stack."

    Recursion seems really clever if you live in a universe with clever coworkers maintaining your code a few years from now.

  • lioeters 8 months ago

    It depends on the language and problem space, but I agree in general. Rewriting a recursive algorithm to be iterative is better done at an early stage of a program (or start with iteration to begin with), because it gets increasingly difficult as the program grows in size and complexity. By the time the stack blows it's too late, so better to think about it before, or as, you write the logic.

  • jasdfywu 8 months ago

    Tail recursion does not consume stack frames in languages that support TCO.