In Chapter 9 of The Little Schemer, the authors introduce the Y-combinator, and the penultimate question asks: "What is (Y Y)?" They answer: "Who knows, but it works very hard."
I tried reasoning about it and had a hard time. I assume this just causes an infinite loop, but maybe I am missing something. Are higher-order fixed-point combinators even interesting?
The book defines the Y-combinator as:
(define Y
  (lambda (le)
    ((lambda (f) (f f))
     (lambda (f)
       (le (lambda (x) ((f f) x)))))))
However, since a let is just shorthand for an immediate lambda application, I find a version using a let-binding clearer:
(define Y
  (lambda (exp)
    (let ([a (lambda (f)
               (exp (lambda (x) ((f f) x))))])
      (a a))))
and simulating the length function with it works fine:
((Y (lambda (s)
      (lambda (lat)
        (cond
          ((null? lat) 0)
          (else (add1 (s (cdr lat))))))))
 '(1 2 4))
=> 3
When I try to evaluate (Y Y) in the DrRacket IDE, the program runs out of memory and crashes.
How can I step through and reason about (Y Y), using the let-binding definition of Y above?
Maybe writing it in λ-calculus can help: Y = λf.(λx.f(xx))(λx.f(xx)). It has the following β-reduction (aka execution):
Yf → f(Yf) → f(f(Yf)) → ...
Thus YY → Y(YY) → (YY)(Y(YY)) → ..., and you can keep on reducing forever: the term keeps growing, as does the number of pending applications. In effect, you are applying to itself something that expects a function and applies it to itself recursively, so the reduction never reaches a value.
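If it helps to see the same thing in Scheme, here is an informal hand expansion of (Y Y) using the let-binding definition above. The names a and b are just labels I introduce for the intermediate closures, not anything DrRacket prints:

;; Y as defined above, repeated so the expansion is self-contained.
(define Y
  (lambda (exp)
    (let ([a (lambda (f)
               (exp (lambda (x) ((f f) x))))])
      (a a))))

;; For (Y Y), write  a = (lambda (f) (Y (lambda (x) ((f f) x)))).
;;
;; (Y Y)
;; => (a a)                                        ; the let body, with exp = Y
;; => (Y (lambda (x) ((a a) x)))                   ; a applied to itself
;; => (b b)                                        ; Y's let body again, where
;;      b = (lambda (f)
;;            ((lambda (x) ((a a) x))
;;             (lambda (x) ((f f) x))))
;; => ((lambda (x) ((a a) x)) (lambda (x) ((b b) x)))
;; => ((a a) (lambda (x) ((b b) x)))
;;
;; To evaluate this last application, (a a) in operator position must be
;; evaluated first, which restarts the whole process with fresh closures,
;; and that evaluation is not a tail call. So (Y Y) never produces a value,
;; the pending applications pile up, and DrRacket eventually exceeds its
;; memory limit rather than looping in constant space.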