I am wondering whether there can be a version of the Ackermann function with better time complexity than the standard definition.
This is not homework; I am just curious. I know the Ackermann function has no practical use besides serving as a performance benchmark, because of its deep recursion, and I know its values grow very large very quickly, so I am not interested in actually computing them for large inputs.
Even though I use Python 3 and the integers won't overflow, I still have finite time. I implemented a version of it myself according to the definition found on Wikipedia and computed the output for extremely small values, just to make sure the output is correct:
def A(m, n):
    if not m:
        return n + 1
    return A(m - 1, A(m, n - 1)) if n else A(m - 1, 1)
The above code is a direct translation of the definition, and it is extremely slow. I don't know how it can be optimized; is it impossible to optimize?
One thing I can think of is memoization, but it seems the recursion runs backwards: each time the function is recursively called, the arguments have not been encountered before, and they decrease rather than increase with each successive call. So every return value of the function needs to be calculated, and memoization doesn't help the first time the function is called with given arguments.
Memoization can only help if you call the function with the same arguments again: instead of recomputing the result, it retrieves the cached one. But calling the function with any input where (m, n) >= (4, 2) will crash the interpreter regardless.
I also implemented another version according to this answer:
def ack(x, y):
    for i in range(x, 0, -1):
        y = ack(i, y - 1) if y else 1
    return y + 1
But it is actually slower:
In [2]: %timeit A(3, 4)
1.3 ms ± 9.75 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [3]: %timeit ack(3, 4)
2 ms ± 59.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Can the Ackermann function be optimized, theoretically? If not, can it be definitively proven that its time complexity cannot be reduced?
I have just tested: both A(3, 9) and A(4, 1) crash the interpreter. The performance of the two functions for A(3, 8):
In [2]: %timeit A(3, 8)
432 ms ± 4.63 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [3]: %timeit ack(3, 8)
588 ms ± 10.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I did some more experiments:
from collections import Counter
from functools import cache
c = Counter()
def A1(m, n):
    c[(m, n)] += 1
    if not m:
        return n + 1
    return A1(m - 1, A1(m, n - 1)) if n else A1(m - 1, 1)

def test(m, n):
    c.clear()
    A1(m, n)
    return c
The arguments indeed repeat.
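For example, counting the calls for a small input shows many argument pairs recurring. A minimal self-contained sketch of the experiment above:

```python
from collections import Counter

c = Counter()

def A1(m, n):
    # count every (m, n) pair the recursion visits
    c[(m, n)] += 1
    if not m:
        return n + 1
    return A1(m - 1, A1(m, n - 1)) if n else A1(m - 1, 1)

c.clear()
A1(3, 4)
pair, hits = c.most_common(1)[0]
print(pair, hits)  # the most frequently revisited argument pair
```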
But surprisingly caching doesn't help at all:
In [9]: %timeit Ackermann = cache(A); Ackermann(3, 4)
1.3 ms ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Caching only helps when the function is called with the same arguments again, as explained:
In [14]: %timeit Ackermann(3, 2)
101 ns ± 0.47 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
I have tested it with different arguments numerous times, and it always gives the same efficiency boost (which is none).
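A likely reason the timing shows no improvement: `cache(A)` returns a new cached wrapper, but the recursive calls inside `A` still refer to the plain, undecorated `A`, so the cache is only consulted for the outermost call (and the `%timeit` line re-creates the cache on every loop iteration anyway). Decorating the function itself, so that the recursion goes through the cache, does help within a single run — a sketch:

```python
from functools import cache

@cache
def A_cached(m, n):
    # the recursive calls now go through the cached wrapper,
    # so repeated (m, n) pairs are looked up instead of recomputed
    if not m:
        return n + 1
    return A_cached(m - 1, A_cached(m, n - 1)) if n else A_cached(m - 1, 1)

print(A_cached(3, 4))  # 125
```

Note that caching does not remove the deep recursion, so larger inputs may still need a higher recursion limit (`sys.setrecursionlimit`).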
Solution
I recently wrote a bunch of solutions based on the same paper that templatetypedef mentioned. Many use generators, one for each m-value, yielding the values for n=0, n=1, n=2, etc. This one might be my favorite.
Explanation
Consider the generator a(m). It yields A(m,0), A(m,1), A(m,2), etc. The definition of A(m,n) uses A(m-1, A(m, n-1)). So a(m), at its index n, yields A(m,n), computed like this:
- A(m, n-1) is taken from the a(m) generator itself at index n-1, which is just the previous value (x) yielded by this generator.
- A(m-1, x) is then taken from the a(m-1) generator at index x. So the a(m) generator iterates over the a(m-1) generator and grabs the value at index i == x.
Benchmark
Here are times for computing all A(m,n) for m≤3 and n≤17, also including templatetypedef's solution:
Note: Even faster (much faster) solutions using math insights/formulas are possible, see my comment and pts's answer. I intentionally didn't do that, as I was interested in coding techniques, for avoiding deep recursion and avoiding re-calculation. I got the impression that that's also what the question/OP wanted, and they confirmed that now (under a deleted answer, visible if you have enough reputation).
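For reference, the kind of math shortcut alluded to above: for m ≤ 3 the Ackermann function has well-known closed forms, so those rows can be computed in constant arithmetic. A sketch (deliberately bypassing the recursion-avoidance techniques this answer is about):

```python
def ackermann_closed(m, n):
    # standard closed forms for the first four rows
    if m == 0:
        return n + 1
    if m == 1:
        return n + 2
    if m == 2:
        return 2 * n + 3
    if m == 3:
        return 2 ** (n + 3) - 3
    raise NotImplementedError("no simple closed form for m >= 4")

print(ackermann_closed(3, 8))  # 2045
```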
Code
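The answer's exact listing is not reproduced here; a minimal sketch of the generator scheme described in the explanation (the names `a` and `ackermann` are illustrative) could look like this:

```python
from itertools import count, islice

def a(m):
    # yields A(m, 0), A(m, 1), A(m, 2), ...
    if not m:
        yield from count(1)               # A(0, n) = n + 1
    else:
        x = 1                             # A(m, 0) = A(m-1, 1): first index needed is 1
        for i, y in enumerate(a(m - 1)):  # walk the a(m-1) generator
            if i == x:                    # grab the value at index x ...
                x = y                     # ... it is both the next yield and the next index
                yield y

def ackermann(m, n):
    # take the n-th value yielded by a(m)
    return next(islice(a(m), n, None))

print(ackermann(3, 4))  # 125
```

Since the generators only nest m deep, this avoids the deep recursion in n entirely.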