Efficient Algorithm to Solve a Recursive Formula


I am given a function f(n), defined for all non-negative integers n, as:

f(0) = 1

f(1) = 1

f(2) = 2

f(2n) = f(n) + f(n + 1) + n (for n > 1)

f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)

My goal is to find, for any given number s, the largest n with f(n) = s. If there is no such n, return None. s can be as large as 10^25.

I have a brute force solution using both recursion and dynamic programming, but neither is efficient enough. What concepts might help me find an efficient solution to this problem?


BEST ANSWER

I want to add a little complexity analysis and estimate the size of f(n).

If you look at one recursive call of f(n), you notice that the input n is basically halved before f is called two more times, where one call always has an even and the other an odd input.
So the call tree is basically a binary tree in which half of the nodes at a given depth k contribute a summand of approximately n/2^(k+1). The depth of the tree is log₂(n).

So the value of f(n) is in total about Θ(n/2 ⋅ log₂(n)).

Just to note: this holds for even and odd inputs, but for even inputs the value is bigger by an additional summand of about n/2. (I use Θ-notation to avoid worrying too much about constants.)
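This size estimate can be sanity-checked numerically. Below is a small sketch: the memoized f is just a direct transcription of the recurrence from the question, and the comparison against (n/2)·log₂(n) only illustrates the order of growth, not the exact constant.

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def f(n):
    # Base cases from the question.
    if n <= 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        return f(m) + f(m + 1) + m      # f(2m) = f(m) + f(m+1) + m, m > 1
    return f(m - 1) + f(m) + 1          # f(2m+1) = f(m-1) + f(m) + 1, m >= 1

# Compare f(n) with the estimate (n/2) * log2(n) for growing n;
# the ratio should stay bounded away from 0 and infinity.
for n in (2**10, 2**14, 2**18):
    estimate = (n / 2) * math.log2(n)
    print(n, f(n), f(n) / estimate)
```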

Now to the complexity:

Naive brute force

To calculate f(n) you have to make Θ(2^(log₂(n))) = Θ(n) recursive calls.
So if you calculate the values of f(n) one after another until you reach s (or conclude that there is no n with f(n) = s), you try O(s) values of n, each costing O(s), which is O(s²) in total.

Dynamic programming

If you store every result of f(n), the time to calculate each new f(n) drops to Θ(1) (but it requires much more memory). So the total time complexity reduces to O(s).

Notice: since we know f(n) ≤ f(n+2) for all n, the values are already sorted within each parity class, so you can run a binary search without sorting anything first.
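The claim f(n) ≤ f(n+2) can be spot-checked with a small script. This is a numerical check over a finite range, not a proof; the check bound N is an arbitrary choice of mine.

```python
# Fill a table of f-values iteratively, then test f(n) <= f(n+2).
N = 10000                                  # arbitrary check range
f = [0] * (2 * N + 2)
f[0], f[1], f[2] = 1, 1, 2
f[3] = f[0] + f[1] + 1                     # f(2*1 + 1): seed by hand
for i in range(2, N + 1):
    f[2 * i] = f[i] + f[i + 1] + i         # even rule, n > 1
    f[2 * i + 1] = f[i - 1] + f[i] + 1     # odd rule, n >= 1

violations = [n for n in range(2 * N) if f[n] > f[n + 2]]
print(len(violations))  # 0 if the claim holds on this range
```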

Using binary search

Algorithm (input is s):

  1. Set l = 1 and r = s.
  2. Set n = (l+r)/2 and round it to the nearest even number.
  3. Calculate val = f(n).
  4. If val == s, return n.
  5. If val < s, set l = n;
    else set r = n.
  6. If r - l <= 2, stop: there is no even solution. Otherwise go to step 2.

If you find a solution, fine. If not, try again, rounding to odd numbers in step 2. Since the goal is the largest n, run both passes; if both find a solution, take the larger n. If neither does, no solution exists at all.

This takes Θ(log(s)) binary-search steps, and each evaluation of f(n) costs O(s), so in total you get O(s⋅log(s)).

As you can see, this is in the same ballpark as the dynamic programming solution, but you don't have to store anything.

Notice: r = s is not a valid initial upper limit for every s. However, if s is big enough, it is. To be safe, you can change the algorithm:
first check whether f(s) < s. If it is, set l = s and r = 2s (or 2s + 1 if it has to be odd).
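Putting the pieces together, here is one possible Python sketch of the whole search (all function names are mine; f is a direct memoized transcription of the recurrence). Searching over k with n = 2k + parity avoids the parity rounding in step 2, and within each parity class the sequence is in fact strictly increasing (an easy induction from the base cases), so each value occurs at most once per class.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    if n <= 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        return f(m) + f(m + 1) + m      # f(2m), m > 1
    return f(m - 1) + f(m) + 1          # f(2m+1), m >= 1

def search(s, parity):
    # Binary search over k with n = 2k + parity, using f(n) <= f(n+2):
    # within one parity class the values are already sorted.
    lo, hi = 0, s + 1                   # generous range: f(2(s+1)) >= s
    while lo <= hi:
        k = (lo + hi) // 2
        v = f(2 * k + parity)
        if v == s:
            return 2 * k + parity
        if v < s:
            lo = k + 1
        else:
            hi = k - 1
    return None

def largest_n(s):
    # The question asks for the largest such n, so try both parities.
    sols = [x for x in (search(s, 0), search(s, 1)) if x is not None]
    return max(sols) if sols else None

print(largest_n(7))   # -> 4, since f(4) = f(2) + f(3) + 2 = 7
```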

ANSWER

The values produced by this recurrence grow with n, both in the 2n and in the 2n + 1 case, so the moment a computed value gets bigger than s you can stop your algorithm.

To make an efficient algorithm you either have to find a nice closed formula that computes the value directly, or compute the values in a small loop, which is much, much more effective than your recursion: the plain recursion makes Θ(n) calls per evaluation and recomputes the same values over and over, while the loop computes every entry exactly once, in O(n) total. This is how the loop can look (note that values[3] has to be seeded by hand, since the even rule only holds for n > 1, and that for s up to 10^25 you would need BigInteger instead of long):

long[] values = new long[1000];
values[0] = 1;
values[1] = 1;
values[2] = 2;
values[3] = values[0] + values[1] + 1;            // f(2*1 + 1)
for (int i = 2; i < values.length / 2; i++) {
    values[2 * i] = values[i] + values[i + 1] + i;        // f(2n), n > 1
    values[2 * i + 1] = values[i - 1] + values[i] + 1;    // f(2n+1), n >= 1
}

And inside this loop, add a condition that breaks out of it early with success or failure.
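A Python version of the same loop with the early-exit condition spelled out might look like this (a sketch; the stopping rule relies on f(n) ≤ f(n+2) from the accepted answer, and the function name is mine):

```python
def find_largest_n(s):
    # Grow the value table until BOTH the last even and the last odd
    # entries exceed s; f(n) <= f(n+2) means no later n can still hit s.
    f = [1, 1, 2, 3]                        # f(3) = f(0) + f(1) + 1
    i = 2
    while f[-2] <= s or f[-1] <= s:
        f.append(f[i] + f[i + 1] + i)       # f(2i), i > 1
        f.append(f[i - 1] + f[i] + 1)       # f(2i+1), i >= 1
        i += 1
    best = None
    for n, v in enumerate(f):
        if v == s:
            best = n                        # keep the largest matching index
    return best

print(find_largest_n(7))   # -> 4
```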

ANSWER

Unfortunately, you don't mention how fast your algorithm needs to be. Perhaps you have to find some really clever rewrite of your formula to make it fast enough; in that case you might want to post this question on a mathematics forum.

The running time of your formula is O(n) for f(2n + 1) and O(n log n) for f(2n), according to the Master theorem, since:

T_even(n) = 2 * T(n / 2) + n / 2

T_odd(n) = 2 * T(n / 2) + 1

So the running time for the overall formula is O(n log n).

So if n is the answer to the problem, this algorithm would run in approx. O(n^2 log n), because you have to perform the formula roughly n times.

You can make this a little bit quicker by storing previous results, but of course, this is a tradeoff with memory.

Below is such a solution in Python.

D = {}

def f(n):
    if n in D:
        return D[n]
    if n == 0 or n == 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        # f(2n) = f(n) + f(n + 1) + n (for n > 1)
        y = f(m) + f(m + 1) + m
    else:
        # f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
        y = f(m - 1) + f(m) + 1
    D[n] = y
    return y

def find(s):
    n = 0
    y = 0
    even_sol = None
    while y < s:
        y = f(n)
        if y == s:
            even_sol = n
            break
        n += 2
    n = 1
    y = 0
    odd_sol = None
    while y < s:
        y = f(n)
        if y == s:
            odd_sol = n
            break
        n += 2
    print(s,even_sol,odd_sol)

find(9992)
ANSWER
  1. Can you precompute the value of f(x) for every x from 0 to MAX_SIZE just once?

    What I mean is: calculate the values by DP.

    f(0) = 1
    f(1) = 1
    f(2) = 2
    f(3) = 3
    f(4) = 7
    f(5) = 4
    ... ...
    f(MAX_SIZE) = ???

  2. If the 1st step is infeasible, exit. Otherwise, sort the values (paired with their index n) from small to big,
    such as 1, 1, 2, 3, 4, 7, ...
    Now you can find whether there exists an n with f(n) = s in O(log(MAX_SIZE)) time by binary search.
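A sketch of this precompute-and-sort idea in Python (the table bound MAX_N and all names are my choices): keeping (value, n) pairs preserves the index, so the largest n for a given value can be recovered, which is what the question asks for. The lookup is only valid for queries whose solution lies below the precomputed bound.

```python
import bisect

MAX_N = 10000                       # precompute bound (my choice)

# Step 1: fill the table once by DP.
f = [0] * MAX_N
f[0], f[1], f[2], f[3] = 1, 1, 2, 3
for i in range(2, MAX_N // 2):
    f[2 * i] = f[i] + f[i + 1] + i          # even rule, n > 1
    f[2 * i + 1] = f[i - 1] + f[i] + 1      # odd rule, n >= 1

# Step 2: sort (value, n) pairs; ties sort by n, so the rightmost
# pair with a given value carries the largest index.
pairs = sorted((v, n) for n, v in enumerate(f))
values = [v for v, _ in pairs]

def lookup(s):
    # O(log MAX_N) binary search for the largest n with f(n) = s,
    # valid only for solutions with n < MAX_N.
    j = bisect.bisect_right(values, s)
    if j and values[j - 1] == s:
        return pairs[j - 1][1]
    return None

print(lookup(7))   # -> 4
```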