
Recursion of Basic Data Structures

Views: 734 · 2024-09-26 15:28:54

Recursion

1) General

Definition

In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem.

Take the recursive traversal of a singly linked list as an example:

void f(Node node) {
    if (node == null) {
        return;
    }
    println("before:" + node.val);
    f(node.next);
    println("after:" + node.val);
}

Description:

  1. The function calls itself; since each call applies the same function, the subproblems are solved in the same regular way
  2. Each call processes a smaller subset of the data than the previous call, and the data eventually shrinks to the point where recursion need not continue
  3. The inner call (processing the subset) must complete before the outer call can be considered complete
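The sketch above can be made runnable with a minimal Node class; the names Node, val and next (and the log list used to make the before/after order easy to check) are illustrative, not from the original:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal singly linked list node (hypothetical names: Node, val, next)
class Node {
    int val;
    Node next;
    Node(int val, Node next) { this.val = val; this.next = next; }
}

public class RecursiveTraversal {
    // Records the output so the before/after order is easy to inspect
    static List<String> log = new ArrayList<>();

    static void f(Node node) {
        if (node == null) {
            return;                    // smallest subproblem: nothing to do
        }
        log.add("before:" + node.val); // descending phase
        f(node.next);                  // recurse into the smaller sublist
        log.add("after:" + node.val);  // returning phase
    }

    public static void main(String[] args) {
        // 1 -> 2 -> 3 -> null
        f(new Node(1, new Node(2, new Node(3, null))));
        log.forEach(System.out::println);
    }
}
```

For the list 1 -> 2 -> 3 this prints before:1, before:2, before:3, then after:3, after:2, after:1.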

Principle

Suppose the list has 3 nodes, with values 1, 2 and 3. The execution of the code above can then be pictured with the following pseudocode

// 1 -> 2 -> 3 -> null  f(1)

void f(Node node = 1) {
    println("before:" + node.val) // 1
    void f(Node node = 2) {
        println("before:" + node.val) // 2
        void f(Node node = 3) {
            println("before:" + node.val) // 3
            void f(Node node = null) {
                if (node == null) {
                    return;
                }
            }
            println("after:" + node.val) // 3
        }
        println("after:" + node.val) // 2
    }
    println("after:" + node.val) // 1
}

Reasoning

  1. Determine if recursive solving can be used
  2. Derive the recursive relationship, i.e., the relationship between the parent problem and the subproblem, and the end condition for recursion

For example, the recurrence relation for traversing the linked list above is

\[f(n) = \begin{cases} stop & n = null \\ f(n.next) & n \neq null \end{cases} \]

  • Going down to the innermost call is called the descending phase
  • Coming back up from the innermost call is called the returning phase
  • During descent, the local variables (and method arguments) of the outer functions do not disappear; they can still be used during the return

2) Single Recursion

E01. Factorial

Compute the factorial by recursion

  • Definition of factorial: \(n! = 1⋅2⋅3⋯(n-2)⋅(n-1)⋅n\), where \(n\) is a natural number, and of course \(0! = 1\)

  • recurrence relation

\[f(n) = \begin{cases} 1 & n = 1\\ n * f(n-1) & n > 1 \end{cases} \]

Code

private static int f(int n) {
    if (n == 1) {
        return 1;
    }
    return n * f(n - 1);
}

Expanded as pseudocode below, assuming n starts at 3

f(int n = 3) { // cannot be solved yet, keep descending
    return 3 * f(int n = 2) { // cannot be solved yet, keep descending
        return 2 * f(int n = 1) {
            if (n == 1) { // solvable, start returning
                return 1;
            }
        }
    }
}

E02. Reverse print string

Print a string in reverse recursively, where n is the index of a character within the string str

  • Descending: n starts at 0 and increases by 1 each time, until n == str.length()
  • Returning: starting from n == str.length(), print on the way back; the order is naturally reversed

recurrence relation

\[f(n) = \begin{cases} stop & n = str.length() \\ f(n+1) & 0 \leq n \leq str.length() - 1 \end{cases} \]

The code is

public static void reversePrint(String str, int index) {
    if (index == str.length()) {
        return;
    }
    reversePrint(str, index + 1);
    System.out.println(str.charAt(index));
}

Expanded as pseudocode below, assuming the string is "abc"

void reversePrint(String str, int index = 0) {
    void reversePrint(String str, int index = 1) {
        void reversePrint(String str, int index = 2) {
            void reversePrint(String str, int index = 3) {
                if (index == str.length()) {
                    return; // returning starts here
                }
            }
            System.out.println(str.charAt(index)); // prints c
        }
        System.out.println(str.charAt(index)); // prints b
    }
    System.out.println(str.charAt(index)); // prints a
}

E03. Binary search (single recursion)

public static int binarySearch(int[] a, int target) {
    return recursion(a, target, 0, a.length - 1);
}

public static int recursion(int[] a, int target, int i, int j) {
    if (i > j) {
        return -1;
    }
    int m = (i + j) >>> 1;
    if (target < a[m]) {
        return recursion(a, target, i, m - 1);
    } else if (a[m] < target) {
        return recursion(a, target, m + 1, j);
    } else {
        return m;
    }
}
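A quick usage sketch of the binary search above (the array and targets are arbitrary examples, and the class name BinarySearchDemo is made up):

```java
public class BinarySearchDemo {
    public static int binarySearch(int[] a, int target) {
        return recursion(a, target, 0, a.length - 1);
    }

    public static int recursion(int[] a, int target, int i, int j) {
        if (i > j) {
            return -1; // empty range: not found
        }
        int m = (i + j) >>> 1; // unsigned shift avoids overflow of (i + j)
        if (target < a[m]) {
            return recursion(a, target, i, m - 1); // search left half
        } else if (a[m] < target) {
            return recursion(a, target, m + 1, j); // search right half
        } else {
            return m;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9};
        System.out.println(binarySearch(a, 7));  // index 3
        System.out.println(binarySearch(a, 2));  // -1: not present
    }
}
```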

E04. Bubble sort (single recursion)

public static void main(String[] args) {
    int[] a = {3, 2, 6, 1, 5, 4, 7};
    bubble(a, 0, a.length - 1);
    System.out.println(Arrays.toString(a));
}

private static void bubble(int[] a, int low, int high) {
    if(low == high) {
        return;
    }
    int j = low;
    for (int i = low; i < high; i++) {
        if (a[i] > a[i + 1]) {
            swap(a, i, i + 1);
            j = i;
        }
    }
    bubble(a, low, j);
}

private static void swap(int[] a, int i, int j) {
    int t = a[i];
    a[i] = a[j];
    a[j] = t;
}
  • low and high bound the unsorted range
  • j records the boundary of the unsorted range (the new high) for the next recursion
    • a swap means there was still disorder
    • after the last swap (no disorder follows it), the left side up to i is still unsorted, while i+1 onward is already sorted
  • The video explains the case that only adjusts the high boundary; refer to the code above to see how the low..high range is handled

E05. Insertion sort (single recursion)

public static void main(String[] args) {
    int[] a = {3, 2, 6, 1, 5, 7, 4};
    insertion(a, 1, a.length - 1);
    System.out.println(Arrays.toString(a));
}

private static void insertion(int[] a, int low, int high) {
    if (low > high) {
        return;
    }
    int i = low - 1;
    int t = a[low];
    while (i >= 0 && a[i] > t) { // shift larger elements right
        a[i + 1] = a[i];
        i--;
    }
    if (i + 1 != low) {
        a[i + 1] = t;
    }
    insertion(a, low + 1, high);
}
  • Sorted region: [0 .. i .. low-1]
  • Unsorted region: [low .. high]
  • The video explains the case that only adjusts the low boundary; refer to the code above to understand the low-1..high range
  • Extension: improve the code by finding the insertion position with a leftmost binary search
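The extension mentioned in the last bullet might look like the sketch below; searchLeft and insertionBinary are names made up here, and the leftmost-style search keeps the sort stable by inserting after any equal elements:

```java
import java.util.Arrays;

public class InsertionWithBinarySearch {
    // Leftmost-style binary search over the sorted prefix a[0..high-1]:
    // returns the first index whose value is greater than t
    static int searchLeft(int[] a, int high, int t) {
        int i = 0, j = high - 1;
        while (i <= j) {
            int m = (i + j) >>> 1;
            if (a[m] <= t) {
                i = m + 1; // equal elements stay to the left => stable sort
            } else {
                j = m - 1;
            }
        }
        return i; // insertion position
    }

    static void insertionBinary(int[] a, int low, int high) {
        if (low > high) {
            return;
        }
        int t = a[low];
        int pos = searchLeft(a, low, t);                 // where t belongs in a[0..low-1]
        System.arraycopy(a, pos, a, pos + 1, low - pos); // shift the tail right by one
        a[pos] = t;
        insertionBinary(a, low + 1, high);
    }

    public static void main(String[] args) {
        int[] a = {3, 2, 6, 1, 5, 7, 4};
        insertionBinary(a, 1, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6, 7]
    }
}
```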

E06. Josephus problem[^16] (single recursion)

\(n\) people stand in a circle and count off starting from \(1\). Each time the count reaches \(m\), that person is killed, and counting restarts from the next person. Who is the last survivor?

Method 1

Push the last survivor a back to its index in the previous round.

f(n,m)   index this round   previous-round layout so that a ends up there   rule
f(1,3)   0                  x x x a                                         (0 + 3) % 2
f(2,3)   1                  x x x 0 a                                       (1 + 3) % 3
f(3,3)   1                  x x x 0 a                                       (1 + 3) % 4
f(4,3)   0                  x x x a                                         (0 + 3) % 5
f(5,3)   3                  x x x 0 1 2 a                                   (3 + 3) % 6
f(6,3)   0                  x x x a

Method 2

Let n be the total number of people and m the count; the solution returns the survivor's index, counting from 0

f(n, m)   process                           rule
f(1, 3)   0
f(2, 3)   0 1 => 1                          3%2=1
f(3, 3)   0 1 2 => 0 1                      3%3=0
f(4, 3)   0 1 2 3 => 3 0 1                  3%4=3
f(5, 3)   0 1 2 3 4 => 3 4 0 1              3%5=3
f(6, 3)   0 1 2 3 4 5 => 3 4 5 0 1          3%6=3

I. Find an equivalent function

The pattern: the next round of counting starts at index \(k = m \% n\)

  • The first person eliminated has index \(k-1\); the remaining \(n-1\) people form a new Josephus ring
  • The next round counts from \(k\), and the indices in order are
    • \(k,\ k+1,\ ...\ ,\ 0,\ 1,\ ...,\ k-2\), e.g. \(3\ 4\ 5\ 0\ 1\) in the example above

Call this function \(g(n-1,m)\); its final result is the same as that of \(f(n,m)\).

II. Find the mapping function

Now find the correspondence between \(g(n-1,m)\) and \(f(n-1,m)\)

\[3 \rightarrow 0 \\ 4 \rightarrow 1 \\ 5 \rightarrow 2 \\ 0 \rightarrow 3 \\ 1 \rightarrow 4 \\ \]

The mapping function is

\[mapping(x) = \begin{cases} x-k & x \in [k..n-1] \\ x+n-k & x \in [0..k-2] \end{cases} \]

is equivalent to the following function

\[mapping(x) = (x + n - k)\%{n} \]

Substitute values to check:

\[3 \rightarrow (3+6-3)\%6 \rightarrow 0 \\ 4 \rightarrow (4+6-3)\%6 \rightarrow 1 \\ 5 \rightarrow (5+6-3)\%6 \rightarrow 2 \\ 0 \rightarrow (0+6-3)\%6 \rightarrow 3 \\ 1 \rightarrow (1+6-3)\%6 \rightarrow 4 \\ \]

in summary

\[f(n-1,m) = mapping(g(n-1,m)) \]

III. Find the inverse mapping function

The mapping function computes y from x; the inverse mapping function recovers x from y.

\[mapping^{-1}(x) = (x + k)\%n \]

Substitute values to check:

\[0 \rightarrow (0+3)\%6 \rightarrow 3 \\ 1 \rightarrow (1+3)\%6 \rightarrow 4 \\ 2 \rightarrow (2+3)\%6 \rightarrow 5 \\ 3 \rightarrow (3+3)\%6 \rightarrow 0 \\ 4 \rightarrow (4+3)\%6 \rightarrow 1 \\ \]

It is therefore possible to find

\[g(n-1,m) = mapping^{-1}(f(n-1,m)) \]

IV. Recurrence

Substituting step by step:

\[\begin{aligned} f(n,m) = \ & g(n-1,m) \\ = \ & mapping^{-1}(f(n-1,m)) \\ = \ & (f(n-1,m) + k) \% n \\ = \ & (f(n-1,m) + m\%n) \% n \\ = \ & (f(n-1,m) + m) \% n \\ \end{aligned} \]

The last simplification step uses a property of the modulo operation:

\((a+b)\%n = (a\%n + b\%n) \%n\), for example

  • \((6+6)\%5 = 2 = (6+6\%5)\%5\)
  • \((6+5)\%5 = 1 = (6+5\%5)\%5\)
  • \((6+4)\%5 = 0 = (6+4\%5)\%5\)

Final recursive

\[f(n,m) = \begin{cases} (f(n-1,m) + m) \% n & n>1\\ 0 & n = 1 \end{cases} \]
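The final recurrence translates directly into code (the class name Josephus is made up):

```java
public class Josephus {
    // f(n, m): 0-based index of the survivor among n people, counting every m-th
    public static int f(int n, int m) {
        if (n == 1) {
            return 0;
        }
        return (f(n - 1, m) + m) % n;
    }

    public static void main(String[] args) {
        System.out.println(f(6, 3)); // 0, matching the f(6,3) row in the tables above
    }
}
```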

3) Multi Recursion

E01. Fibonacci series -- Leetcode 70

  • In the previous example, each recursive function contained only one call to itself, which is called single recursion.
  • If each recursion contains more than one call to itself, it is called multi recursion.

recurrence relation

\[f(n) = \begin{cases} 0 & n=0 \\ 1 & n=1 \\ f(n-1) + f(n-2) & n>1 \end{cases} \]

The following table lists the first few terms of the series

F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13
0 1 1 2 3 5 8 13 21 34 55 89 144 233

Implementation

public static int f(int n) {
    if (n == 0) {
        return 0;
    }
    if (n == 1) {
        return 1;
    }
    return f(n - 1) + f(n - 2);
}

Implementation process

  • Green means executing (corresponding to the descending phase), gray means finished (corresponding to the returning phase)
  • A call cannot return before its descent reaches the bottom; this corresponds to depth-first search

Time complexity

  • The number of recursive calls also follows the Fibonacci pattern: \(2 * f(n+1)-1\)
  • Derivation of the time complexity
    • Fibonacci closed form: \(f(n) = \frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right)\)
    • Simplified: \(f(n) = \frac{1}{2.236}\left({1.618}^n - {(-0.618)}^n\right)\)
    • Substituted into the call-count formula: \(2*\frac{1}{2.236}\left({1.618}^{n+1} - {(-0.618)}^{n+1}\right)-1\)
    • So the time complexity is \(\Theta(1.618^n)\)
  1. More Fibonacci references: [8][9][^10]
  2. The analysis above does not account for the cost of big-number addition

Variant 1 - The Rabbit Problem[^8]

  • In the first month there is a pair of immature rabbits (black; note the smaller size in the picture)
  • In the second month they mature
  • In the third month they can give birth to a new pair of rabbits (blue)
  • All rabbits follow the same pattern; find the number of rabbits in month \(n\)

Analysis

How does the rabbit problem relate to Fibonacci? Let the number of rabbits in month n be \(f(n)\)

  • \(f(n)\) = number of rabbits last month + number of newborn rabbits
  • The [number of newborn rabbits] is in fact the [number of rabbits that matured last month]
  • Since maturing takes one month, the [number that matured last month] equals the [number of rabbits the month before last]
  • The number of rabbits last month is \(f(n-1)\)
  • The number of rabbits the month before last is \(f(n-2)\)

So it is essentially still the Fibonacci series, just starting from its first term
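As a sketch, the rabbit count for month n is just the Fibonacci function from E01, whose values run 1, 1, 2, 3, 5 ... starting at month 1 (the names fib and rabbits are illustrative):

```java
public class Rabbits {
    // Fibonacci as defined in E01
    static int fib(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    // Pairs of rabbits in month n: month 1 -> F1 = 1, month 2 -> F2 = 1, month 3 -> F3 = 2 ...
    static int rabbits(int n) {
        return fib(n);
    }

    public static void main(String[] args) {
        for (int month = 1; month <= 6; month++) {
            System.out.println("month " + month + ": " + rabbits(month)); // 1 1 2 3 5 8
        }
    }
}
```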

Variant 2 - Frog Climbing Stairs

  • The staircase has \(n\) steps
  • The frog must climb to the top, jumping either one step or two steps at a time
  • It can only jump upwards; how many distinct ways are there to reach the top?

Analysis

n   ways to jump                                   pattern
1   (1)                                            none yet
2   (1,1) (2)                                      none yet
3   (1,1,1) (1,2) (2,1)                            none yet
4   (1,1,1,1) (1,2,1) (2,1,1) (1,1,2) (2,2)        a last jump of one step builds on f(3); a last jump of two steps builds on f(2)
5   ...                                            ...

  • So it is essentially still the Fibonacci series, just starting from its second term

  • Corresponding LeetCode problem: 70. Climbing Stairs - LeetCode
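Starting from the second term can be sketched the same way (climb is an illustrative name; the problem assumes n ≥ 1):

```java
public class ClimbStairs {
    static int fib(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    // Ways to climb n stairs with steps of 1 or 2: Fibonacci starting from its second term
    static int climb(int n) {
        return fib(n + 1); // n=1 -> 1, n=2 -> 2, n=3 -> 3, n=4 -> 5
    }

    public static void main(String[] args) {
        System.out.println(climb(4)); // 5 ways, matching the table above
    }
}
```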

E02. Tower of Hanoi[^13] (multi recursion)

The Tower of Hanoi comes from an ancient Indian legend: when the great god Brahma created the world, he made three diamond pillars, and on one of them 64 golden discs were stacked from bottom to top in order of decreasing size. Brahma commanded the Brahmins to move the discs to another pillar, under two rules:

  • Only one disc may be moved at a time
  • A larger disc may never be placed on top of a smaller one

The following animation demonstrates how 4 discs are moved

Simulate the movement of the discs in code and estimate the time complexity

Reasoning

  • Label the pillars a, b, c, and denote the discs 1, 2, 3 ... in order of size; the discs start on a and the target is c

  • With only one disc, this is the minimal problem and can be solved directly

    • Move disc 1 \(a \mapsto c\)

  • With two discs:

    • Disc 1 \(a \mapsto b\)
    • Disc 2 \(a \mapsto c\)
    • Disc 1 \(b \mapsto c\)

  • With three discs:

    • Discs 1-2 \(a \mapsto b\)
    • Disc 3 \(a \mapsto c\)
    • Discs 1-2 \(b \mapsto c\)

  • With four discs:

    • Discs 1-3 \(a \mapsto b\)
    • Disc 4 \(a \mapsto c\)
    • Discs 1-3 \(b \mapsto c\)

Code

import java.util.LinkedList;

public class E02HanoiTower {

    /*
        h(4, a, b, c) -> h(3, a, c, b)   // move the top 3 discs from a to b, via c
                         a -> c          // move the largest disc
                         h(3, b, a, c)   // move the 3 discs from b to c, via a
     */
    static LinkedList<Integer> a = new LinkedList<>();
    static LinkedList<Integer> b = new LinkedList<>();
    static LinkedList<Integer> c = new LinkedList<>();

    static void init(int n) {
        for (int i = n; i >= 1; i--) {
            a.addLast(i);
        }
    }

    static void h(int n, LinkedList<Integer> a,
                  LinkedList<Integer> b,
                  LinkedList<Integer> c) {
        if (n == 0) {
            return;
        }
        h(n - 1, a, c, b);
        c.addLast(a.removeLast());
        print();
        h(n - 1, b, a, c);
    }

    private static void print() {
        System.out.println("-----------------------");
        System.out.println(a);
        System.out.println(b);
        System.out.println(c);
    }

    public static void main(String[] args) {
        init(3);
        print();
        h(3, a, b, c);
    }
}

E03. Yang Hui Triangle[^6]

Analysis

Look at the triangle at a slant, so each entry has a row and column index:

        1
      1   1
    1   2   1
  1   3   3   1
1   4   6   4   1
  • For row \(i\), column \(j\), the value of \([i][j]\) is \([i-1][j-1] + [i-1][j]\)
  • When \(j=0\) or \(i=j\), \([i][j]\) takes the value \(1\)

Code

public static void print(int n) {
    for (int i = 0; i < n; i++) {
        if (i < n - 1) {
            System.out.printf("%" + 2 * (n - 1 - i) + "s", " ");
        }

        for (int j = 0; j < i + 1; j++) {
            System.out.printf("%-4d", element(i, j));
        }
        System.out.println();
    }
}

public static int element(int i, int j) {
    if (j == 0 || i == j) {
        return 1;
    }
    return element(i - 1, j - 1) + element(i - 1, j);
}

Optimization 1

Because this is multi recursion, many recursive calls are repeated, for example

  • element(3, 1) decomposes into
    • element(2, 0) + element(2, 1)
  • and element(3, 2) decomposes into
    • element(2, 1) + element(2, 2)

Here element(2, 1) is called repeatedly, in fact many times over; you can count the total number of recursive calls with static AtomicInteger counter = new AtomicInteger(0)
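For example, instrumenting element with such a counter (a sketch; the class name is made up):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CountedTriangle {
    static AtomicInteger counter = new AtomicInteger(0);

    public static int element(int i, int j) {
        counter.incrementAndGet(); // counts every call, including repeated subproblems
        if (j == 0 || i == j) {
            return 1;
        }
        return element(i - 1, j - 1) + element(i - 1, j);
    }

    public static void main(String[] args) {
        System.out.println(element(4, 2)); // value 6
        System.out.println(counter.get()); // 11 calls, though there are only 8 distinct cells
    }
}
```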

In fact, memoization can be used to optimize this:

public static void print1(int n) {
    int[][] triangle = new int[n][];
    for (int i = 0; i < n; i++) {
        // print spaces (omitted)
        triangle[i] = new int[i + 1];
        for (int j = 0; j <= i; j++) {
            System.out.printf("%-4d", element1(triangle, i, j));
        }
        System.out.println();
    }
}

public static int element1(int[][] triangle, int i, int j) {
    if (triangle[i][j] > 0) {
        return triangle[i][j];
    }

    if (j == 0 || i == j) {
        triangle[i][j] = 1;
        return triangle[i][j];
    }
    triangle[i][j] = element1(triangle, i - 1, j - 1) + element1(triangle, i - 1, j);
    return triangle[i][j];
}
  • The triangle array is accessible inside the recursive function; if \(triangle[i][j]\) already holds a value, the element was computed by an earlier recursive call and need not be recomputed

Optimization 2

public static void print2(int n) {
    int[] row = new int[n];
    for (int i = 0; i < n; i++) {
        // print spaces (omitted)
        createRow(row, i);
        for (int j = 0; j <= i; j++) {
            System.out.printf("%-4d", row[j]);
        }
        System.out.println();
    }
}

private static void createRow(int[] row, int i) {
    if (i == 0) {
        row[0] = 1;
        return;
    }
    for (int j = i; j > 0; j--) {
        row[j] = row[j - 1] + row[j];
    }
}

Note: within each row, the next element can also be computed from the previous one without referring to the previous row; this relies on another property of the Yang Hui triangle, which is not expanded on here.
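For the curious, the property alluded to is presumably the ratio between adjacent binomial coefficients in a row, \(C(i,j) = C(i,j-1) \cdot \frac{i-j+1}{j}\); a sketch (the method name row is made up):

```java
import java.util.Arrays;

public class RowByRatio {
    // Builds row i (0-based) of the triangle using only the previous element in the same row
    static int[] row(int i) {
        int[] row = new int[i + 1];
        row[0] = 1;
        for (int j = 1; j <= i; j++) {
            // C(i, j) = C(i, j-1) * (i - j + 1) / j; the division is always exact here
            row[j] = (int) ((long) row[j - 1] * (i - j + 1) / j);
        }
        return row;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(row(4))); // [1, 4, 6, 4, 1]
    }
}
```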

Other problems

LeetCode has corresponding problems, but recursion is not the way to score well on them, so the related problems are only listed here without worked solutions

Number         Title
Leetcode 118   Yang Hui Triangle
Leetcode 119   Yang Hui Triangle II

4) Recursive Optimization - Memoization

The above code suffers from a lot of repetitive calculations, such as solving for\(f(5)\) recursive decomposition process

As you can see (the ones with the same color are duplicates):

  • \(f(3)\) Repeated 2 times
  • \(f(2)\) Repeated 3 times
  • \(f(1)\) Repeated 5 times
  • \(f(0)\) Repeated 3 times

As \(n\) grows, the number of repeated calls becomes very considerable. How can this be optimized?

Memoization is an optimization technique that stores the results of (usually expensive) function calls so that when the same inputs (subproblems) occur again, the cached result is returned directly, which speeds things up. The improved code:

public static void main(String[] args) {
    int n = 13;
    int[] cache = new int[n + 1];
    Arrays.fill(cache, -1);
    cache[0] = 0;
    cache[1] = 1;
    System.out.println(f(cache, n));
}

public static int f(int[] cache, int n) {
    if (cache[n] != -1) {
        return cache[n];
    }

    cache[n] = f(cache, n - 1) + f(cache, n - 2);
    return cache[n];
}

In the optimized diagram, once a result is cached, its subproblems are not executed again

  • The improved time complexity is\(O(n)\)
  • Please verify the improved results by yourself
  • Please analyze the improved space complexity by yourself

Note

  1. Memoization is a special case of dynamic programming, emphasizing top-down solving
  2. The essence of memoization is trading space for time

5) Recursive Optimization - Tail Recursion

Stack overflow

Compute \(n + (n-1) + (n-2) + ... + 1\) by recursion

public static long sum(long n) {
    if (n == 1) {
        return 1;
    }
    return n + sum(n - 1);
}

On my machine, the stack overflows at \(n = 12000\)

Exception in thread "main" java.lang.StackOverflowError
	at (:10)
	at (:10)
	at (:10)
	at (:10)
	at (:10)
	...

Why?

  • Each method call consumes some stack memory, used to store method parameters, local variables, return address, etc.
  • The memory occupied by a method call is released only when the method ends
  • And as noted before, a recursion does not return until it reaches the deepest call, and outer methods cannot end until inner methods finish
    • For example, inside \(sum(3)\), the expression \(3 + sum(2)\) must be evaluated; until \(sum(2)\) returns, the \(3\) before the plus sign cannot be released
    • See the pseudocode below
long sum(long n = 3) {
    return 3 + long sum(long n = 2) {
        return 2 + long sum(long n = 1) {
            return 1;
        }
    }
}

tail call

If the last step of a function calls a function, it is called a tail call, for example

function a() {
    return b()
}

The following three pieces of code do not count as tail calls.

function a() {
    const c = b()
    return c
}
  • Because the last step is not a call to a function
function a() {
    return b() + 1
}
  • The last step performed is addition
function a(x) {
    return b() + x
}
  • The last step performed is addition

Compilers of some languages[^11] can optimize tail calls, for example

function a() {
    // Do what's in front of you.
    return b()
}

function b() {
    // Do what's in front of you.
    return c()
}

function c() {
    return 1000
}

a()

Pseudocode before optimization

function a() {
    return function b() {
        return function c() {
            return 1000
        }
    }
}

Pseudocode after optimization

a()
b()
c()

Why can only tail calls be optimized?

When a is called

  • When a executes its return, it realizes: there is nothing left for it to do; the result will be provided by b, so a is no longer needed and its memory can be freed

When b is called

  • When b executes its return, it realizes: there is nothing left for it to do; the result will be provided by c, so b is no longer needed and its memory can be freed

If the call in a

  • is not a tail call, e.g. return b() + 1, then a cannot end early, because it still needs b's result to perform the addition

Tail recursion

Tail recursion is a special case of the tail call, where the function called in the last step is the function itself

Using tail recursion avoids stack overflow

Installing Scala

Getting Started with Scala

object Main {
  def main(args: Array[String]): Unit = {
    println("Hello Scala")
  }
}
  • Scala is a close relative of Java; any Java class can be reused
  • Types are written after variables
  • Unit means no return value, similar to void
  • Semicolons at line ends are unnecessary, though adding one is not an error

As before, first write a version that overflows the stack

def sum(n: Long): Long = {
    if (n == 1) {
        return 1
    }
    return n + sum(n - 1)
}
  • In Scala, if the last line is the return value, the return keyword can be omitted

Unsurprisingly, at \(n = 11000\) it still throws an exception.

println(sum(11000))

Exception in thread "main" java.lang.StackOverflowError
	at Main$.sum(:25)
	at Main$.sum(:25)
	at Main$.sum(:25)
	at Main$.sum(:25)
	...

This is because the code above is still not a tail call. To be a tail call:

  1. The last line of code must be a function call
  2. The inner function must be decoupled from the outer function: once the inner call starts executing, it must not depend on the outer function's variables or constants
def sum(n: Long): Long = {
    if (n == 1) {
        return 1
    }
    return n + sum(n - 1) // depends on the outer function's variable n
}

How can we remove the dependency on n once the call executes?

  • Don't wait until the recursion returns to do the addition, otherwise the outer n must be kept alive
  • Pass n along as an argument to the inner function, so that n belongs to the inner call
  • Accumulate while passing the argument, instead of accumulating on the way back:

sum(n - 1, n + accumulator)

The rewritten code is as follows

@tailrec
def sum(n: Long, accumulator: Long): Long = {
    if (n == 1) {
        return 1 + accumulator
    } 
    return sum(n - 1, n + accumulator)
}
  • accumulator serves as an accumulator
  • The @tailrec annotation is provided by Scala to check that a method is tail-recursive
  • This time sum(10000000, 0) works fine and prints 50000005000000

The execution of \(sum(4, 0)\) is illustrated by the pseudocode below

// First call
def sum(n = 4, accumulator = 0): Long = {
    return sum(4 - 1, 4 + accumulator)
}

// Next call to the inner sum, which is done when you pass the parameter, so you don't have to wait for it to come back, and there's no need to keep the outer sum space when the inner sum is called
def sum(n = 3, accumulator = 4): Long = {
    return sum(3 - 1, 3 + accumulator)
}

// Continue calling the inner sum
def sum(n = 2, accumulator = 7): Long = {
    return sum(2 - 1, 2 + accumulator)
}

// Continue calling the inner sum; this is the last sum, and it returns the final result 10; all the other sums have already been freed
def sum(n = 1, accumulator = 9): Long = {
    if (1 == 1) {
        return 1 + accumulator
    }
}

Essentially, tail-recursion optimization turns the function's recursive calls into loop iterations

Converting to a loop to avoid stack overflow

public static void main(String[] args) {
    long n = 100000000;
    long sum = 0;
    for (long i = n; i >= 1; i--) {
        sum += i;
    }
    System.out.println(sum);
}

6) Recursive Time Complexity - Master Theorem[^14]

Given a recurrence of the form

\[T(n) = aT(\frac{n}{b}) + f(n) \]

where

  • \(T(n)\) is the running time of the problem; \(n\) is the data size
  • \(a\) is the number of subproblems
  • \(T(\frac{n}{b})\) is the subproblem running time; each subproblem's data size is \(\frac{n}{b}\) of the original
  • \(f(n)\) is the computation done besides the recursion

Let \(x = \log_{b}{a}\), i.e. \(x = \log_{\text{shrink factor}}{\text{number of subproblems}}\)

then

\[T(n) = \begin{cases} \Theta(n^x) & f(n) = O(n^c),\ c \lt x \\ \Theta(n^x\log{n}) & f(n) = \Theta(n^x) \\ \Theta(n^c) & f(n) = \Omega(n^c),\ c \gt x \end{cases} \]
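The case analysis can be mechanized; masterTheta below is a toy helper (name and output strings invented here) that compares c with x = log_b a, ignoring the theorem's regularity condition and the requirement that the gap between c and x be polynomial:

```java
public class Master {
    // For T(n) = a*T(n/b) + Theta(n^c): describe the resulting bound
    static String masterTheta(double a, double b, double c) {
        double x = Math.log(a) / Math.log(b); // x = log_b(a)
        double eps = 1e-9;
        if (Math.abs(c - x) < eps) {
            return "Theta(n^" + c + " log n)";
        } else if (c < x) {
            return "Theta(n^" + x + ")"; // the recursion dominates
        } else {
            return "Theta(n^" + c + ")"; // f(n) dominates
        }
    }

    public static void main(String[] args) {
        System.out.println(masterTheta(2, 2, 4));  // Example 1 below: Theta(n^4.0)
        System.out.println(masterTheta(16, 4, 2)); // Example 3 below: Theta(n^2.0 log n)
    }
}
```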

Example 1

\(T(n) = 2T(\frac{n}{2}) + n^4\)

  • Here \(x = 1 < 4\); the latter term dominates, so the overall time complexity is \(\Theta(n^4)\)
  • If logarithms feel awkward, instead ask: \(b\) raised to what power equals \(a\)?

Example 2

\(T(n) = T(\frac{7n}{10}) + n\)

  • \(a=1, b=\frac{10}{7}, x=0, c=1\)
  • Here \(x = 0 < 1\); the latter term dominates, so the overall time complexity is \(\Theta(n)\)

Example 3

\(T(n) = 16T(\frac{n}{4}) + n^2\)

  • \(a=16, b=4, x=2, c=2\)
  • Here \(x = 2 = c\), so the time complexity is \(\Theta(n^2 \log{n})\)

Example 4

\(T(n)=7T(\frac{n}{3}) + n^2\)

  • \(a=7, b=3, x=\log_{3}{7} \approx 1.77, c=2\)
  • Here \(x = \log_{3}{7} < 2\); the latter term dominates, so the overall time complexity is \(\Theta(n^2)\)

Example 5

\(T(n) = 7T(\frac{n}{2}) + n^2\)

  • \(a=7, b=2, x=\log_{2}{7} \approx 2.81, c=2\)
  • Here \(x = \log_2{7} > 2\); the former term dominates, so the overall time complexity is \(\Theta(n^{\log_2{7}})\)

Example 6

\(T(n) = 2T(\frac{n}{4}) + \sqrt{n}\)

  • \(a=2, b=4, x = 0.5, c=0.5\)
  • Here \(x = 0.5 = c\), so the time complexity is \(\Theta(\sqrt{n}\ \log{n})\)

Example 7. recursive binary search

int f(int[] a, int target, int i, int j) {
    if (i > j) {
        return -1;
    }
    int m = (i + j) >>> 1;
    if (target < a[m]) {
        return f(a, target, i, m - 1);
    } else if (a[m] < target) {
        return f(a, target, m + 1, j);
    } else {
        return m;
    }
}
  • Number of subproblems: \(a = 1\)
  • Subproblem shrink factor: \(b = 2\)
  • Besides the recursion, only constant-order computation is performed: \(c=0\)

\(T(n) = T(\frac{n}{2}) + n^0\)

  • Here \(x = 0 = c\), so the time complexity is \(\Theta(\log{n})\)

Example 8. recursive merge sort

void split(B[], i, j, A[]) {
    if (j - i <= 1)
        return;
    m = (i + j) / 2;

    // recurse
    split(A, i, m, B);
    split(A, m, j, B);

    // merge
    merge(B, i, m, j, A);
}
  • Number of subproblems: \(a=2\)
  • Subproblem shrink factor: \(b=2\)
  • Besides the recursion, the main time is spent on merging, which can be expressed as \(f(n) = n\)

\(T(n) = 2T(\frac{n}{2}) + n\)

  • Here \(x = 1 = c\), so the time complexity is \(\Theta(n\log{n})\)

Example 9. Recursive quicksort

algorithm quicksort(A, lo, hi) is
  if lo >= hi || lo < 0 then
    return

  // Partition
  p := partition(A, lo, hi)

  // Recursive
  quicksort(A, lo, p - 1)
  quicksort(A, p + 1, hi)
  • Number of subproblems: \(a=2\)
  • Subproblem shrink factor:
    • if the partition is done well, \(b=2\)
    • if the partition is done badly, e.g. partition 1 holds 0 elements and partition 2 holds \(n-1\)
  • Besides the recursion, the main time is spent on partitioning, which can be expressed as \(f(n) = n\)

Scenario 1 - well partitioned

\(T(n) = 2T(\frac{n}{2}) + n\)

  • Here \(x = 1 = c\), so the time complexity is \(\Theta(n\log{n})\)

Scenario 2 - badly partitioned

\(T(n) = T(n-1) + T(1) + n\)

  • The master theorem cannot be used in this case

7) Recursive Time Complexity - Expansion Method

Recurrences like the following cannot be solved by the master theorem

Example 1 - recursive sum

long sum(long n) {
    if (n == 1) {
        return 1;
    }
    return n + sum(n - 1);
}

\(T(n) = T(n-1) + c\), \(T(1) = c\)

Here's how it unfolds

\(T(n) = T(n-2) + c + c\)

\(T(n) = T(n-3) + c + c + c\)

...

\(T(n) = T(n-(n-1)) + (n-1)c\)

  • where \(T(n-(n-1))\) is just \(T(1)\)
  • Substituting gives \(T(n) = c + (n-1)c = nc\)

The time complexity is\(O(n)\)

Example 2 - Recursive Bubble Sort

void bubble(int[] a, int high) {
    if(0 == high) {
        return;
    }
    for (int i = 0; i < high; i++) {
        if (a[i] > a[i + 1]) {
            swap(a, i, i + 1);
        }
    }
    bubble(a, high - 1);
}

\(T(n) = T(n-1) + n\), \(T(1) = c\)

Here's how it unfolds

\(T(n) = T(n-2) + (n-1) + n\)

\(T(n) = T(n-3) + (n-2) + (n-1) + n\)

...

\(T(n) = T(1) + 2 + ... + n = T(1) + (n-1)\frac{2+n}{2} = c + \frac{n^2}{2} + \frac{n}{2} -1\)

time complexity\(O(n^2)\)

Notes:

  • The sum of an arithmetic series is \(\text{number of terms} \times \frac{\text{first term} + \text{last term}}{2}\)

Example 3 - recursive quicksort

The extreme case where the quicksort partition is chosen badly

\(T(n) = T(n-1) + T(1) + n\), \(T(1) = c\)

\(T(n) = T(n-1) + c + n\)

Here's how it unfolds

\(T(n) = T(n-2) + c + (n-1) + c + n\)

\(T(n) = T(n-3) + c + (n-2) + c + (n-1) + c + n\)

...

\(T(n) = T(n-(n-1)) + (n-1)c + 2+...+n = \frac{n^2}{2} + \frac{2cn+n}{2} -1\)

time complexity\(O(n^2)\)

Students who cannot do the derivation by hand can enter the recurrences into an online recurrence solver:

  • Example 1: enter f(n) = f(n - 1) + c, f(1) = c
  • Example 2: enter f(n) = f(n - 1) + n, f(1) = c
  • Example 3: enter f(n) = f(n - 1) + n + c, f(1) = c
