
Recursion

Recursion is a technique where a function calls itself to solve smaller instances of the same problem. Every recursive function needs a base case to stop the recursion and a recursive case that breaks the problem down. Recursion is the foundation for divide-and-conquer algorithms, tree traversals, backtracking, and dynamic programming.

How Recursion Works

A recursive function solves a problem by reducing it to one or more smaller sub-problems of the same type. Every correct recursive function has two essential parts:

  • Base case: The simplest instance of the problem that can be answered directly without further recursion. Without a base case, recursion would continue indefinitely.
  • Recursive case: The function calls itself with a smaller or simpler input, making progress toward the base case with each call.

When a function calls itself, the current execution context is pushed onto the call stack. Each stack frame holds the function's local variables and the return address where execution resumes once the recursive call completes. When the base case is reached, the frames unwind (pop off the stack) one by one, passing their results back up the chain.
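To make the push/unwind behavior concrete, here is a small Python sketch (the `depth` parameter and the printed trace are illustrative additions, not part of the standard pattern) in which the indentation mirrors the call stack growing and then shrinking:

```python
def depth_sum(n, depth=0):
    """Sum 0..n recursively, printing when each frame is pushed and popped."""
    print("  " * depth + f"push depth_sum({n})")
    if n == 0:                        # base case: answer directly
        result = 0
    else:                             # recursive case: shrink n toward 0
        result = n + depth_sum(n - 1, depth + 1)
    print("  " * depth + f"pop  depth_sum({n}) -> {result}")
    return result

depth_sum(3)   # returns 6; each indentation level is one stack frame
```

Notice that all the "pop" lines appear only after the deepest "push": no frame can return its result until the frames beneath it have finished.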

Classic examples of recursion:

  • Factorial: n! = n * (n-1)! with base case 0! = 1.
  • Fibonacci: fib(n) = fib(n-1) + fib(n-2) with base cases fib(0) = 0 and fib(1) = 1.
  • Tree traversal: Visit the current node, then recursively traverse left and right subtrees.

Stack overflow: If the recursion depth becomes too large (e.g., computing fib(100000) without optimization), the call stack exceeds its memory limit and the program crashes with a stack overflow error. Python's default recursion limit is 1000 frames; C++ and Java limits depend on the thread stack size.
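In Python, the limit mentioned above can be inspected with `sys.getrecursionlimit()`. The following sketch deliberately trips it; CPython raises a `RecursionError` rather than crashing outright:

```python
import sys

def count_down(n):
    """Recurse n levels deep with no work per frame."""
    if n == 0:
        return 0
    return count_down(n - 1)

print(sys.getrecursionlimit())    # 1000 by default in CPython

try:
    count_down(100_000)           # far deeper than the default limit
except RecursionError:
    print("hit the recursion limit before reaching the base case")
```

The limit can be raised with `sys.setrecursionlimit()`, but a deep enough recursion will still exhaust the underlying thread stack, so rewriting the algorithm iteratively is usually the safer fix.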

Tail recursion: A special form where the recursive call is the last operation in the function. Some compilers and interpreters can optimize tail-recursive functions into iterative loops, eliminating the extra stack frames entirely. This is called tail-call optimization (TCO). Languages like Scheme guarantee TCO, while Python does not support it natively.
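As a sketch, here is a tail-recursive factorial alongside the loop a TCO-capable compiler would effectively turn it into (since CPython performs no such optimization, the recursive version still consumes one stack frame per call):

```python
def factorial_tail(n, acc=1):
    """Tail-recursive factorial: the recursive call is the last operation."""
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)   # no pending work after this call

def factorial_loop(n):
    """The iterative form a tail-call-optimizing compiler would produce."""
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

print(factorial_tail(5), factorial_loop(5))   # 120 120
```

The accumulator parameter `acc` is the key transformation: it carries the partial product down the calls, so nothing remains to be multiplied on the way back up.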

Interactive Visualization

Watch the recursive call tree for fib(n) being built step by step. Each node represents a function call. Yellow nodes are currently being computed, and green nodes have returned their value. Observe how the naive approach recomputes the same sub-problems many times.


Code Implementation

Below are implementations of three classic recursive functions: naive Fibonacci, factorial, and memoized Fibonacci. The memoized version eliminates redundant computation by caching previously computed results.

Python

def fibonacci(n):
    """Naive recursive Fibonacci - O(2^n) time."""
    if n <= 0:
        return 0
    if n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)


def factorial(n):
    """Recursive factorial - O(n) time."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)


def fibonacci_memo(n, memo=None):
    """Memoized Fibonacci - O(n) time, O(n) space."""
    if memo is None:
        memo = {}
    if n <= 0:
        return 0
    if n == 1:
        return 1
    if n in memo:
        return memo[n]
    memo[n] = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
    return memo[n]


# Example usage
print(fibonacci(10))        # Output: 55
print(factorial(5))         # Output: 120
print(fibonacci_memo(50))   # Output: 12586269025 (instant with memoization)

Java

import java.util.HashMap;
import java.util.Map;

public class Recursion {
    /** Naive recursive Fibonacci - O(2^n) time */
    public static int fibonacci(int n) {
        if (n <= 0) return 0;
        if (n == 1) return 1;
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

    /** Recursive factorial - O(n) time */
    public static long factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    /** Memoized Fibonacci - O(n) time, O(n) space */
    private static Map<Integer, Long> memo = new HashMap<>();

    public static long fibonacciMemo(int n) {
        if (n <= 0) return 0;
        if (n == 1) return 1;
        if (memo.containsKey(n)) return memo.get(n);
        long result = fibonacciMemo(n - 1) + fibonacciMemo(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10));       // Output: 55
        System.out.println(factorial(5));         // Output: 120
        System.out.println(fibonacciMemo(50));    // Output: 12586269025
    }
}

C++

#include <iostream>
#include <unordered_map>
using namespace std;

/** Naive recursive Fibonacci - O(2^n) time */
int fibonacci(int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

/** Recursive factorial - O(n) time */
long long factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}

/** Memoized Fibonacci - O(n) time, O(n) space */
unordered_map<int, long long> memo;

long long fibonacciMemo(int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    if (memo.count(n)) return memo[n];
    memo[n] = fibonacciMemo(n - 1) + fibonacciMemo(n - 2);
    return memo[n];
}

int main() {
    cout << fibonacci(10) << endl;        // Output: 55
    cout << factorial(5) << endl;          // Output: 120
    cout << fibonacciMemo(50) << endl;     // Output: 12586269025
    return 0;
}

Complexity Analysis

The complexity of recursive algorithms depends heavily on the number of recursive calls and whether sub-problems overlap. Naive Fibonacci is a classic example of exponential blowup due to redundant computation, while memoization reduces it to linear time.

Algorithm            Time Complexity   Space Complexity   Description
Naive Fibonacci      O(2^n)            O(n)               Each call branches into two; call stack depth is n
Memoized Fibonacci   O(n)              O(n)               Each sub-problem computed once; memo table and call stack each use O(n)
Factorial            O(n)              O(n)               Single recursive call per step; call stack depth is n

Why naive Fibonacci is O(2^n): Each call to fib(n) makes two recursive calls, forming a binary tree of calls. The number of nodes in this tree grows exponentially. For example, fib(5) produces 15 function calls, while fib(30) produces over 2.6 million calls.
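The call counts quoted above can be verified with a small instrumented sketch (returning a (value, call count) pair is an illustrative addition, not part of the standard algorithm):

```python
def fib_calls(n):
    """Return (fib(n), total number of calls made by the naive recursion)."""
    if n <= 0:
        return 0, 1
    if n == 1:
        return 1, 1
    a, ca = fib_calls(n - 1)
    b, cb = fib_calls(n - 2)
    return a + b, ca + cb + 1   # +1 counts this call itself

print(fib_calls(5))    # (5, 15): fib(5) = 5 after 15 calls
print(fib_calls(30))   # (832040, 2692537): over 2.6 million calls
```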

Space complexity insight: Even though the call tree is exponential, the call stack at any point only holds the frames along the current path from root to leaf, which is at most O(n) deep. Memoization adds an O(n) hash table but eliminates the exponential time by ensuring each unique sub-problem is solved only once.

Quiz

Test your understanding of recursion with these questions.

Q1. What are the two essential components of every recursive function?

A loop and a counter
A base case and a recursive case
A stack and a queue
An input array and a return value
Every recursive function requires a base case (the condition that stops recursion and returns a direct answer) and a recursive case (where the function calls itself with a smaller or simpler input). Without a base case, the function would recurse infinitely and cause a stack overflow.

Q2. What is the time complexity of the naive recursive Fibonacci algorithm?

O(n)
O(n log n)
O(2^n)
O(n^2)
The naive recursive Fibonacci makes two recursive calls at each level, creating a binary tree of calls. The total number of calls grows exponentially as O(2^n). This is because the same sub-problems (like fib(3), fib(2), etc.) are recomputed many times. Memoization or dynamic programming reduces this to O(n).

Q3. What happens when a recursive function has no base case?

It causes a stack overflow error
It returns zero
It runs in constant time
It automatically converts to iteration
Without a base case, the function calls itself indefinitely. Each call adds a new frame to the call stack, and when the stack exceeds its memory limit, the program crashes with a stack overflow error (or a RecursionError in Python). The base case is essential to ensure recursion terminates.

Q4. What is tail recursion?

Recursion that processes the last element of an array first
Recursion that uses two base cases
Recursion where the function calls itself at the beginning
Recursion where the recursive call is the last operation in the function
Tail recursion is a special form of recursion where the recursive call is the very last operation the function performs before returning. This means there is no pending work after the recursive call returns. Some compilers can optimize tail-recursive functions into iterative loops (tail-call optimization), eliminating the need for additional stack frames and preventing stack overflow.

Q5. How does memoization improve recursive Fibonacci from O(2^n) to O(n)?

It removes the base case so the function runs faster
It caches results of sub-problems so each is computed only once
It converts the recursion into a for-loop automatically
It reduces the recursion depth to O(log n)
Memoization stores the result of each unique sub-problem (fib(0) through fib(n)) in a cache (dictionary or array). When a sub-problem is encountered again, the cached result is returned in O(1) instead of recomputing it. Since there are only n+1 unique sub-problems, each computed once, the total time becomes O(n). The recursion depth and cache each use O(n) space.