
Dynamic Programming

Dynamic programming is a powerful algorithmic technique that solves complex problems by breaking them down into simpler, overlapping subproblems and storing their solutions to avoid redundant computation. It transforms exponential brute-force approaches into efficient polynomial-time algorithms by exploiting optimal substructure and overlapping subproblems.


How Dynamic Programming Works

Dynamic programming (DP) relies on two key properties to be applicable to a problem:

  • Overlapping Subproblems: The problem can be broken into subproblems that are reused multiple times. Unlike divide-and-conquer (where subproblems are independent), DP subproblems recur repeatedly. For example, computing Fibonacci(5) requires Fibonacci(3) twice and Fibonacci(2) three times in a naive recursive approach.
  • Optimal Substructure: An optimal solution to the problem can be constructed from optimal solutions of its subproblems. This means we can solve each subproblem once, store the result, and combine these results to build the final answer.
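To make the overlap concrete, here is a small sketch that counts how often each argument is recomputed by the naive recursion (the `fib_naive` helper and `calls` counter are illustrative names, not from any library):

```python
from collections import Counter

calls = Counter()  # tracks how many times each argument is computed

def fib_naive(n):
    """Naive recursive Fibonacci with no caching."""
    calls[n] += 1
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(5)
print(calls[3])  # 2 -- F(3) is computed twice
print(calls[2])  # 3 -- F(2) is computed three times
```

The duplicated entries in `calls` are exactly the overlapping subproblems that a memo table or DP table would collapse into a single computation each.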

There are two classic strategies for implementing a DP solution:

  • Memoization (Top-Down): Start with the original problem and recursively break it down. Before computing a subproblem, check if its result is already stored in a cache (usually a dictionary or array). If so, return the cached value immediately. This approach is intuitive because it mirrors the natural recursive structure, but it incurs function-call overhead and uses the call stack.
  • Tabulation (Bottom-Up): Build a table starting from the smallest subproblems and iteratively work up to the desired answer. Each entry in the table is filled exactly once in a predetermined order. This avoids recursion entirely, eliminates stack overhead, and often allows space optimization since you only need the most recent entries.

Consider the classic Fibonacci sequence: F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2). A naive recursive implementation recomputes the same values exponentially many times, resulting in O(2^n) time. With DP, each value F(k) is computed only once and stored, reducing the time complexity to O(n) and space to O(n) (or even O(1) with space optimization).

When to use DP: Look for problems that ask for an optimal value (minimum cost, maximum profit, number of ways), can be defined recursively, and have overlapping recursive calls. Common DP problem categories include knapsack, longest common subsequence, shortest paths, matrix chain multiplication, and coin change.
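As a sketch of one such category, here is a bottom-up take on the coin change problem (fewest coins summing to a given amount). The name `min_coins` is illustrative, and the recurrence assumes an unlimited supply of each denomination:

```python
def min_coins(coins, amount):
    """Return the minimum number of coins needed to make `amount`,
    or -1 if it cannot be made. dp[a] = fewest coins summing to a."""
    INF = float("inf")
    dp = [0] + [INF] * amount  # base case: 0 coins make amount 0
    for a in range(1, amount + 1):
        for c in coins:
            # Using coin c means solving the smaller subproblem a - c.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 3, 4], 6))  # 2 (3 + 3)
print(min_coins([2], 3))        # -1 (impossible)
```

Note the same DP ingredients as Fibonacci: each `dp[a]` is an optimal answer to a smaller instance (optimal substructure), and many amounts reuse the same smaller entries (overlapping subproblems).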

Interactive Visualization

Watch how dynamic programming computes Fibonacci numbers step by step using the bottom-up tabulation approach. Each cell in the table is filled exactly once by summing the two preceding values. Observe how the algorithm avoids redundant computation by storing and reusing previously computed results.


Code Implementation

Below are two DP approaches for computing Fibonacci numbers: memoization (top-down with caching) and tabulation (bottom-up iterative). Both achieve O(n) time, but tabulation avoids recursion overhead and is generally preferred in practice.

Python:

# --- Memoization (Top-Down) ---
def fib_memo(n, memo=None):
    """Compute Fibonacci(n) using top-down memoization."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n

    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]


# --- Tabulation (Bottom-Up) ---
def fib_tab(n):
    """Compute Fibonacci(n) using bottom-up tabulation."""
    if n <= 1:
        return n

    dp = [0] * (n + 1)
    dp[0] = 0
    dp[1] = 1

    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]

    return dp[n]


# Example usage
print(fib_memo(10))  # Output: 55
print(fib_tab(10))   # Output: 55

Java:

import java.util.HashMap;
import java.util.Map;

public class DynamicProgramming {

    // --- Memoization (Top-Down) ---
    private static Map<Integer, Long> memo = new HashMap<>();

    public static long fibMemo(int n) {
        if (memo.containsKey(n)) {
            return memo.get(n);
        }
        if (n <= 1) {
            return n;
        }
        long result = fibMemo(n - 1) + fibMemo(n - 2);
        memo.put(n, result);
        return result;
    }

    // --- Tabulation (Bottom-Up) ---
    public static long fibTab(int n) {
        if (n <= 1) {
            return n;
        }
        long[] dp = new long[n + 1];
        dp[0] = 0;
        dp[1] = 1;

        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }

    public static void main(String[] args) {
        System.out.println(fibMemo(10));  // Output: 55
        System.out.println(fibTab(10));   // Output: 55
    }
}

C++:

#include <iostream>
#include <vector>
#include <unordered_map>
using namespace std;

// --- Memoization (Top-Down) ---
unordered_map<int, long long> memo;

long long fibMemo(int n) {
    if (memo.count(n)) {
        return memo[n];
    }
    if (n <= 1) {
        return n;
    }
    memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
    return memo[n];
}

// --- Tabulation (Bottom-Up) ---
long long fibTab(int n) {
    if (n <= 1) {
        return n;
    }
    vector<long long> dp(n + 1);
    dp[0] = 0;
    dp[1] = 1;

    for (int i = 2; i <= n; i++) {
        dp[i] = dp[i - 1] + dp[i - 2];
    }
    return dp[n];
}

int main() {
    cout << fibMemo(10) << endl;  // Output: 55
    cout << fibTab(10) << endl;   // Output: 55
    return 0;
}

Complexity Analysis

The table below compares the complexity of the naive recursive Fibonacci approach against the two dynamic programming strategies. DP transforms an exponential problem into a linear one by eliminating redundant computation.

Approach | Time Complexity | Space Complexity | Description
Naive Recursive | O(2^n) | O(n) | Recomputes the same subproblems exponentially many times; space is due to the recursion call stack
Memoization (Top-Down) | O(n) | O(n) | Each subproblem is solved once and cached; space for the memo table plus the recursion stack
Tabulation (Bottom-Up) | O(n) | O(n) | Fills a table iteratively from base cases; no recursion overhead, and can be optimized to O(1) space

For the tabulation approach, space can be further optimized to O(1) by keeping only the last two computed values instead of the entire table. This optimization is possible whenever each state depends only on a fixed number of previous states.
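That optimization might be sketched as follows (`fib_const_space` is an illustrative name): instead of the full dp array, only the last two values are retained.

```python
def fib_const_space(n):
    """Compute Fibonacci(n) in O(n) time and O(1) space."""
    if n <= 1:
        return n
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(2, n + 1):
        # Slide the two-value window forward one step.
        prev, curr = curr, prev + curr
    return curr

print(fib_const_space(10))  # 55
```

The same rolling-window idea applies to any tabulation where row i depends only on rows i-1 and i-2 (or, more generally, a fixed number of prior rows).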

Quiz

Test your understanding of dynamic programming concepts with these questions.

Q1. What are the two key properties a problem must have for dynamic programming to be applicable?

  • Sorted input and unique elements
  • Overlapping subproblems and optimal substructure
  • Divide-and-conquer structure and greedy choice property
  • Constant-time operations and linear memory usage

Dynamic programming requires overlapping subproblems (the same subproblems are solved multiple times) and optimal substructure (an optimal solution can be built from optimal solutions to subproblems). Without overlapping subproblems, memoization provides no benefit. Without optimal substructure, combining subproblem solutions would not yield a globally optimal answer.

Q2. What is the time complexity of computing Fibonacci(n) using a naive recursive approach without memoization?

  • O(n)
  • O(n log n)
  • O(2^n)
  • O(n^2)

The naive recursive Fibonacci makes two recursive calls at each level, and the recursion tree has depth n. This leads to roughly 2^n total calls because the same subproblems (like F(3), F(4), etc.) are recomputed many times without caching. Dynamic programming eliminates this redundancy, reducing the time to O(n).

Q3. What is the main difference between memoization and tabulation?

  • Memoization is top-down with recursion; tabulation is bottom-up with iteration
  • Memoization always uses less memory than tabulation
  • Tabulation cannot solve problems that memoization can
  • Memoization is always faster than tabulation

Memoization takes a top-down approach: it starts from the original problem, recursively breaks it down, and caches results to avoid recomputation. Tabulation takes a bottom-up approach: it starts from the smallest base cases and iteratively builds up to the desired answer. Both achieve the same time complexity, but tabulation avoids recursion overhead and is often easier to optimize for space.

Q4. In the bottom-up Fibonacci tabulation, what is the minimum space complexity achievable?

  • O(n^2)
  • O(n log n)
  • O(n)
  • O(1)

Since each Fibonacci value depends only on the previous two values, we do not need to store the entire table. By keeping just two variables (the last two computed values) and updating them iteratively, we can reduce space from O(n) to O(1). This space optimization technique applies whenever the current state depends on a fixed, constant number of previous states.

Q5. Which of the following problems is NOT typically solved using dynamic programming?

  • Longest Common Subsequence
  • Finding the median of an unsorted array
  • 0/1 Knapsack Problem
  • Coin Change Problem

Finding the median of an unsorted array is a selection problem typically solved with the Quickselect algorithm (average O(n) time) and does not exhibit the overlapping subproblems property needed for DP. In contrast, Longest Common Subsequence, 0/1 Knapsack, and Coin Change are all classic DP problems with well-known recurrence relations and overlapping subproblem structures.