1. Introduction to DP

What is DP?

Dynamic programming (DP) is a powerful algorithm design technique that combines the correctness of brute force with the efficiency of greedy algorithms.

The fundamental idea of dynamic programming is to remember the solutions to subproblems and reuse them to solve larger problems. DP involves identifying and solving all relevant subproblems.

Common use cases for dynamic programming include:

  1. Finding an optimal solution, such as determining a minimum or maximum answer to a question.
  2. Counting the total number of solutions that a problem may have.
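Both use cases can be illustrated with the classic coin-change problem. The sketch below is a hypothetical example (the coin denominations and function names are assumptions, not from the source): one function finds an optimal (minimum) answer, the other counts solutions.

```python
# Hypothetical coin-change sketch illustrating the two DP use cases above.
def min_coins(coins, amount):
    """Optimization DP: fewest coins that sum to `amount`."""
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

def count_ways(coins, amount):
    """Counting DP: number of coin combinations that sum to `amount`."""
    ways = [1] + [0] * amount            # ways[a] = combinations summing to a
    for c in coins:                      # outer loop over coins avoids counting reorderings
        for a in range(c, amount + 1):
            ways[a] += ways[a - c]
    return ways[amount]

print(min_coins([1, 2, 5], 11))   # 3 coins (5 + 5 + 1)
print(count_ways([1, 2, 5], 11))  # 11 distinct combinations
```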

A primary challenge when designing a DP solution is figuring out which subproblems can help solve the main problem. For simpler problems, the subproblems often have the same form as the actual problem.

DP Methodologies

Dynamic programming problems can generally be solved using one of two approaches:

  1. Top-Down Approach (Memoization)
    This approach begins with the final problem and recursively computes the subproblems as they are needed.
    • Memoization is the technique used here, from "memo", meaning "remembering". The idea is to remember the computation of each value (like a Fibonacci number) and compute it at most once.
    • This is typically achieved by storing the computed value in a dictionary, often called a memo.
    • When the function is called, it first checks whether the value is already in the dictionary; if so, the computation is replaced by a single dictionary lookup, avoiding the need to run the entire recursive computation again. This significantly speeds up solutions, such as converting an exponential-time recursive Fibonacci solution into a linear-time memoized one.
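    A minimal Python sketch of the memoized Fibonacci computation described above (the function name and memo handling are illustrative choices):

```python
# Top-down (memoized) Fibonacci: each value is computed at most once.
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # single dictionary lookup replaces the recursion
    if n <= 1:
        return n                # base cases
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # 12586269025 — the plain recursive version would be far too slow
```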

  2. Bottom-Up Approach (Tabulation)
    This approach computes the subproblems first, starting from the base cases, and works up to the final solution.

    • Most people prefer the bottom-up approach because it typically uses iteration (such as a for loop) instead of recursion, avoiding function-call overhead, so it is often more efficient in practice.
    • It may also allow for savings in memory in some cases (e.g., when solving the Fibonacci problem, only the last two values need to be remembered).
    • When using the bottom-up method, attention must be paid to the order in which subproblems are solved: a subproblem's dependencies must be solved first. If each subproblem is viewed as a node with edges representing its dependencies, the subproblems must be solved in topological order.
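    The memory-saving point above can be sketched with a bottom-up Fibonacci that keeps only the last two values (the function name mirrors the top-down sketch and is an illustrative choice):

```python
# Bottom-up (tabulated) Fibonacci: subproblems are solved in order
# from the base cases up, and only the last two values are kept,
# so memory use is O(1) instead of O(n).
def fib(n):
    prev, curr = 0, 1           # fib(0), fib(1)
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

print(fib(50))  # 12586269025, computed with a simple for loop
```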

    While the bottom-up approach has a lower constant factor and is slightly shorter to implement, thinking about DP problems in terms of recursive functions is often easier during the conceptualization phase.