

Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to sub-problems. (“Programming” in this context refers to a tabular method, not to writing computer code.) Divide-and-conquer algorithms partition the problem into disjoint sub-problems, solve the sub-problems recursively, and then combine their solutions to solve the original problem.


In contrast, dynamic programming applies when the sub-problems overlap, that is, when sub-problems share sub-sub-problems. In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common sub-sub-problems.


A dynamic-programming algorithm solves each sub-sub-problem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time it solves each sub-sub-problem. We typically apply dynamic programming to optimization problems.
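As a minimal sketch of this idea, assuming Python, the Fibonacci numbers show the contrast: the naive divide-and-conquer recursion repeatedly solves the same sub-sub-problems, while the dynamic-programming version solves each sub-problem once and saves its answer in a table.

```python
# Naive divide-and-conquer: recomputes the same sub-problems many times.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic programming: solve each sub-problem once, bottom-up,
# saving its answer in a table so it is never recomputed.
def fib_dp(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

The naive version takes exponential time because the recursion tree repeats work; the table-based version runs in linear time.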


Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.


When developing a dynamic-programming algorithm, we follow a sequence of four steps:

1. Characterize the structure of an optimal solution.

2. Recursively define the value of an optimal solution.

3. Compute the value of an optimal solution, typically in a bottom-up fashion.

4. Construct an optimal solution from computed information.
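The four steps above can be sketched on the 0/1 knapsack problem (listed among the examples below). This is a hedged illustration, not a definitive implementation: the optimal structure (step 1) is that an optimal packing either skips the last item or takes it, the recurrence (step 2) is encoded in the table update, the table is filled bottom-up (step 3), and a traceback recovers the chosen items (step 4).

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    # Steps 2 and 3: table[i][w] holds the best value achievable
    # using only the first i items within total weight w, filled bottom-up.
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            table[i][w] = table[i - 1][w]          # option 1: skip item i
            if weights[i - 1] <= w:                 # option 2: take item i
                take = table[i - 1][w - weights[i - 1]] + values[i - 1]
                table[i][w] = max(table[i][w], take)
    # Step 4: construct an optimal solution from the computed table
    # by tracing which choice produced each entry.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if table[i][w] != table[i - 1][w]:          # item i was taken
            chosen.append(i - 1)
            w -= weights[i - 1]
    return table[n][capacity], chosen[::-1]
```

For example, with weights `[2, 3, 4]`, values `[3, 4, 5]`, and capacity `5`, the table yields the optimal value 7 by taking the first two items.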



Example: The following are some problems that can be solved by a dynamic-programming approach.

  1. Computing a binomial coefficient
  2. Travelling Salesman problem
  3. Knapsack Problem
  4. Fibonacci Series
  5. Constructing an optimal binary search tree
  6. Floyd’s algorithm for all-pairs shortest paths
  7. Scheduling problems, etc.
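Taking the first example from the list, a binomial coefficient can be computed by dynamic programming using Pascal's rule, C(n, k) = C(n-1, k-1) + C(n-1, k). The sketch below, assuming Python, keeps only one row of Pascal's triangle, updating it right-to-left so each entry still refers to the previous row's values.

```python
def binomial(n, k):
    # C(n, k) via Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k).
    if k < 0 or k > n:
        return 0
    # row[j] holds C(i, j) for the current i; updating right-to-left
    # means row[j - 1] is still the previous row's value when read.
    row = [1] + [0] * k
    for i in range(1, n + 1):
        for j in range(min(i, k), 0, -1):
            row[j] += row[j - 1]
    return row[k]
```

Each entry is computed once from saved sub-problem answers, so the running time is O(nk) with O(k) space, instead of the exponential cost of the plain recursive definition.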