15.3 Elements of dynamic programming

15.3-1

Which is a more efficient way to determine the optimal number of multiplications in a matrix-chain multiplication problem: enumerating all the ways of parenthesizing the product and computing the number of multiplications for each, or running $\text{RECURSIVE-MATRIX-CHAIN}$? Justify your answer.

Running $\text{RECURSIVE-MATRIX-CHAIN}$ is asymptotically more efficient than enumerating all the ways of parenthesizing the product and computing the number of multiplications for each.

Consider the treatment of subproblems by the two approaches.

  • For each possible place to split the matrix chain, the enumeration approach finds all ways to parenthesize the left half, finds all ways to parenthesize the right half, and looks at all possible combinations of the left half with the right half. The amount of work to look at each combination of left- and right-half subproblem results is thus the product of the number of ways to do the left half and the number of ways to do the right half.
  • For each possible place to split the matrix chain, $\text{RECURSIVE-MATRIX-CHAIN}$ finds the best way to parenthesize the left half, finds the best way to parenthesize the right half, and combines just those two results. Thus the amount of work to combine the left- and right-half subproblem results is $O(1)$.
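To make the contrast concrete, here is a minimal Python sketch (illustrative only, not the book's pseudocode): `recursive_matrix_chain` mirrors the recursion and does $O(1)$ work to combine the two best halves, while `num_parenthesizations` counts how many left/right combinations enumeration must examine.

```python
# Sketch contrasting the two approaches (illustrative, not the book's code).

def recursive_matrix_chain(p, i, j):
    """Best cost for A_i..A_j, where A_k has dimensions p[k-1] x p[k]."""
    if i == j:
        return 0
    best = float('inf')
    for k in range(i, j):                              # every split point
        cost = (recursive_matrix_chain(p, i, k)        # best left half
                + recursive_matrix_chain(p, k + 1, j)  # best right half
                + p[i - 1] * p[k] * p[j])              # O(1) to combine
        best = min(best, cost)
    return best

def num_parenthesizations(n):
    """How many parenthesizations enumeration examines: for each split,
    the product of the left- and right-half counts."""
    if n == 1:
        return 1
    return sum(num_parenthesizations(k) * num_parenthesizations(n - k)
               for k in range(1, n))
```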

Section 15.2 argued that the running time for enumeration is $\Omega(4^n / n^{3 / 2})$. We will show that the running time for $\text{RECURSIVE-MATRIX-CHAIN}$ is $O(n3^{n - 1})$.

To get an upper bound on the running time of $\text{RECURSIVE-MATRIX-CHAIN}$, we'll use the same approach used in Section 15.2 to get a lower bound: Derive a recurrence of the form $T(n) \le \ldots$ and solve it by substitution. For the lower-bound recurrence, the book assumed that the execution of lines 1–2 and 6–7 each take at least unit time. For the upper-bound recurrence, we'll assume those pairs of lines each take at most constant time $c$. Thus, we have the recurrence

$$ T(n) \le \begin{cases} c & \text{if $n = 1$}, \\ c + \sum_{k = 1}^{n - 1} (T(k) + T(n - k) + c) & \text{if $n \ge 2$}. \\ \end{cases} $$

This is just like the book's $\ge$ recurrence except that it has $c$ instead of $1$, and so we can rewrite it as

$$T(n) \le 2 \sum_{i = 1}^{n - 1} T(i) + cn.$$
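As a sanity check before the formal proof, this simplified recurrence can be computed directly and compared against the bound we are about to claim (a sketch taking $c = 1$):

```python
# Numeric check of T(n) <= c * n * 3^(n-1) for the simplified recurrence, c = 1.
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    return 2 * sum(T(i) for i in range(1, n)) + n

for n in range(1, 20):
    assert T(n) <= n * 3 ** (n - 1)
```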

We shall prove that $T(n) = O(n3^{n - 1})$ using the substitution method. (Note: Any upper bound on $T(n)$ that is $o(4^n / n^{3 / 2})$ will suffice. You might prefer to prove one that is easier to think up, such as $T(n) = O(3.5^n)$.) Specifically, we shall show that $T(n) \le cn3^{n - 1}$ for all $n \ge 1$. The basis is easy, since $T(1) \le c = c \cdot 1 \cdot 3^{1 - 1}$.

Inductively, for $n \ge 2$ we have

$$ \begin{aligned} T(n) & \le 2\sum_{i = 1}^{n - 1} T(i) + cn \\ & \le 2\sum_{i = 1}^{n - 1} ci 3^{i - 1} + cn \\ & \le c \cdot \bigg(2\sum_{i = 1}^{n - 1}i 3^{i - 1} + n\bigg) \\ & = c \cdot \bigg(2\cdot\bigg(\frac{n 3^{n - 1}}{3 - 1} + \frac{1 - 3^n}{(3 - 1)^2}\bigg) + n\bigg) & \text{(see below)} \\ & = cn 3^{n - 1} + c\cdot\bigg(\frac{1 - 3^n}{2} + n\bigg) \\ & = cn 3^{n - 1} + \frac{c}{2}(2n + 1 - 3^n) \\ & \le cn 3^{n - 1} & \text{for all $c > 0, n \ge 1$}. \end{aligned} $$

Running $\text{RECURSIVE-MATRIX-CHAIN}$ takes $O(n3^{n - 1})$ time, while enumerating all parenthesizations takes $\Omega(4^n / n^{3 / 2})$ time, and so $\text{RECURSIVE-MATRIX-CHAIN}$ is more efficient than enumeration.

Note: The above substitution uses the fact that

$$\sum_{i = 1}^{n - 1} ix^{i - 1} = \frac{nx^{n - 1}}{x - 1} + \frac{1 - x^n}{(x - 1)^2}.$$

This equation can be derived from equation $\text{(A.5)}$ by taking the derivative. Let

$$f(x) = \sum_{i = 1}^{n - 1} x^i = \frac{x^n - 1}{x - 1} - 1.$$

Then

$$\sum_{i = 1}^{n - 1} ix^{i - 1} = f'(x) = \frac{nx^{n - 1}}{x - 1} + \frac{1 - x^n}{(x - 1)^2}.$$
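The identity is easy to spot-check with exact rational arithmetic; a quick sketch:

```python
from fractions import Fraction

# Spot-check: sum_{i=1}^{n-1} i*x^(i-1) == n*x^(n-1)/(x-1) + (1-x^n)/(x-1)^2.
def lhs(n, x):
    return sum(i * x ** (i - 1) for i in range(1, n))

def rhs(n, x):
    return n * x ** (n - 1) / (x - 1) + (1 - x ** n) / (x - 1) ** 2

for x in (Fraction(3), Fraction(5, 2)):
    for n in range(2, 10):
        assert lhs(n, x) == rhs(n, x)
```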

15.3-2

Draw the recursion tree for the $\text{MERGE-SORT}$ procedure from Section 2.3.1 on an array of $16$ elements. Explain why memoization fails to speed up a good divide-and-conquer algorithm such as $\text{MERGE-SORT}$.

Draw a recursion tree: the root is the call on the full array of $16$ elements, which splits into calls on two subarrays of $8$ elements, then $4$, $2$, and $1$, for $31$ nodes in all. The $\text{MERGE-SORT}$ procedure makes at most one call on any given pair of indices of the array being sorted. In other words, the subproblems do not overlap, and therefore memoization will not improve the running time.
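A short sketch makes the non-overlap explicit: record every subproblem that $\text{MERGE-SORT}$ would visit on a $16$-element array and confirm that none repeats.

```python
# Record every subproblem (i, j) visited on indices 1..16 and confirm that
# no subarray is ever solved twice: the subproblems are disjoint.
calls = []

def merge_sort_spans(i, j):
    calls.append((i, j))
    if i < j:
        mid = (i + j) // 2
        merge_sort_spans(i, mid)
        merge_sort_spans(mid + 1, j)

merge_sort_spans(1, 16)
assert len(calls) == 31                # the full recursion tree
assert len(calls) == len(set(calls))   # every subproblem is distinct
```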

15.3-3

Consider a variant of the matrix-chain multiplication problem in which the goal is to parenthesize the sequence of matrices so as to maximize, rather than minimize, the number of scalar multiplications. Does this problem exhibit optimal substructure?

Yes, this problem also exhibits optimal substructure. If we know that an optimal parenthesization requires the subproduct $(A_l \cdot A_r)$, then it must compute that subproduct in the most expensive way possible; otherwise, we could substitute the most expensive way of computing it and obtain an even more expensive overall parenthesization, contradicting optimality.
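A minimal sketch of the resulting recursion, assuming the dimension array $p$ as in the book: it is the same recurrence as the minimization problem, with $\max$ in place of $\min$.

```python
from functools import lru_cache

def max_matrix_chain(p):
    """Maximum number of scalar multiplications for A_1..A_n,
    where A_k has dimensions p[k-1] x p[k]."""
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:
            return 0
        return max(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return m(1, n)

# e.g., for the dimensions <10, 100, 5, 50>, the maximum is
# (A_1 (A_2 A_3)): 100*5*50 + 10*100*50 = 75000
assert max_matrix_chain([10, 100, 5, 50]) == 75000
```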

15.3-4

As stated, in dynamic programming we first solve the subproblems and then choose which of them to use in an optimal solution to the problem. Professor Capulet claims that we do not always need to solve all the subproblems in order to find an optimal solution. She suggests that we can find an optimal solution to the matrix-chain multiplication problem by always choosing the matrix $A_k$ at which to split the subproduct $A_i A_{i + 1} \cdots A_j$ (by selecting $k$ to minimize the quantity $p_{i - 1} p_k p_j$) before solving the subproblems. Find an instance of the matrix-chain multiplication problem for which this greedy approach yields a suboptimal solution.

Suppose that we are given matrices $A_1$, $A_2$, $A_3$, and $A_4$ with dimensions such that

$$\langle p_0, p_1, p_2, p_3, p_4 \rangle = \langle 1000, 100, 20, 10, 1000 \rangle.$$

Then $p_0 p_k p_4$ is minimized when $k = 3$, so we need to solve the subproblem of multiplying $A_1 A_2 A_3$ (the other subproblem, the single matrix $A_4$, is trivial). By her algorithm, the subproblem $A_1 A_2 A_3$ is split at $k = 2$, since $p_0 p_k p_3$ is minimized there. Thus, the full parenthesization is $(((A_1 A_2) A_3) A_4)$.

This requires

$$1000 \cdot 100 \cdot 20 + 1000 \cdot 20 \cdot 10 + 1000 \cdot 10 \cdot 1000 = 12{,}200{,}000$$

scalar multiplications.

On the other hand, suppose we had fully parenthesized the matrices to multiply as $((A_1(A_2A_3))A_4)$. Then we would only require

$$100 \cdot 20 \cdot 10 + 1000 \cdot 100 \cdot 10 + 1000 \cdot 10 \cdot 1000 = 11{,}020{,}000$$

scalar multiplications, which is fewer than Professor Capulet's method requires.
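Both counts are easy to verify directly:

```python
p = [1000, 100, 20, 10, 1000]   # p_0 .. p_4

# (((A1 A2) A3) A4): Capulet's greedy parenthesization
greedy = p[0]*p[1]*p[2] + p[0]*p[2]*p[3] + p[0]*p[3]*p[4]
# ((A1 (A2 A3)) A4): a cheaper parenthesization
better = p[1]*p[2]*p[3] + p[0]*p[1]*p[3] + p[0]*p[3]*p[4]

assert greedy == 12_200_000
assert better == 11_020_000
```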

Therefore her greedy approach yields a suboptimal solution.

15.3-5

Suppose that in the rod-cutting problem of Section 15.1, we also had a limit $l_i$ on the number of pieces of length $i$ that we are allowed to produce, for $i = 1, 2, \ldots, n$. Show that the optimal-substructure property described in Section 15.1 no longer holds.

We say that a problem exhibits the optimal substructure property when optimal solutions to a problem incorporate optimal solutions to related subproblems, which we may solve independently (i.e., they do not share resources). When we impose a limit $l_i$ on the number of pieces of size $i$ that we are permitted to produce, the subproblems can no longer be solved independently. For example, consider a rod of length $4$ with the following prices and limits:

$$ \begin{array}{c|cccc} \text{length $i$} & 1 & 2 & 3 & 4 \\ \hline \text{price $p_i$} & 15 & 20 & 33 & 36 \\ \text{limit $l_i$} & 2 & 1 & 1 & 1 \end{array} $$

This instance has only three solutions that do not violate the limits: length $4$ with price $36$; lengths $1$ and $3$ with price $48$; and lengths $1$, $1$, and $2$ with price $50$. The optimal solution, therefore, is to cut into lengths $1$, $1$, and $2$. When we look at the subproblem for length $2$, it has two solutions that do not violate the limits: length $2$ with price $20$, and lengths $1$ and $1$ with price $30$. The optimal solution for length $2$, therefore, is to cut into lengths $1$ and $1$. But we cannot use this optimal subproblem solution in the optimal solution for the original problem, because doing so would use four pieces of length $1$, violating the limit of two length-$1$ pieces.
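A brute-force check of this instance confirms the clash between the global optimum and the subproblem optimum (a sketch; `best` is a hypothetical helper that enumerates cuts within the limits):

```python
# Instance from the table above.
price = {1: 15, 2: 20, 3: 33, 4: 36}
limit = {1: 2, 2: 1, 3: 1, 4: 1}

def best(n, limit):
    """Maximum revenue for a rod of length n under the piece limits."""
    if n == 0:
        return 0, []
    best_val, best_cut = -1, None
    for i in range(1, n + 1):
        if limit[i] > 0:
            sub = dict(limit)
            sub[i] -= 1                  # consume one piece of length i
            val, cut = best(n - i, sub)
            if val + price[i] > best_val:
                best_val, best_cut = val + price[i], [i] + cut
    return best_val, best_cut

print(best(4, limit))   # (50, [1, 1, 2]): uses both length-1 pieces
print(best(2, limit))   # (30, [1, 1]): incompatible with the solution above
```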

15.3-6

Imagine that you wish to exchange one currency for another. You realize that instead of directly exchanging one currency for another, you might be better off making a series of trades through other currencies, winding up with the currency you want. Suppose that you can trade $n$ different currencies, numbered $1, 2, \ldots, n$, where you start with currency $1$ and wish to wind up with currency $n$. You are given, for each pair of currencies $i$ and $j$ , an exchange rate $r_{ij}$, meaning that if you start with $d$ units of currency $i$ , you can trade for $dr_{ij}$ units of currency $j$. A sequence of trades may entail a commission, which depends on the number of trades you make. Let $c_k$ be the commission that you are charged when you make $k$ trades. Show that, if $c_k = 0$ for all $k = 1, 2, \ldots, n$, then the problem of finding the best sequence of exchanges from currency $1$ to currency $n$ exhibits optimal substructure. Then show that if commissions $c_k$ are arbitrary values, then the problem of finding the best sequence of exchanges from currency $1$ to currency $n$ does not necessarily exhibit optimal substructure.

Any solution must add the assumption that no currency can be repeated in a sequence of trades. Without this assumption, if $r_{ij} > 1 / r_{ji}$ for some currencies $i$ and $j$, we could repeatedly exchange $i \to j \to i \to j \to \cdots$ and make an unbounded profit.

To see that this problem has optimal substructure when $c_k = 0$ for all $k$, observe that the problem of exchanging currency $a$ for currency $b$ is equivalent to finding a sequence of currencies $k_1, k_2, \ldots, k_m$ such that $k_1 = a$, $k_m = b$, and the product $r_{k_1 k_2} r_{k_2 k_3} \cdots r_{k_{m - 1} k_m}$ is maximized.

We use the usual cut-and-paste argument. Suppose that an optimal solution contains a sequence $\langle k_i, k_{i + 1}, \ldots, k_j \rangle$ of currencies, and suppose that there exists a sequence $\langle k_i', k_{i + 1}', \ldots, k_j' \rangle$, such that $k_i' = k_i$, $k_j' = k_j$, and $r_{k_i' k_{i + 1}'} \cdots r_{k_{j - 1}' k_j'} > r_{k_i k_{i + 1}} \cdots r_{k_{j - 1}k_j}$. Then we could substitute the sequence $\langle k_i', k_{i + 1}', \ldots, k_j' \rangle$ for the sequence $\langle k_i, k_{i + 1}, \ldots, k_j \rangle$ in the optimal solution to create an even better solution.
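A minimal sketch of the zero-commission problem, assuming the rates are given as a dictionary `r` keyed by currency pairs (a hypothetical representation): the payoff of a sequence is just the product of its rates, maximized over simple sequences.

```python
def best_product(r, a, b, visited=None):
    """Max product of rates over sequences a -> ... -> b with no repeated
    currency; r maps (i, j) to the exchange rate r_ij. Commissions are zero."""
    if visited is None:
        visited = {a}
    if a == b:
        return 1                      # empty product: no trade needed
    best = 0
    for (i, j), rate in r.items():
        if i == a and j not in visited:
            best = max(best, rate * best_product(r, j, b, visited | {j}))
    return best
```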

We show that optimal substructure does not hold when the $c_k$ are arbitrary values by means of an example. Suppose we have four currencies, with the following exchange rates:

$$ \begin{array}{c|cccc} r_{ij} & 1 & 2 & 3 & 4 \\ \hline 1 & 1 & 2 & 5 / 2 & 6 \\ 2 & 1 / 2 & 1 & 3 / 2 & 3 \\ 3 & 2 / 5 & 2 / 3 & 1 & 3 \\ 4 & 1 / 6 & 1 / 3 & 1 / 3 & 1 \end{array} $$

Let $c_1 = 2$ and $c_2 = c_3 = 3$. Note that this example is not too badly contrived, in that $r_{ji} = 1 / r_{ij}$ for all $i$ and $j$.

To see how this example does not exhibit optimal substructure, let's examine an optimal solution for exchanging currency $1$ for currency $4$. There are five possible exchange sequences, with the following costs:

$$ \begin{array}{lll} \langle 1, 4 \rangle & : 6 - 2 & = 4, \\ \langle 1, 2, 4 \rangle & : 2 \cdot 3 - 3 & = 3, \\ \langle 1, 3, 4 \rangle & : 5 / 2 \cdot 3 - 3 & = 9 / 2, \\ \langle 1, 2, 3, 4 \rangle & : 2 \cdot 3 / 2 \cdot 3 - 3 & = 6, \\ \langle 1, 3, 2, 4 \rangle & : 5 / 2 \cdot 2 / 3 \cdot 3 - 3 & = 2. \end{array} $$

The optimal exchange sequence is $\langle 1, 2, 3, 4 \rangle$, with a payoff of $6$.

Let's examine the subproblem of exchanging currency $1$ for currency $3$. Allowing currency $4$ to be part of the exchange sequence, there are again five possible exchange sequences, with the following costs; the optimal one is $\langle 1, 3 \rangle$:

$$ \begin{array}{lll} \langle 1, 3 \rangle & : 5 / 2 - 2 & = 1 / 2, \\ \langle 1, 2, 3 \rangle & : 2 \cdot 3 / 2 - 3 & = 0, \\ \langle 1, 4, 3 \rangle & : 6 \cdot 1 / 3 - 3 & = -1, \\ \langle 1, 2, 4, 3 \rangle & : 2 \cdot 3 \cdot 1 / 3 - 3 & = -1, \\ \langle 1, 4, 2, 3 \rangle & : 6 \cdot 1 / 3 \cdot 3 / 2 - 3 & = 0. \end{array} $$

We see that the solution to the original problem includes the subproblem of exchanging currency $1$ for currency $3$, yet the solution $\langle 1, 2, 3 \rangle$ to the subproblem used in the optimal solution to the original problem is not the optimal solution $\langle 1, 3 \rangle$ to the subproblem on its own.
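Both tables can be reproduced mechanically; a sketch using the rates and commissions above (floating-point rates are accurate enough here):

```python
from itertools import permutations

# Rates from the table above; commissions c_1 = 2, c_2 = c_3 = 3.
r = {(1, 2): 2, (1, 3): 2.5, (1, 4): 6, (2, 3): 1.5, (2, 4): 3, (3, 4): 3,
     (2, 1): 0.5, (3, 1): 0.4, (4, 1): 1/6, (3, 2): 2/3, (4, 2): 1/3, (4, 3): 1/3}
c = {1: 2, 2: 3, 3: 3}

def payoff(seq):
    prod = 1
    for i, j in zip(seq, seq[1:]):
        prod *= r[(i, j)]
    return prod - c[len(seq) - 1]     # commission for len(seq) - 1 trades

def best(a, b, others):
    """Best simple sequence a -> ... -> b through any subset of `others`."""
    seqs = [(a,) + mid + (b,) for k in range(len(others) + 1)
            for mid in permutations(others, k)]
    return max(seqs, key=payoff)

print(best(1, 4, (2, 3)))   # (1, 2, 3, 4): optimal for the full problem
print(best(1, 3, (2, 4)))   # (1, 3): optimal for the subproblem on its own
```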