
25.1 Shortest paths and matrix multiplication

25.1-1

Run $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ on the weighted, directed graph of Figure 25.2, showing the matrices that result for each iteration of the loop. Then do the same for $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$.

  • Initial:

    $$ \begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\ 1 & 0 & \infty & 2 & \infty & \infty \\ \infty & 2 & 0 & \infty & \infty & -8 \\ -4 & \infty & \infty & 0 & 3 & \infty \\ \infty & 7 & \infty & \infty & 0 & \infty \\ \infty & 5 & 10 & \infty & \infty & 0 \end{pmatrix} $$

  • Slow:

    $m = 2$:

    $$ \begin{pmatrix} 0 & 6 & \infty & \infty & -1 & \infty \\ -2 & 0 & \infty & 2 & 0 & \infty \\ 3 & -3 & 0 & 4 & \infty & -8 \\ -4 & 10 & \infty & 0 & -5 & \infty \\ 8 & 7 & \infty & 9 & 0 & \infty \\ 6 & 5 & 10 & 7 & \infty & 0 \end{pmatrix} $$

    $m = 3$:

    $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ -2 & -3 & 0 & -1 & 2 & -8 \\ -4 & 2 & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 5 & 0 \end{pmatrix} $$

    $m = 4$:

    $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ -5 & -3 & 0 & -1 & -3 & -8 \\ -4 & 2 & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$

    $m = 5$:

    $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ -5 & -3 & 0 & -1 & -6 & -8 \\ -4 & 2 & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$

  • Fast:

    $m = 2$:

    $$ \begin{pmatrix} 0 & 6 & \infty & \infty & -1 & \infty \\ -2 & 0 & \infty & 2 & 0 & \infty \\ 3 & -3 & 0 & 4 & \infty & -8 \\ -4 & 10 & \infty & 0 & -5 & \infty \\ 8 & 7 & \infty & 9 & 0 & \infty \\ 6 & 5 & 10 & 7 & \infty & 0 \end{pmatrix} $$

    $m = 4$:

    $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ -5 & -3 & 0 & -1 & -3 & -8 \\ -4 & 2 & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$

    $m = 8$:

    $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ -5 & -3 & 0 & -1 & -6 & -8 \\ -4 & 2 & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$
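As a check on the tables above, the $\text{SLOW}$ iterations can be recomputed mechanically. The following Python sketch assumes only the initial matrix transcribed above from Figure 25.2 (`oo` stands for $\infty$) and performs one min-plus "multiplication" by $W$ per iteration:

    # Reproduce the SLOW-ALL-PAIRS-SHORTEST-PATHS iterations for the matrix above.
    oo = float('inf')

    W = [[ 0, oo, oo, oo, -1, oo],
         [ 1,  0, oo,  2, oo, oo],
         [oo,  2,  0, oo, oo, -8],
         [-4, oo, oo,  0,  3, oo],
         [oo,  7, oo, oo,  0, oo],
         [oo,  5, 10, oo, oo,  0]]

    def extend(L, M):
        # One min-plus "multiplication", as in EXTEND-SHORTEST-PATHS.
        n = len(L)
        return [[min(L[i][k] + M[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    L = W
    for m in range(2, len(W)):          # m = 2, 3, ..., n - 1
        L = extend(L, W)
        print('m =', m)
        for row in L:
            print(row)

The $\text{FASTER}$ matrices follow in the same way by squaring (`extend(L, L)`) and doubling $m$ each iteration.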

25.1-2

Why do we require that $w_{ii} = 0$ for all $1 \le i \le n$?

This is consistent with the fact that the shortest path from a vertex to itself is the empty path of weight $0$. If there were a path from $v_i$ to itself of weight less than $0$, it would have to be a negative-weight cycle, since it starts and ends at $v_i$.

25.1-3

What does the matrix

$$ L^{(0)} = \begin{pmatrix} 0 & \infty & \infty & \cdots & \infty \\ \infty & 0 & \infty & \cdots & \infty \\ \infty & \infty & 0 & \cdots & \infty \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \infty & \infty & \infty & \cdots & 0 \end{pmatrix} $$

used in the shortest-paths algorithms correspond to in regular matrix multiplication?

The matrix $L^{(0)}$ corresponds to the identity matrix

$$ I = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix} $$

of regular matrix multiplication. Substitute $0$ (the identity for $+$) for $\infty$ (the identity for $\min$), and $1$ (the identity for $\cdot$) for $0$ (the identity for $+$).
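A quick numerical illustration of this correspondence (using a small, arbitrarily chosen test matrix `W`): min-plus "multiplying" by $L^{(0)}$ on either side leaves $W$ unchanged, just as $IA = AI = A$ in regular matrix multiplication.

    # L(0) is the identity of min-plus "multiplication", as I is for ordinary multiplication.
    oo = float('inf')

    def extend(A, B):
        n = len(A)
        return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    W = [[0, 3, oo],
         [oo, 0, -2],
         [4, oo, 0]]
    L0 = [[0 if i == j else oo for j in range(3)] for i in range(3)]

    assert extend(L0, W) == W and extend(W, L0) == W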

25.1-4

Show that matrix multiplication defined by $\text{EXTEND-SHORTEST-PATHS}$ is associative.

To verify associativity, we need to check that $(W^iW^j)W^p = W^i(W^jW^p)$ for all $i$, $j$ and $p$, where we use the matrix multiplication defined by the $\text{EXTEND-SHORTEST-PATHS}$ procedure. Consider entry $(a, b)$ of the left hand side. This is:

$$ \begin{aligned} \min_{1 \le k \le n} [W^iW^j]_{a, k} + W_{k, b}^p & = \min_{1 \le k \le n} \min_{1 \le q \le n} W_{a, q}^i + W_{q, k}^j + W_{k, b}^p \\ & = \min_{1 \le q \le n} W_{a, q}^i + \min_{1 \le k \le n} W_{q, k}^j + W_{k, b}^p \\ & = \min_{1 \le q \le n} W_{a, q}^i + [W^jW^p]_{q, b}, \end{aligned} $$

which is precisely entry $(a, b)$ of the right hand side.
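The identity is also easy to spot-check numerically; this Python sketch draws small random integer matrices (an arbitrary test case) and verifies that the min-plus product defined by $\text{EXTEND-SHORTEST-PATHS}$ is associative:

    # Spot-check associativity of the min-plus product on random matrices.
    import random

    def extend(A, B):
        n = len(A)
        return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    n = 5
    rand_matrix = lambda: [[random.randint(-3, 9) for _ in range(n)] for _ in range(n)]
    A, B, C = rand_matrix(), rand_matrix(), rand_matrix()
    assert extend(extend(A, B), C) == extend(A, extend(B, C))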

25.1-5

Show how to express the single-source shortest-paths problem as a product of matrices and a vector. Describe how evaluating this product corresponds to a Bellman-Ford-like algorithm (see Section 24.1).

The all-pairs shortest-paths algorithm in Section 25.1 computes

$$L^{(n - 1)} = W^{n - 1} = L^{(0)} \cdot W^{n - 1},$$

where $l_{ij}^{(n - 1)} = \delta(i, j)$ and $L^{(0)}$ is the identity matrix. That is, the entry in the $i$th row and $j$th column of the matrix "product" is the shortest-path distance from vertex $i$ to vertex $j$, and row $i$ of the product is the solution to the single-source shortest-paths problem for vertex $i$.

Notice that in a matrix "product" $C = A \cdot B$, the $i$th row of $C$ is the $i$th row of $A$ "multiplied" by $B$. Since all we want is the $i$th row of $C$, we never need more than the $i$th row of $A$.

Thus the solution to the single-source shortest-paths from vertex $i$ is $L_i^{(0)} \cdot W^{n - 1}$, where $L_i^{(0)}$ is the $i$th row of $L^{(0)}$—a vector whose $i$th entry is $0$ and whose other entries are $\infty$.

Doing the above "multiplications" starting from the left is essentially the same as the $\text{BELLMAN-FORD}$ algorithm. The vector corresponds to the $d$ values in $\text{BELLMAN-FORD}$—the shortest-path estimates from the source to each vertex.

  • The vector is initially $0$ for the source and $\infty$ for all other vertices, the same as the values set up for $d$ by $\text{INITIALIZE-SINGLE-SOURCE}$.
  • Each "multiplication" of the current vector by $W$ relaxes all edges just as $\text{BELLMAN-FORD}$ does. That is, a distance estimate in the row, say the distance to $v$, is updated to a smaller estimate, if any, formed by adding some $w(u, v)$ to the current estimate of the distance to $u$.
  • The relaxation/multiplication is done $n - 1$ times.
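A minimal sketch of this vector view (assuming the Figure 25.2 matrix transcribed in Exercise 25.1-1, with vertex $1$ as the source): the row vector of estimates is min-plus "multiplied" by $W$ exactly $n - 1$ times, just as $\text{BELLMAN-FORD}$ performs $n - 1$ rounds of relaxation.

    # Single-source shortest paths as a row vector repeatedly "multiplied" by W.
    oo = float('inf')
    W = [[ 0, oo, oo, oo, -1, oo],
         [ 1,  0, oo,  2, oo, oo],
         [oo,  2,  0, oo, oo, -8],
         [-4, oo, oo,  0,  3, oo],
         [oo,  7, oo, oo,  0, oo],
         [oo,  5, 10, oo, oo,  0]]
    n = len(W)

    source = 0                              # vertex 1, 0-indexed
    d = [0 if v == source else oo for v in range(n)]    # as in INITIALIZE-SINGLE-SOURCE

    for _ in range(n - 1):                  # n - 1 rounds of relaxation
        d = [min(d[u] + W[u][v] for u in range(n)) for v in range(n)]

    print(d)                                # shortest-path weights from vertex 1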

25.1-6

Suppose we also wish to compute the vertices on shortest paths in the algorithms of this section. Show how to compute the predecessor matrix $\Pi$ from the completed matrix $L$ of shortest-path weights in $O(n^3)$ time.

For each source vertex $v_i$, we need to compute the shortest-paths tree rooted at $v_i$. To do this, we compute the predecessor $\pi_{ij}$ for each $j \ne i$. For fixed $i$ and $j$, $\pi_{ij}$ is a vertex $k$ such that $l_{ik} + w_{kj} = l_{ij}$. Since there are $n$ vertices whose trees need computing, $n$ vertices in each such tree whose predecessors need computing, and each predecessor takes $O(n)$ time to find (by checking every candidate $k$), the total time is $O(n^3)$.
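A short Python sketch of this computation (the function name is illustrative; `L` is the completed matrix of shortest-path weights, `W` the edge-weight matrix, and `float('inf')` stands for $\infty$):

    # Recover the predecessor matrix from the completed matrix L in O(n^3) time.
    def predecessor_matrix(L, W):
        n = len(L)
        pi = [[None] * n for _ in range(n)]          # None plays the role of NIL
        for i in range(n):
            for j in range(n):
                if i == j or L[i][j] == float('inf'):
                    continue
                # Find some k with L[i][k] + w(k, j) = L[i][j]; that k is j's predecessor.
                for k in range(n):
                    if k != j and L[i][k] + W[k][j] == L[i][j]:
                        pi[i][j] = k
                        break
        return pi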

25.1-7

We can also compute the vertices on shortest paths as we compute the shortest-path weights. Define $\pi_{ij}^{(m)}$ as the predecessor of vertex $j$ on any minimum-weight path from $i$ to $j$ that contains at most $m$ edges. Modify the $\text{EXTEND-SHORTEST-PATHS}$ and $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ procedures to compute the matrices $\Pi^{(1)}, \Pi^{(2)}, \ldots, \Pi^{(n - 1)}$ as the matrices $L^{(1)}, L^{(2)}, \ldots, L^{(n - 1)}$ are computed.

To have the procedure compute the predecessors along shortest paths, see the modified procedures $\text{EXTEND-SHORTEST-PATHS-MOD}$ and $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS-MOD}$ below.

EXTEND-SHORTEST-PATHS-MOD(∏, L, W)
    n = L.rows
    let L' = (l'[i, j]) be a new n × n matrix
    let ∏' = (π'[i, j]) be a new n × n matrix
    for i = 1 to n
        for j = 1 to n
            l'[i, j] = l[i, j]      // the best path of at most m - 1 edges is a candidate, since w[j, j] = 0
            π'[i, j] = π[i, j]
            for k = 1 to n
                if l[i, k] + w[k, j] < l'[i, j]
                    l'[i, j] = l[i, k] + w[k, j]
                    π'[i, j] = k    // the last edge of the improved path is (k, j)
    return (∏', L')
SLOW-ALL-PAIRS-SHORTEST-PATHS-MOD(W)
    n = W.rows
    L(1) = W
    ∏(1) = (π[i, j](1)), where π[i, j](1) = i if i ≠ j and w[i, j] < ∞, and NIL otherwise
    for m = 2 to n - 1
        (∏(m), L(m)) = EXTEND-SHORTEST-PATHS-MOD(∏(m - 1), L(m - 1), W)
    return (∏(n - 1), L(n - 1))
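Given $\Pi^{(n - 1)}$, the vertices on a shortest path are recovered by the usual predecessor walk, as in $\text{PRINT-ALL-PAIRS-SHORTEST-PATH}$. A small Python sketch (assuming `pi[i][j]` holds $\pi_{ij}^{(n - 1)}$, with `None` for $\text{NIL}$):

    # Print the vertices of a shortest path from i to j using the predecessor matrix.
    def print_path(pi, i, j):
        if i == j:
            print(i)
        elif pi[i][j] is None:
            print('no path from', i, 'to', j, 'exists')
        else:
            print_path(pi, i, pi[i][j])
            print(j)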

25.1-8

The $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$ procedure, as written, requires us to store $\lceil \lg(n - 1) \rceil$ matrices, each with $n^2$ elements, for a total space requirement of $\Theta(n^2\lg n)$. Modify the procedure to require only $\Theta(n^2)$ space by using only two $n \times n$ matrices.

We can overwrite matrices as we go. Let $A \star B$ denote the matrix "multiplication" defined by the $\text{EXTEND-SHORTEST-PATHS}$ procedure. We modify $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}(W)$ as follows: initially create a single $n \times n$ matrix $L$, delete line 5 of the algorithm (which allocates a new matrix in every iteration), and change line 6 to $L = W \star W$, followed by $W = L$. The procedure then keeps only the two matrices $W$ and $L$ at any time, for a total of $\Theta(n^2)$ space.
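A Python sketch of this two-matrix scheme (the function name is illustrative; the min-plus product is written inline): only `W` and `L` are alive at any time, so the space used is $\Theta(n^2)$.

    # FASTER-ALL-PAIRS-SHORTEST-PATHS using only two n x n matrices.
    def faster_apsp_two_matrices(W):
        n = len(W)
        W = [row[:] for row in W]       # L(1); copied so the caller's matrix is untouched
        m = 1
        while m < n - 1:
            # L = W * W under the min-plus product, i.e., EXTEND-SHORTEST-PATHS(W, W)
            L = [[min(W[i][k] + W[k][j] for k in range(n)) for j in range(n)]
                 for i in range(n)]
            W = L                       # overwrite: W now holds L(2m)
            m *= 2
        return W                        # L(m) with m >= n - 1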

25.1-9

Modify $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$ so that it can determine whether the graph contains a negative-weight cycle.

For the modification, keep computing for one step more than the original: compute all the way up to $L^{(2^{k + 1})}$, where $2^k > n - 1$. If there are no negative-weight cycles, the two matrices $L^{(2^k)}$ and $L^{(2^{k + 1})}$ must be equal, since having no negative-weight cycles means that between any two vertices there is a shortest path containing at most $n - 1$ edges, and allowing longer paths cannot improve it.

However, if there is a cycle of negative total weight, we know that its length is at most $n$. Since going from $L^{(2^k)}$ to $L^{(2^{k + 1})}$ allows paths to grow by $2^k \ge n$ additional edges, every vertex on the cycle can traverse the cycle once more, so its distance to itself decreases by at least the (negative) weight of the cycle, and the two matrices differ. Since the two matrices differ exactly when there is a negative-weight cycle, this algorithm works. The extra work is a single matrix "multiplication", taking $\Theta(n^3)$ time, which is little-$o$ of the $\Theta(n^3 \lg n)$ running time of the unmodified algorithm.
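A minimal Python sketch of this test (the function name is illustrative): square as in $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$ until the exponent exceeds $n - 1$, square once more, and compare.

    # Detect a negative-weight cycle by one extra min-plus squaring.
    def has_negative_cycle(W):
        n = len(W)
        def square(L):
            return [[min(L[i][k] + L[k][j] for k in range(n)) for j in range(n)]
                    for i in range(n)]
        L, m = W, 1
        while m <= n - 1:               # stop once m = 2^k > n - 1
            L = square(L)
            m *= 2
        # One more squaring changes L if and only if a negative-weight cycle exists.
        return square(L) != L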

25.1-10

Give an efficient algorithm to find the length (number of edges) of a minimum-length negative-weight cycle in a graph.

Run $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ on the graph. Look at the diagonal elements of $L^{(m)}$. Return the first value of $m$ for which one (or more) of the diagonal elements ($l_{ii}^{(m)}$) is negative. If $m$ reaches $n + 1$, then stop and declare that there are no negative-weight cycles.
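A Python sketch of this diagonal-checking loop (the function name is illustrative; `None` signals that there are no negative-weight cycles):

    # Length of a minimum-length negative-weight cycle via the SLOW iterations.
    def min_neg_cycle_length_slow(W):
        n = len(W)
        def extend(L, M):
            return [[min(L[i][k] + M[k][j] for k in range(n)) for j in range(n)]
                    for i in range(n)]
        L = W                           # L(1)
        for m in range(1, n + 1):       # cycles have at most n edges
            if any(L[i][i] < 0 for i in range(n)):
                return m                # first m with a negative diagonal entry
            if m < n:
                L = extend(L, W)        # L(m + 1)
        return None                     # no negative-weight cycle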

Let the number of edges in a minimum-length negative-weight cycle be $m^*$, where $m^* = \infty$ if the graph has no negative-weight cycles.

Correctness

Let's assume that for some value $m^* \le n$ and some value of $i$, we find that $l_{ii}^{(m^*)} < 0$. Then the graph has a cycle with $m^*$ edges that goes from vertex $i$ to itself, and this cycle has negative weight (stored in $l_{ii}^{(m^*)}$). This is the minimum-length negative-weight cycle because $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ computes all paths of $1$ edge, then all paths of $2$ edges, and so on, and all cycles shorter than $m^*$ edges were checked before and did not have negative weight. Now assume that for all $m \le n$, there is no negative $l_{ii}^{(m)}$ element. Then, there is no negative-weight cycle in the graph, because all cycles have length at most $n$.

Time

$O(n^4)$. More precisely, $\Theta(n^3 \cdot \min(n, m^*))$.

Faster solution

Run $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$ on the graph until the first time that the matrix $L^{(m)}$ has one or more negative values on the diagonal, or until we have computed $L^{(m)}$ for some $m > n$. If we find any negative entries on the diagonal, we know that the minimum-length negative-weight cycle has more than $m / 2$ edges and at most $m$ edges. We just need to binary search for the value of $m^*$ in the range $m / 2 < m^* \le m$. The key observation is that on our way to computing $L^{(m)}$, we computed $L^{(1)}, L^{(2)}, L^{(4)}, L^{(8)}, \ldots, L^{(m / 2)}$, and these matrices suffice to compute every matrix we'll need. Here's pseudocode:

FIND-MIN-LENGTH-NEG-WEIGHT-CYCLE(W)
    n = W.rows
    L(1) = W
    m = 1
    while m ≤ n and no diagonal entries of L(m) are negative
        L(2m) = EXTEND-SHORTEST-PATHS(L(m), L(m))
        m = 2m
    if m > n and no diagonal entries of L(m) are negative
        return "no negative-weight cycles"
    elseif m ≤ 2
        return m
    else
        low = m / 2
        high = m
        d = m / 4
        while d ≥ 1
            s = low + d
            L(s) = EXTEND-SHORTEST-PATHS(L(low), L(d))
            if L(s) has any negative entries on the diagonal
                high = s
            else low = s
            d = d / 2
        return high
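The same procedure, transliterated into Python as a sketch (helper names are illustrative; computed matrices are cached by their exponent so every call to the min-plus product reuses work from the first loop):

    # Repeated squaring plus binary search for the minimum-length negative-weight cycle.
    def find_min_length_neg_weight_cycle(W):
        n = len(W)
        def extend(A, B):
            return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
                    for i in range(n)]
        def has_neg_diag(L):
            return any(L[i][i] < 0 for i in range(n))

        Lm = {1: W}                     # Lm[m] holds L(m)
        m = 1
        while m <= n and not has_neg_diag(Lm[m]):
            Lm[2 * m] = extend(Lm[m], Lm[m])
            m *= 2
        if m > n and not has_neg_diag(Lm[m]):
            return None                 # no negative-weight cycles
        if m <= 2:
            return m
        low, high, d = m // 2, m, m // 4
        while d >= 1:
            s = low + d
            Ls = extend(Lm[low], Lm[d]) # L(s) = L(low) * L(d); both already computed
            if has_neg_diag(Ls):
                high = s
            else:
                low = s
                Lm[low] = Ls            # cache so later iterations can use L(low)
            d //= 2
        return high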

Correctness

If, after the first while loop, $m > n$ and no diagonal entries of $L^{(m)}$ are negative, then there is no negative-weight cycle. Otherwise, if $m \le 2$, then either $m = 1$ or $m = 2$, and $L^{(m)}$ is the first matrix with a negative entry on the diagonal. Thus, the correct value to return is $m$.

If $m > 2$, then we maintain an interval bracketed by the values $low$ and $high$, such that the correct value $m^*$ is in the range $low < m^* \le high$. We use the following loop invariant:

Loop invariant: At the start of each iteration of the "while $d \ge 1$" loop,

  1. $d = 2^p$ for some integer $p \ge -1$,
  2. $d = (high - low) / 2$,
  3. $low < m^* \le high$.

Initialization: Initially, $m$ is an integer power of $2$ and $m > 2$. Since $d = m / 4$, we have that $d$ is an integer power of $2$ and $d > 1 / 2$, so that $d = 2^p$ for some integer $p \ge 0$. We also have

$$(high - low) / 2 = (m - (m / 2)) / 2 = m / 4 = d.$$

Finally, $L^{(m)}$ has a negative entry on the diagonal and $L^{(m / 2)}$ does not. Since $low = m / 2$ and $high = m$, we have that $low < m^* \le high$.

Maintenance: We use $high$, $low$, and $d$ to denote variable values in a given iteration, and $high'$, $low'$, and $d'$ to denote the same variable values in the next iteration. Thus, we wish to show that $d = 2^p$ for some integer $p \ge -1$ implies $d' = 2^{p'}$ for some integer $p' \ge -1$, that $d = (high - low) / 2$ implies $d' = (high' - low') / 2$, and that $low < m^* \le high$ implies $low' < m^* \le high'$.

To see that $d' = 2^{p'}$, note that $d' = d / 2$, and so $d' = 2^{p - 1}$. The condition that $d \ge 1$ implies that $p \ge 0$, and so $p' = p - 1 \ge -1$.

Within each iteration, $s$ is set to $low + d$, and one of the following actions occurs:

  • If $L^{(s)}$ has any negative entries on the diagonal, then $high'$ is set to $s$ and $d'$ is set to $d / 2$. Upon entering the next iteration,

    $$(high' - low') / 2 = (s - low') / 2 = ((low + d) - low) / 2 = d / 2 = d'.$$

    Since $L^{(s)}$ has a negative diagonal entry, we know that $m^* \le s$. Because $high' = s$ and $low'= low$, we have that $low' < m^* \le high'$.

  • If $L^{(s)}$ has no negative entries on the diagonal, then $low'$ is set to $s$, and $d'$ is set to $d / 2$. Upon entering the next iteration,

    $$(high' - low') / 2 = (high' - s) / 2 = (high - (low + d)) / 2 = (high - low) / 2 - d / 2 = d - d / 2 = d / 2 = d'.$$

    Since $L^{(s)}$ has no negative diagonal entries, we know that $m^* > s$. Because $low' = s$ and $high' = high$, we have that $low' < m^* \le high'$.

Termination: At termination, $d < 1$. Since $d = 2^p$ for some integer $p \ge -1$, we must have $p = -1$, so that $d = 1 / 2$. By the second part of the loop invariant, if we multiply both sides by $2$, we get that $high - low = 2d = 1$. By the third part of the loop invariant, we know that $low < m^* \le high$. Since $high - low = 2d = 1$ and $m^* > low$, the only possible value for $m^*$ is $high$, which the procedure returns.

Time

If there is no negative-weight cycle, the first while loop iterates $\Theta(\lg n)$ times, and the total time is $\Theta(n^3\lg n)$.

Now suppose that there is a negative-weight cycle. We claim that each time we call $\text{EXTEND-SHORTEST-PATHS}(L^{(low)}, L^{(d)})$, we have already computed $L^{(low)}$ and $L^{(d)}$. Initially, since $low = m / 2$, we had already computed $L^{(low)}$ in the first while loop. In succeeding iterations of the second while loop, the only way that low changes is when it gets the value of $s$, and we have just computed $L^{(s)}$. As for $L^{(d)}$, observe that $d$ takes on the values $m / 4$, $m / 8$, $m / 16$, $\ldots$, $1$, and again, we computed all of these $L$ matrices in the first while loop. Thus, the claim is proven. Each of the two while loops iterates $\Theta(\lg m^*)$ times. Since we have already computed the parameters to each call of $\text{EXTEND-SHORTEST-PATHS}$, each iteration is dominated by the $\Theta(n^3)$-time call to $\text{EXTEND-SHORTEST-PATHS}$. Thus, the total time is $\Theta(n^3\lg m^*)$.

In general, therefore, the running time is $\Theta(n^3\lg\min(n, m^*))$.

Space

The slower algorithm needs to keep only three $n \times n$ matrices at any time, and so its space requirement is $\Theta(n^2)$. This faster algorithm needs to maintain $\Theta(\lg\min(n, m^*))$ matrices, and so the space requirement increases to $\Theta(n^2\lg\min(n, m^*))$.