# 25.2 The Floyd-Warshall algorithm

## 25.2-1

Run the Floyd-Warshall algorithm on the weighted, directed graph of Figure 25.2. Show the matrix $D^{(k)}$ that results for each iteration of the outer loop.

$k = 1$:

$$\begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\ 1 & 0 & \infty & 2 & 0 & \infty \\ \infty & 2 & 0 & \infty & \infty & -8 \\ -4 & \infty & \infty & 0 & -5 & \infty \\ \infty & 7 & \infty & \infty & 0 & \infty \\ \infty & 5 & 10 & \infty & \infty & 0 \end{pmatrix}$$

$k = 2$:

$$\begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\ 1 & 0 & \infty & 2 & 0 & \infty \\ 3 & 2 & 0 & 4 & 2 & -8 \\ -4 & \infty & \infty & 0 & -5 & \infty \\ 8 & 7 & \infty & 9 & 0 & \infty \\ 6 & 5 & 10 & 7 & 5 & 0 \end{pmatrix}$$

$k = 3$:

$$\begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\ 1 & 0 & \infty & 2 & 0 & \infty \\ 3 & 2 & 0 & 4 & 2 & -8 \\ -4 & \infty & \infty & 0 & -5 & \infty \\ 8 & 7 & \infty & 9 & 0 & \infty \\ 6 & 5 & 10 & 7 & 5 & 0 \end{pmatrix}$$

$k = 4$:

$$\begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ 0 & 2 & 0 & 4 & -1 & -8 \\ -4 & \infty & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix}$$

$k = 5$:

$$\begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ 0 & 2 & 0 & 4 & -1 & -8 \\ -4 & 2 & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix}$$

$k = 6$:

$$\begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\ -2 & 0 & \infty & 2 & -3 & \infty \\ -5 & -3 & 0 & -1 & -6 & -8 \\ -4 & 2 & \infty & 0 & -5 & \infty \\ 5 & 7 & \infty & 9 & 0 & \infty \\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix}$$
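
As a sanity check, the following Python sketch reproduces the six matrices above; the weight matrix $W = D^{(0)}$ is transcribed from Figure 25.2 (the edge list is my reading of the figure, consistent with the $D^{(1)}$ table):

```python
import math

INF = math.inf

# Edges of Figure 25.2, 1-indexed, as (tail, head): weight.
edges = {(1, 5): -1, (2, 1): 1, (2, 4): 2, (3, 2): 2, (3, 6): -8,
         (4, 1): -4, (4, 5): 3, (5, 2): 7, (6, 2): 5, (6, 3): 10}
n = 6
D = [[0 if i == j else edges.get((i + 1, j + 1), INF) for j in range(n)]
     for i in range(n)]  # D^(0) = W

for k in range(n):
    # D^(k): additionally allow vertex k + 1 (1-indexed) as an intermediate.
    D = [[min(D[i][j], D[i][k] + D[k][j]) for j in range(n)]
         for i in range(n)]
    print(f"k = {k + 1}:")
    for row in D:
        print(row)
```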

## 25.2-2

Show how to compute the transitive closure using the technique of Section 25.1.

We set $w_{ij} = 1$ if $i = j$ or $(i, j)$ is an edge, and $w_{ij} = 0$ otherwise, so that paths of length $0$ count. Then we replace line 7 of $\text{EXTEND-SHORTEST-PATHS}(L, W)$ by $l''_{ij} = l''_{ij} \lor (l_{ik} \land w_{kj})$, substituting $\lor$ for $\min$ and $\land$ for $+$, and run $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ with these boolean operations.
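
A minimal Python sketch of this substitution (identifiers are mine; `adj[i][j]` is `True` iff $(i, j)$ is an edge):

```python
def extend_closure(L, W):
    # One boolean "product": l''_ij = OR over k of (l_ik AND w_kj).
    n = len(L)
    return [[any(L[i][k] and W[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(adj):
    n = len(adj)
    # w_ij = 1 if i == j or (i, j) is an edge, so length-0 paths count.
    W = [[i == j or adj[i][j] for j in range(n)] for i in range(n)]
    L = W                        # L^(1)
    for _ in range(n - 2):       # as in SLOW-ALL-PAIRS-SHORTEST-PATHS
        L = extend_closure(L, W)
    return L                     # L[i][j] is True iff j is reachable from i
```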

## 25.2-3

Modify the $\text{FLOYD-WARSHALL}$ procedure to compute the $\prod^{(k)}$ matrices according to equations $\text{(25.6)}$ and $\text{(25.7)}$. Prove rigorously that for all $i \in V$, the predecessor subgraph $G_{\pi, i}$ is a shortest-paths tree with root $i$. ($\textit{Hint:}$ To show that $G_{\pi, i}$ is acyclic, first show that $\pi_{ij}^{(k)} = l$ implies $d_{ij}^{(k)} \ge d_{il}^{(k)} + w_{lj}$, according to the definition of $\pi_{ij}^{(k)}$. Then, adapt the proof of Lemma 24.16.)

```
MOD-FLOYD-WARSHALL(W)
    n = W.rows
    D^(0) = W
    let π^(0) be a new n × n matrix
    for i = 1 to n
        for j = 1 to n
            if i != j and d[i, j]^(0) < ∞
                π[i, j]^(0) = i
    for k = 1 to n
        let D^(k) be a new n × n matrix
        let π^(k) be a new n × n matrix
        for i = 1 to n
            for j = 1 to n
                if d[i, j]^(k - 1) ≤ d[i, k]^(k - 1) + d[k, j]^(k - 1)
                    d[i, j]^(k) = d[i, j]^(k - 1)
                    π[i, j]^(k) = π[i, j]^(k - 1)
                else
                    d[i, j]^(k) = d[i, k]^(k - 1) + d[k, j]^(k - 1)
                    π[i, j]^(k) = π[k, j]^(k - 1)
```

In order to have $\pi^{(k)}_{ij} = l$, we need that $d^{(k)}_{ij} \ge d^{(k)}_{il} + w_{lj}$. To see this fact, note that having $\pi^{(k)}_{ij} = l$ means that a shortest path from $i$ to $j$ last goes through $l$. A path that last goes through $l$ corresponds to taking a cheapest path from $i$ to $l$ and then following the single edge from $l$ to $j$. This means that $d_{il} \le d_{ij} - w_{lj}$, which we can rearrange to get the desired inequality. We can continue following this inequality around, and if we ever get some cycle $i_1, i_2, \ldots, i_c$ in the predecessor subgraph, then chaining the inequalities yields $d_{ii_1} \le d_{ii_1} - \left( w_{i_ci_1} + \sum_{q = 1}^{c - 1} w_{i_qi_{q + 1}} \right)$. Subtracting the common term from both sides, we get $0 \ge w_{i_ci_1} + \sum_{q = 1}^{c - 1} w_{i_qi_{q + 1}}$, and since the graph contains no negative-weight cycles, the cycle's weight must be exactly zero. So the predecessor subgraph could only contain a cycle if the original graph contains a zero-weight cycle. However, we never actually route around such a cycle: among paths tied for the minimum weight, the construction favors those with fewer edges, because of the way that we handle the equality case in equation $\text{(25.7)}$ (ties keep the old predecessor rather than rerouting through $k$).
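
For concreteness, here is a Python transcription of the procedure above (0-indexed, with `None` for $\text{NIL}$; it updates a single matrix in place, which exercise 25.2-4 below justifies):

```python
import math

def mod_floyd_warshall(W):
    # Returns (D, pi): pi[i][j] is the predecessor of j on a shortest
    # i-to-j path, following equations (25.6) and (25.7).
    n = len(W)
    D = [row[:] for row in W]
    pi = [[i if i != j and W[i][j] < math.inf else None for j in range(n)]
          for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Update only on a strict improvement: ties keep the old
                # path, so the predecessor subgraph stays acyclic.
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    pi[i][j] = pi[k][j]
    return D, pi
```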

## 25.2-4

As it appears above, the Floyd-Warshall algorithm requires $\Theta(n^3)$ space, since we compute $d_{ij}^{(k)}$ for $i, j, k = 1, 2, \ldots, n$. Show that the following procedure, which simply drops all the superscripts, is correct, and thus only $\Theta(n^2)$ space is required.

```
FLOYD-WARSHALL'(W)
    n = W.rows
    D = W
    for k = 1 to n
        for i = 1 to n
            for j = 1 to n
                d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return D
```

With superscripts, the update would read $d_{ij}^{(k)} = \min\big(d_{ij}^{(k - 1)}, d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)}\big)$. Dropping them is safe because, during iteration $k$, the entries in row $k$ and column $k$ do not change: since $d_{kk}^{(k - 1)} = 0$, we have $d_{kj}^{(k)} = \min\big(d_{kj}^{(k - 1)}, d_{kk}^{(k - 1)} + d_{kj}^{(k - 1)}\big) = d_{kj}^{(k - 1)}$, and symmetrically $d_{ik}^{(k)} = d_{ik}^{(k - 1)}$. Hence the update reads the same values for $d[i, k]$ and $d[k, j]$ whether or not those entries have already been processed in the current iteration, and $\text{FLOYD-WARSHALL}'$ computes exactly the matrices $D^{(k)}$ of the original procedure.
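
A quick empirical check of this claim on small random nonnegative-weight instances (a sketch; the helper names are mine):

```python
import random

def fw_full(W):
    # Textbook version: builds a fresh matrix D^(k) at every step.
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        D = [[min(D[i][j], D[i][k] + D[k][j]) for j in range(n)]
             for i in range(n)]
    return D

def fw_inplace(W):
    # FLOYD-WARSHALL': one matrix, superscripts dropped.
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

for _ in range(100):
    n = 5
    W = [[0 if i == j else random.randint(1, 9) for j in range(n)]
         for i in range(n)]
    assert fw_full(W) == fw_inplace(W)
```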

## 25.2-5

Suppose that we modify the way in which equation $\text{(25.7)}$ handles equality:

$$\pi_{ij}^{(k)} = \begin{cases} \pi_{ij}^{(k - 1)} & \text{ if } d_{ij}^{(k - 1)} < d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)}, \\ \pi_{kj}^{(k - 1)} & \text{ if } d_{ij}^{(k - 1)} \ge d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)}. \end{cases}$$

Is this alternative definition of the predecessor matrix $\prod$ correct?

If we change the way that we handle the equality case, we still generate correct values for the $\pi$ matrix. This is because the modified rule only reroutes $\pi$ along paths that may use more edges but are still tied for the minimum weight. Setting $\pi_{ij} = \pi_{kj}$ means that the chosen shortest path from $i$ to $j$ now passes through $k$; this costs the same as before, since $d_{ij} = d_{ik} + d_{kj}$ in the equality case.

## 25.2-6

How can we use the output of the Floyd-Warshall algorithm to detect the presence of a negative-weight cycle?

Inspect the main diagonal of the output matrix: vertex $i$ lies on a negative-weight cycle if and only if $d_{ii}^{(n)} < 0$. Hence the graph contains a negative-weight cycle exactly when some diagonal entry of $D^{(n)}$ is negative.
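
In code, the test is a single pass over the diagonal of the matrix returned by Floyd-Warshall:

```python
def has_negative_cycle(D):
    # D is the final distance matrix D^(n) from FLOYD-WARSHALL.
    return any(D[i][i] < 0 for i in range(len(D)))
```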

## 25.2-7

Another way to reconstruct shortest paths in the Floyd-Warshall algorithm uses values $\phi_{ij}^{(k)}$ for $i, j, k = 1, 2, \ldots, n$, where $\phi_{ij}^{(k)}$ is the highest-numbered intermediate vertex of a shortest path from $i$ to $j$ in which all intermediate vertices are in the set $\{1, 2, \ldots, k \}$. Give a recursive formulation for $\phi_{ij}^{(k)}$, modify the $\text{FLOYD-WARSHALL}$ procedure to compute the $\phi_{ij}^{(k)}$ values, and rewrite the $\text{PRINT-ALL-PAIRS-SHORTEST-PATH}$ procedure to take the matrix $\Phi = \big(\phi_{ij}^{(n)}\big)$ as an input. How is the matrix $\Phi$ like the $s$ table in the matrix-chain multiplication problem of Section 15.2?

We can compute the values of $\phi_{ij}^{(k)}$ recursively, starting from $\phi_{ij}^{(0)} = \text{NIL}$:

$$\phi_{ij}^{(k)} = \begin{cases} \phi_{ij}^{(k - 1)} & \text{if } d_{ij}^{(k - 1)} \le d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)}, \\ k & \text{otherwise}. \end{cases}$$

This works correctly because it records exactly whether we decided to use vertex $k$ as we allowed ourselves the use of each vertex one at a time. To modify $\text{FLOYD-WARSHALL}$ accordingly, we add, inside the innermost for loop, an update that applies this recursive rule. This is a constant amount of extra work per iteration, so the asymptotic running time does not increase. The matrix $\Phi$ is like the $s$ table in matrix-chain multiplication because both record the point at which an optimal solution splits into two optimal subproblems, and both are computed by an analogous recurrence.

Given the $n^3$ values $\phi_{ij}^{(k)}$, we can reconstruct a shortest path from $i$ to $j$ as follows. The highest-numbered intermediate vertex on the path is $a_1 = \phi_{ij}^{(n)}$. The highest-numbered intermediate vertex on the subpath from $i$ to $a_1$ is then $\phi_{ia_1}^{(a_1 - 1)}$, and on the subpath from $a_1$ to $j$ it is $\phi_{a_1j}^{(a_1 - 1)}$. We continue recursing until the reported intermediate vertex is $\text{NIL}$, meaning the subpath in question is a single edge; at that point the entire path has been subdivided into its edges.
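
A Python sketch of both pieces (identifiers mine; `None` stands for $\text{NIL}$, and the in-place update assumes no negative-weight cycles):

```python
def floyd_warshall_phi(W):
    # Returns (D, phi), where phi[i][j] is the highest-numbered
    # intermediate vertex on the chosen shortest i-to-j path.
    n = len(W)
    D = [row[:] for row in W]
    phi = [[None] * n for _ in range(n)]    # phi^(0): no intermediates
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:  # ties keep phi^(k - 1)
                    D[i][j] = D[i][k] + D[k][j]
                    phi[i][j] = k           # k is now the highest intermediate
    return D, phi

def print_intermediates(phi, i, j):
    # Prints, in order, the intermediate vertices of a shortest i-to-j
    # path; the caller prints the endpoints i and j itself.
    k = phi[i][j]
    if k is not None:
        print_intermediates(phi, i, k)
        print(k)
        print_intermediates(phi, k, j)
```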

## 25.2-8

Give an $O(VE)$-time algorithm for computing the transitive closure of a directed graph $G = (V, E)$.

We can determine the vertices reachable from a particular vertex in $O(V + E)$ time using any basic graph searching algorithm, so searching from every vertex computes the transitive closure in $O(VE + V^2)$ time. If $|V| = O(|E|)$, we are done, since $VE$ is then the dominating term in this bound. If not, we preprocess the graph in $O(V + E)$ time and mark all degree-$0$ vertices. Such a vertex reaches only itself, so apart from the diagonal entry its row in the transitive closure is all $0$s, and the algorithm remains correct if we skip these vertices when searching. After preprocessing, every remaining vertex is incident on at least one edge, so $|E| \geq |V|/2$ and hence $|V| = O(|E|)$. Therefore the searches can be done in $O(VE)$ time.
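
A direct Python sketch (identifiers mine). Each search reuses its row of the closure matrix as the visited set, so a search from $s$ costs only the edges it actually explores; the $\Theta(V^2)$ matrix allocation is just the size of the output:

```python
from collections import deque

def transitive_closure(n, adj):
    # adj[u] lists the out-neighbors of u; vertices are 0 .. n - 1.
    closure = [[False] * n for _ in range(n)]
    for s in range(n):
        closure[s][s] = True            # every vertex reaches itself
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not closure[s][v]:
                    closure[s][v] = True
                    q.append(v)
    return closure
```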

## 25.2-9

Suppose that we can compute the transitive closure of a directed acyclic graph in $f(|V|, |E|)$ time, where $f$ is a monotonically increasing function of $|V|$ and $|E|$. Show that the time to compute the transitive closure $G^* = (V, E^*)$ of a general directed graph $G = (V, E)$ is then $f(|V|, |E|) + O(V + E^*)$.

First, compute the strongly connected components of the directed graph in $O(V + E)$ time, and form its component graph. The component graph is acyclic and has at most as many vertices and at most as many edges as the original graph, so, since $f$ is monotonically increasing, we can run our transitive closure algorithm on it in at most $f(|V|, |E|)$ time. Then, for every pair of components $S_1$, $S_2$ such that $S_2$ is reachable from $S_1$ in the component graph (including $S_1 = S_2$, since all vertices within a strongly connected component reach one another), we add an edge from each vertex of $S_1$ to each vertex of $S_2$. This produces exactly the edge set $E^*$ and takes $O(V + E^*)$ time. So the total time required is $f(|V|, |E|) + O(V + E^*)$.
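
A Python sketch of this reduction, where `scc` (any $O(V + E)$ strongly-connected-components routine, e.g. Tarjan's) and `dag_transitive_closure` (the assumed $f(|V|, |E|)$-time procedure, here taken to return a reflexive boolean reachability matrix) are both black boxes:

```python
def transitive_closure(n, adj, scc, dag_transitive_closure):
    # scc(n, adj) -> (c, comp), with comp[v] the component index of v.
    c, comp = scc(n, adj)
    # Component graph: acyclic, with at most as many vertices and
    # at most as many edges as G itself.
    cedges = [set() for _ in range(c)]
    for u in range(n):
        for v in adj[u]:
            if comp[u] != comp[v]:
                cedges[comp[u]].add(comp[v])
    cstar = dag_transitive_closure(c, [sorted(s) for s in cedges])
    members = [[] for _ in range(c)]
    for v in range(n):
        members[comp[v]].append(v)
    # (u, v) is in E* iff comp[v] is reachable from comp[u]; the
    # reflexive entries of cstar cover same-component pairs.
    E_star = []
    for u in range(n):
        for a in range(c):
            if cstar[comp[u]][a]:
                E_star.extend((u, v) for v in members[a])
    return E_star
```

The final double loop is written for clarity; representing `cstar` as adjacency lists of reachable components brings the expansion step down to the $O(V + E^*)$ claimed above.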