# 28-1 Tridiagonal systems of linear equations

Consider the tridiagonal matrix

$$ A = \begin{pmatrix} 1 & -1 & 0 & 0 & 0 \\ -1 & 2 & -1 & 0 & 0 \\ 0 & -1 & 2 & -1 & 0 \\ 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & -1 & 2 \end{pmatrix} . $$

a. Find an $\text{LU}$ decomposition of $A$.

b. Solve the equation $Ax = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \end{pmatrix}^{\text T}$ by using forward and back substitution.

c. Find the inverse of $A$.

d. Show how, for any $n \times n$ symmetric positive-definite, tridiagonal matrix $A$ and any $n$-vector $b$, to solve the equation $Ax = b$ in $O(n)$ time by performing an $\text{LU}$ decomposition. Argue that any method based on forming $A^{-1}$ is asymptotically more expensive in the worst case.

e. Show how, for any $n \times n$ nonsingular, tridiagonal matrix $A$ and any $n$-vector $b$, to solve the equation $Ax = b$ in $O(n)$ time by performing an $\text{LUP}$ decomposition.

**a.**

$$ \begin{aligned} L & = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & 1 \end{pmatrix} , \\ U & = \begin{pmatrix} 1 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} . \end{aligned} $$

No row exchanges are needed, so this is also an $\text{LUP}$ decomposition with $P = I$.
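As a quick sanity check (a plain-Python sketch, not part of the solution proper), multiplying these factors together should reproduce $A$:

```python
# Multiply the LU factors above and confirm the product is A.
L = [[1, 0, 0, 0, 0],
     [-1, 1, 0, 0, 0],
     [0, -1, 1, 0, 0],
     [0, 0, -1, 1, 0],
     [0, 0, 0, -1, 1]]
U = [[1, -1, 0, 0, 0],
     [0, 1, -1, 0, 0],
     [0, 0, 1, -1, 0],
     [0, 0, 0, 1, -1],
     [0, 0, 0, 0, 1]]
product = [[sum(L[i][k] * U[k][j] for k in range(5)) for j in range(5)]
           for i in range(5)]
# product == the tridiagonal matrix A from the problem statement
```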

**b.** Writing $A = LU$, we first solve $Ly = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \end{pmatrix}^{\text T}$ by forward substitution, obtaining

$$ y = Ux = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \end{pmatrix} . $$

Then, solving $Ux = y$ by back substitution, we have that

$$ x = \begin{pmatrix} 15 \\ 14 \\ 12 \\ 9 \\ 5 \end{pmatrix} . $$
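The two substitutions can be sketched in plain Python; since the only off-diagonal entries of $L$ are the $-1$s on its subdiagonal and those of $U$ are the $-1$s on its superdiagonal, each solve reduces to a running sum:

```python
n = 5
b = [1, 1, 1, 1, 1]

# Forward substitution, Ly = b: since L[i][i-1] = -1, y[i] = b[i] + y[i-1].
y = [0] * n
for i in range(n):
    y[i] = b[i] + (y[i - 1] if i > 0 else 0)

# Back substitution, Ux = y: since U[i][i+1] = -1, x[i] = y[i] + x[i+1].
x = [0] * n
for i in reversed(range(n)):
    x[i] = y[i] + (x[i + 1] if i < n - 1 else 0)

# y == [1, 2, 3, 4, 5] and x == [15, 14, 12, 9, 5]
```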

**c.** We solve $Ax_i = e_i$ for each $i$, where $e_i$ is the vector that is all zeroes except for a one in the $i$th position. The solutions $x_i$ then form the columns of the desired inverse.

$$ \begin{array}{|c|c|} \hline \text{equation} & \text{solution} \\ \hline Ax_1 = e_1 & x_1 = \begin{pmatrix} 5 \\ 4 \\ 3 \\ 2 \\ 1 \end{pmatrix} \\ \hline Ax_2 = e_2 & x_2 = \begin{pmatrix} 4 \\ 4 \\ 3 \\ 2 \\ 1 \end{pmatrix} \\ \hline Ax_3 = e_3 & x_3 = \begin{pmatrix} 3 \\ 3 \\ 3 \\ 2 \\ 1 \end{pmatrix} \\ \hline Ax_4 = e_4 & x_4 = \begin{pmatrix} 2 \\ 2 \\ 2 \\ 2 \\ 1 \end{pmatrix} \\ \hline Ax_5 = e_5 & x_5 = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix} \\ \hline \end{array} $$

Thus,

$$ A^{-1} = \begin{pmatrix} 5 & 4 & 3 & 2 & 1 \\ 4 & 4 & 3 & 2 & 1 \\ 3 & 3 & 3 & 2 & 1 \\ 2 & 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix} . $$
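The five solves can be scripted the same way; a plain-Python sketch that builds the inverse column by column from the LU factors of part (a) and checks the result against $A$:

```python
n = 5
A = [[1, -1, 0, 0, 0],
     [-1, 2, -1, 0, 0],
     [0, -1, 2, -1, 0],
     [0, 0, -1, 2, -1],
     [0, 0, 0, -1, 2]]

columns = []
for j in range(n):
    e = [1 if i == j else 0 for i in range(n)]
    # Forward substitution Ly = e_j (L has -1 on its subdiagonal).
    y = [0] * n
    for i in range(n):
        y[i] = e[i] + (y[i - 1] if i > 0 else 0)
    # Back substitution Ux = y (U has -1 on its superdiagonal).
    col = [0] * n
    for i in reversed(range(n)):
        col[i] = y[i] + (col[i + 1] if i < n - 1 else 0)
    columns.append(col)

# The j-th solution is the j-th column of the inverse.
Ainv = [[columns[j][i] for j in range(n)] for i in range(n)]

# Sanity check: A * Ainv should be the identity matrix.
identity = [[sum(A[i][k] * Ainv[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
assert identity == [[1 if i == j else 0 for j in range(n)] for i in range(n)]
```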

**d.** When performing the LU decomposition, the pivot search on line 7 of $\text{LUP-DECOMPOSITION}$ takes the maximum over at most two rows, so it drops to $O(1)$ time. Each row contains only a constant number of nonzero entries, so the loop on line 14 can also be reduced to $O(1)$. Lastly, there are only a constant number of nonzero entries of the form $a_{ik}$ and $a_{kj}$, and since the square of a constant is still a constant, the nested **for** loops on lines 16-19 also take only $O(1)$ time per iteration of the outer loop. Since the **for** loops on lines 3 and 5 each run $O(n)$ times and their bodies now take $O(1)$ time (provided we are careful never to touch the zero entries of the matrix), the total runtime comes down to $O(n)$.

Since a tridiagonal matrix has only a constant number (at most three) of nonzero entries in any row, we can also perform both the forward and the back substitution in $O(n)$ time.

Since performing the LU decomposition of a positive-definite tridiagonal matrix takes $O(n)$ time, and the resulting decomposition lets us solve the equation in another $O(n)$ time, the total time for this method is $O(n)$. However, merely writing down the inverse of a tridiagonal matrix takes $\Omega(n^2)$ time in the worst case, since the inverse is in general dense (the inverse in part (c) has no zero entries at all). Hence any method based on forming $A^{-1}$ takes $\Omega(n^2)$ time, which is asymptotically slower than the previous method.
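A sketch of the $O(n)$ solve described above, in plain Python (this elimination scheme for tridiagonal systems is often called the Thomas algorithm; the function and array names here are our own):

```python
def solve_tridiagonal(sub, diag, sup, b):
    """Solve Ax = b in O(n) for tridiagonal A via LU decomposition.

    sub and sup hold the n-1 subdiagonal and superdiagonal entries,
    diag the n diagonal entries.  Assumes no pivoting is needed
    (true, e.g., when A is symmetric positive-definite).
    """
    n = len(diag)
    d = [float(v) for v in diag]   # becomes the diagonal of U
    y = [float(v) for v in b]      # becomes the solution of Ly = b
    for i in range(1, n):
        m = sub[i - 1] / d[i - 1]  # multiplier L[i][i-1]: O(1) work per row
        d[i] -= m * sup[i - 1]
        y[i] -= m * y[i - 1]
    # Back substitution Ux = y, again O(1) work per row.
    x = [0.0] * n
    x[n - 1] = y[n - 1] / d[n - 1]
    for i in reversed(range(n - 1)):
        x[i] = (y[i] - sup[i] * x[i + 1]) / d[i]
    return x
```

On the matrix of parts (a)-(b), `solve_tridiagonal([-1]*4, [1, 2, 2, 2, 2], [-1]*4, [1]*5)` returns `[15.0, 14.0, 12.0, 9.0, 5.0]`, matching part (b).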

**e.** The runtime of the LUP decomposition again drops to $O(n)$ because, as before, each row and column contains only a constant number of nonzero entries; partial pivoting preserves this sparsity, introducing at most one extra superdiagonal of fill-in in $U$. The resulting $L$ and $U$ therefore also have only a constant number of nonzero entries in each row and column, so forward and back substitution spend only a constant amount of time per entry of $x$, and the whole solve takes $O(n)$ time.
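A sketch of this pivoting variant in plain Python, using band storage (the `u2` array holds the single extra superdiagonal of fill-in that a row exchange can create; all names are our own):

```python
def solve_tridiagonal_lup(lower, diag, upper, b):
    """Solve Ax = b in O(n) for nonsingular tridiagonal A using
    Gaussian elimination with partial pivoting (an LUP decomposition
    in band storage).  lower and upper hold the n-1 subdiagonal and
    superdiagonal entries, diag the n diagonal entries.
    """
    n = len(diag)
    lo = [0.0] + [float(v) for v in lower]  # lo[i] = A[i][i-1]
    d = [float(v) for v in diag]
    u = [float(v) for v in upper] + [0.0]   # u[i] = A[i][i+1]
    u2 = [0.0] * n                          # fill-in: A[i][i+2] after a swap
    y = [float(v) for v in b]
    for k in range(n - 1):
        # Partial pivoting compares only two candidate pivots: O(1) work.
        if abs(lo[k + 1]) > abs(d[k]):
            d[k], lo[k + 1] = lo[k + 1], d[k]
            u[k], d[k + 1] = d[k + 1], u[k]
            u2[k], u[k + 1] = u[k + 1], u2[k]
            y[k], y[k + 1] = y[k + 1], y[k]
        m = lo[k + 1] / d[k]                # multiplier; |m| <= 1 after pivoting
        d[k + 1] -= m * u[k]
        u[k + 1] -= m * u2[k]
        y[k + 1] -= m * y[k]                # apply L^{-1} to the right-hand side
    # Back substitution on U, which has at most two superdiagonals.
    x = [0.0] * n
    for i in reversed(range(n)):
        s = y[i]
        if i + 1 < n:
            s -= u[i] * x[i + 1]
        if i + 2 < n:
            s -= u2[i] * x[i + 2]
        x[i] = s / d[i]
    return x
```

Each elimination step compares only two candidate pivots and updates only a constant number of entries, so the whole solve runs in $O(n)$ time even when row exchanges occur.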