
12.3 Insertion and deletion

12.3-1

Give a recursive version of the $\text{TREE-INSERT}$ procedure.

RECURSIVE-TREE-INSERT(T, z)
    if T.root == NIL
        z.p = NIL
        T.root = z
    else INSERT(NIL, T.root, z)
INSERT(p, x, z)
    // p is the parent of x; insert z into the subtree rooted at x
    if x == NIL
        z.p = p
        if z.key < p.key
            p.left = z
        else p.right = z
    else if z.key < x.key
        INSERT(x, x.left, z)
    else INSERT(x, x.right, z)
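
For readers who want to run it, here is a minimal Python transcription of the two procedures above (the Node and Tree classes and the snake_case names are illustrative, not part of the exercise):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.p = None       # parent
        self.left = None
        self.right = None

class Tree:
    def __init__(self):
        self.root = None

def recursive_tree_insert(T, z):
    # Mirrors RECURSIVE-TREE-INSERT: handle the empty tree, then recurse.
    if T.root is None:
        z.p = None
        T.root = z
    else:
        insert(None, T.root, z)

def insert(p, x, z):
    # p is the parent of x; when x is None, z's place is under p.
    if x is None:
        z.p = p
        if z.key < p.key:
            p.left = z
        else:
            p.right = z
    elif z.key < x.key:
        insert(x, x.left, z)
    else:
        insert(x, x.right, z)

T = Tree()
for k in [5, 2, 8, 1, 3]:
    recursive_tree_insert(T, Node(k))
print(T.root.key, T.root.left.key, T.root.right.key)   # 5 2 8
```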

12.3-2

Suppose that we construct a binary search tree by repeatedly inserting distinct values into the tree. Argue that the number of nodes examined in searching for a value in the tree is one plus the number of nodes examined when the value was first inserted into the tree.

When the value was first inserted, $\text{TREE-INSERT}$ walked down from the root and examined exactly the nodes on the path from the root to the value's eventual parent. Later insertions only attach new leaves and never move existing nodes, so a subsequent search follows that same path and then examines one more node: the node containing the value itself. For example, inserting $15$ below a root $10$ with right child $20$ examines $10$ and $20$; searching for $15$ afterwards examines $10$, $20$, and then $15$.
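
A quick empirical check of this claim, as a Python sketch (the dictionary nodes and the helper names insert_count/search_count are illustrative, not from the text):

```python
def insert_count(root, k):
    # Insert k into the BST; return (new root, number of existing nodes examined).
    if root is None:
        return {"key": k, "left": None, "right": None}, 0
    side = "left" if k < root["key"] else "right"
    root[side], n = insert_count(root[side], k)
    return root, n + 1

def search_count(root, k):
    # Number of nodes examined while searching for a key that is present.
    if root["key"] == k:
        return 1
    side = "left" if k < root["key"] else "right"
    return 1 + search_count(root[side], k)

root, examined = None, {}
for k in [8, 3, 10, 1, 6, 14, 4, 7]:
    root, examined[k] = insert_count(root, k)
for k in examined:
    assert search_count(root, k) == examined[k] + 1   # the extra node is k itself
```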

12.3-3

We can sort a given set of $n$ numbers by first building a binary search tree containing these numbers (using $\text{TREE-INSERT}$ repeatedly to insert the numbers one by one) and then printing the numbers by an inorder tree walk. What are the worst-case and best-case running times for this sorting algorithm?

Here's the algorithm:

TREE-SORT(A)
    let T be an empty binary search tree
    for i = 1 to A.length
        TREE-INSERT(T, A[i])
    INORDER-TREE-WALK(T.root)
  • Worst case: $\Theta(n^2)$—occurs when a linear chain of nodes results from the repeated $\text{TREE-INSERT}$ operations.
  • Best case: $\Theta(n\lg n)$—occurs when a binary tree of height $\Theta(\lg n)$ results from the repeated $\text{TREE-INSERT}$ operations.
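
A runnable Python sketch of $\text{TREE-SORT}$ for experimenting with both cases (dictionary nodes and helper names are illustrative; note that the recursive insert will hit Python's recursion limit on long already-sorted inputs, which is exactly the worst case):

```python
def bst_insert(x, k):
    # Recursive TREE-INSERT on dictionary nodes; returns the subtree's root.
    if x is None:
        return {"key": k, "left": None, "right": None}
    side = "left" if k < x["key"] else "right"
    x[side] = bst_insert(x[side], k)
    return x

def inorder(x, out):
    # INORDER-TREE-WALK, appending keys instead of printing them.
    if x is not None:
        inorder(x["left"], out)
        out.append(x["key"])
        inorder(x["right"], out)

def tree_sort(keys):
    root = None
    for k in keys:          # Theta(n^2) on sorted input, Theta(n lg n) on balanced input
        root = bst_insert(root, k)
    out = []
    inorder(root, out)
    return out

print(tree_sort([3, 1, 8, 2, 6, 7, 5]))   # [1, 2, 3, 5, 6, 7, 8]
```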

12.3-4

Is the operation of deletion "commutative" in the sense that deleting $x$ and then $y$ from a binary search tree leaves the same tree as deleting $y$ and then $x$? Argue why it is or give a counterexample.

No; the following counterexample shows that the two orders can leave different trees (a small Python check appears after the diagrams).

  • Delete $A$ first, then delete $B$:

      A        C        C
     / \      / \        \
    B   D    B   D        D
       /
      C
    
  • Delete $B$ first, then delete $A$:

      A        A        D
     / \        \      /
    B   D        D    C
       /        /
      C        C
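
A small Python check of this counterexample (a sketch under these assumptions: keys $1, 2, 3, 4$ stand in for $B, A, C, D$, nodes are a simple class with parent pointers, and delete follows the transplant-based $\text{TREE-DELETE}$ of the text):

```python
class N:
    def __init__(self, key):
        self.key, self.p, self.left, self.right = key, None, None, None

def transplant(T, u, v):
    # Replace the subtree rooted at u by the subtree rooted at v.
    if u.p is None:
        T["root"] = v
    elif u is u.p.left:
        u.p.left = v
    else:
        u.p.right = v
    if v is not None:
        v.p = u.p

def minimum(x):
    while x.left is not None:
        x = x.left
    return x

def delete(T, z):
    # TREE-DELETE, splicing in the successor when z has two children.
    if z.left is None:
        transplant(T, z, z.right)
    elif z.right is None:
        transplant(T, z, z.left)
    else:
        y = minimum(z.right)
        if y.p is not z:
            transplant(T, y, y.right)
            y.right = z.right
            y.right.p = y
        transplant(T, z, y)
        y.left = z.left
        y.left.p = y

def build():
    # The tree of the counterexample: A=2 at the root, B=1, D=4, C=3.
    a, b, c, d = N(2), N(1), N(3), N(4)
    a.left, a.right, b.p, d.p = b, d, a, a
    d.left, c.p = c, d
    return {"root": a}, a, b

def shape(x):
    # A nested-tuple snapshot of the tree's structure.
    return None if x is None else (x.key, shape(x.left), shape(x.right))

T1, a1, b1 = build()
delete(T1, a1); delete(T1, b1)   # delete A, then B
T2, a2, b2 = build()
delete(T2, b2); delete(T2, a2)   # delete B, then A
print(shape(T1["root"]))         # (3, None, (4, None, None)) -- C with right child D
print(shape(T2["root"]))         # (4, (3, None, None), None) -- D with left child C
```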
    

12.3-5

Suppose that instead of each node $x$ keeping the attribute $x.p$, pointing to $x$'s parent, it keeps $x.succ$, pointing to $x$'s successor. Give pseudocode for $\text{SEARCH}$, $\text{INSERT}$, and $\text{DELETE}$ on a binary search tree $T$ using this representation. These procedures should operate in time $O(h)$, where $h$ is the height of the tree $T$. ($\textit{Hint:}$ You may wish to implement a subroutine that returns the parent of a node.)

$\text{TREE-SEARCH}$ never touches parent pointers, so it is unchanged. $\text{TREE-INSERT}$ does not need a parent subroutine either: during its descent it can remember the last node at which it branched left (the new node's successor) and the last node at which it branched right (the new node's predecessor), set the new node's $succ$ to the former, and redirect the latter's $succ$ to the new node, all in $O(h)$ time. Only $\text{TREE-DELETE}$ needs the parent of a node; the subroutines below compute it in $O(h)$ time, with $\text{GET-PARENT}$ returning the parent of $x$ (or $\text{NIL}$ if $x$ is the root) and $\text{PARENT}$ doing the final walk down a chain of right pointers.

PARENT(T, x, z)
    // x is the largest key in the subtree rooted at z.left (or in the whole
    // tree, if z == NIL), so x's parent lies on a chain of right pointers
    if z == NIL
        m = T.root
    else m = z.left
    while m.right != x
        m = m.right
    return m
GET-PARENT(T, x)
    if x == T.root
        return NIL
    if x.right == NIL
        z = x.succ                          // the successor of x is an ancestor of x
    else z = TREE-MAXIMUM(x.right).succ     // the successor of the largest key below x
    if z != NIL and z.left == x
        return z
    return PARENT(T, x, z)

Therefore we can find a node's parent in $O(h)$ time, so $\text{DELETE}$ (which also has to redirect the $succ$ pointer of the deleted node's predecessor, itself found in $O(h)$ time) still runs in $O(h + h) = O(h)$ time.
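
The same parent computation as a runnable Python sketch (the Node class, attribute names, and the hand-wired example tree are illustrative assumptions):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.succ = None    # in-order successor; replaces the parent pointer

def tree_maximum(x):
    while x.right is not None:
        x = x.right
    return x

def get_parent(root, x):
    # Return x's parent, or None if x is the root, in O(h) time.
    if x is root:
        return None
    # The successor of the largest key in x's subtree is the lowest ancestor
    # whose left subtree contains x (None if x's subtree is rightmost).
    z = x.succ if x.right is None else tree_maximum(x.right).succ
    if z is not None and z.left is x:
        return z
    # Otherwise walk down right pointers from z.left (or from the root).
    m = root if z is None else z.left
    while m.right is not x:
        m = m.right
    return m

# Tiny check on the tree 1 <- 2 -> 3 with succ pointers wired by hand.
n1, n2, n3 = Node(1), Node(2), Node(3)
n2.left, n2.right = n1, n3
n1.succ, n2.succ, n3.succ = n2, n3, None
print(get_parent(n2, n1).key, get_parent(n2, n3).key)   # 2 2
```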

12.3-6

When node $z$ in $\text{TREE-DELETE}$ has two children, we could choose node $y$ as its predecessor rather than its successor. What other changes to $\text{TREE-DELETE}$ would be necessary if we did so? Some have argued that a fair strategy, giving equal priority to predecessor and successor, yields better empirical performance. How might $\text{TREE-DELETE}$ be changed to implement such a fair strategy?

Update line 5 so that $y$ is set to $\text{TREE-MAXIMUM}(z.left)$, and exchange left and right in the rest of the two-children case (lines 7–9 and 11–12), so that $y$ is spliced out by its left child, takes over $z$'s left subtree when it is not $z$'s child already, and finally adopts $z$'s right subtree.

To implement the fair strategy, we could flip a fair coin each time $\text{TREE-DELETE}$ reaches the two-children case and splice in the predecessor or the successor accordingly.
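
A Python sketch of the fair deletion under those changes (it reuses the N class, transplant, and minimum from the 12.3-4 sketch above; the function name tree_delete_fair is illustrative):

```python
import random

def maximum(x):
    while x.right is not None:
        x = x.right
    return x

def tree_delete_fair(T, z):
    # TREE-DELETE with the fair strategy: when z has two children, splice in
    # the successor or the predecessor with equal probability.
    if z.left is None:
        transplant(T, z, z.right)
    elif z.right is None:
        transplant(T, z, z.left)
    elif random.random() < 0.5:
        y = minimum(z.right)              # successor
        if y.p is not z:
            transplant(T, y, y.right)
            y.right = z.right
            y.right.p = y
        transplant(T, z, y)
        y.left = z.left
        y.left.p = y
    else:
        y = maximum(z.left)               # predecessor: mirror image of the branch above
        if y.p is not z:
            transplant(T, y, y.left)
            y.left = z.left
            y.left.p = y
        transplant(T, z, y)
        y.right = z.right
        y.right.p = y
```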