Randomized binary search trees
Imagine you’re trying to look up “John Smith” in a phone book that’s organized alphabetically. We could search sequentially, name by name, page by page from the beginning, but the more natural thing to do would be to open up the book about midway to start the search. Doing this probably puts us somewhere in the “M” section. “Smith” comes after “M”, so we can ignore everything that comes before (or is to the left of) the current page. If we cut the remaining pages in half, we might end up somewhere in the “T” section. “Smith” comes before “T”, so we can similarly ignore everything that comes after (or is to the right of) the current page. If we proceed this way, each time eliminating half of the names in the remaining subsection of the phone book, it’s certain that we’ll eventually find Mr. John Smith.
This retrieval strategy is called binary search and is one of the most celebrated algorithms in all of computer science. A binary search tree or BST is a type of data structure that maps keys to values and stores the keys in such a way that lookup can be done using binary search. Each “node” or key-value pair has up to two child nodes (“left” and “right”). For any node in the tree, all nodes in the subtree pointed to by “left” have keys less than the given node’s key. Conversely, all nodes in the subtree pointed to by “right” have keys greater than the given node’s key.
The example tree above shows precisely how the keys must be ordered for binary search to succeed.
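To make the halving idea concrete, here’s a minimal sketch of binary search over a plain sorted array. The helper name and the integer array are just for illustration; they aren’t part of the tree code that follows.

// Minimal illustration of binary search over a sorted array (not part of the BST code).
static int BinarySearch(int[] sorted, int key)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;             // open the "phone book" near the middle
        if (key < sorted[mid]) hi = mid - 1;      // target is earlier; ignore mid and everything after
        else if (key > sorted[mid]) lo = mid + 1; // target is later; ignore mid and everything before
        else return mid;                          // found it
    }
    return -1;                                    // not present
}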
Implementing a binary search tree
Let’s sketch out a class for a generic binary search tree.
using System;             // IComparable<TKey>
using System.Diagnostics; // DebuggerDisplay

public class BST<TKey, TValue> where TKey : IComparable<TKey>
{
    /// <summary>Represents a Node in a BST</summary>
    [DebuggerDisplay("Key={Key}, Value={Value}")]
    class Node
    {
        public readonly TKey Key;
        public readonly TValue Value;
        public Node Left, Right;

        public Node(TKey key, TValue value,
                    Node left = null, Node right = null)
        {
            Key = key; Value = value; Left = left; Right = right;
        }
    }

    /// <summary>The BST root Node</summary>
    private Node treeRoot;
}
Notice that I decided to put the node type inside the tree. In my final code, I decided against this, made the Node class public, and marked the setters for the left/right properties internal. I wanted the final tree structure to be traversable by any user-defined algorithm and did this by designing the tree with the visitor design pattern. We’ll keep things simple for now.
Retrieval
To look up a key, all we have to do is traverse the nodes in the tree from the root node, choosing the left or right branch depending on whether our target is less or greater than the current node’s key. This algorithm is naturally recursive and the code for it is nice and compact.
private Node Get(Node root, TKey key)
{
    if (root == null)
        return null;

    int cmp = key.CompareTo(root.Key);
    if (cmp < 0) return Get(root.Left, key);  // less
    if (cmp > 0) return Get(root.Right, key); // greater
    return root; // we got it!
}
This makes sense; if our starting point is null, we can’t go any further. Otherwise, we check in which branch our key must reside, and that will be our starting point for the next search.

This method is private and has to be (recall that the Node class is private). In my implementation I expose retrieval by a C# indexer, but any method that calls Get from treeRoot will do.
public TValue this[TKey key]
{
    get { return Get(treeRoot, key).Value; }
    set { treeRoot = Set(treeRoot, new Node(key, value)); }
}
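To see the pieces working together, here’s a small, hypothetical usage sketch. It assumes the BST<TKey, TValue> class above with the indexer getter calling Get and the setter calling the Set method defined in the next section; the keys and values are purely illustrative.

// Hypothetical usage: build a small tree via the indexer, then look a key up.
var phoneBook = new BST<string, string>();
phoneBook["Smith, John"] = "555-0100";
phoneBook["Adams, Ada"]  = "555-0101";
phoneBook["Turing, Al"]  = "555-0102";

Console.WriteLine(phoneBook["Smith, John"]); // prints 555-0100

Note that, as written, the getter will throw a NullReferenceException for a key that isn’t in the tree; a production version might prefer TryGet-style semantics.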
Insertion
Given that we need to maintain the binary search invariant (everything to the left must be less and everything to the right must be greater), it’s no surprise that the algorithm for insertion closely mirrors that for lookup.
private Node Set(Node root, Node child)
{
    if (root == null) return child;

    int cmp = child.Key.CompareTo(root.Key);
    if (cmp == 0)
    {
        // replace the existing node at this key, keeping its subtrees
        child.Left = root.Left;
        child.Right = root.Right;
        return child;
    }
    else if (cmp < 0) root.Left = Set(root.Left, child);
    else if (cmp > 0) root.Right = Set(root.Right, child);
    return root;
}
There’s a subtle difference though; here we’re actually mutating the tree. This is obvious (we’re adding to the data structure), but how to do it correctly is tricky.
If we’ve exhausted the tree and root is null, all we can do is return child and hope that the calling code manages the restructuring appropriately. This shouldn’t seem too radical (consider, for example, the first insertion, where treeRoot is null). What about the case when we’ve found the key we’re trying to insert? In my implementation, I disallow duplicate keys; the new node simply replaces the existing one. Recall that I expose my implementation of the data structure like a dictionary, but this detail is really up to the developer.
The difficult part is determining what to do if the key is less/greater than the root key. Consider inserting 12 into the example tree above. Starting at 8, the tree root, we traverse down to 14. We keep going left until there’s no remaining tree to traverse. This is great! We’ve already solved the problem of arriving at null and can just return child. After that, all that’s left is to keep returning the modified subtree back up to the root. If we do this all the way up to treeRoot, node 8 will then point to a subtree that now includes 12.
Deletion
Now that we’re comfortable with insertion, let’s move on to deletion. Consider deleting 3 from the example tree above. Once again, we can use binary search to traverse the tree to the node with the target key. What then? We can’t just return null, since that would render anything below 3 inaccessible.
Instead, we need to restructure the tree in a way that maintains the tree’s integrity. To do this, we need to ask ourselves what node in the tree could take the place of the deleted node 3. There are actually two answers. We could make the left node of node 8 point to node 1, provided we make the right node of node 1 point to what was node 3’s right subtree. Or we could replace node 3 with node 4 and give node 4 the left and right nodes previously pointed to by node 3.
It’s no coincidence that these two choices represent the immediate predecessor and successor of node 3 when the nodes of the tree are traversed in order from least to greatest. When I was taught deletion from a BST, I always preferred restructuring the tree around the successor node. Taking this approach, the algorithm is as follows.
- Use binary search to traverse down to the node that needs deleting.
- If the right subtree is null, there’s no restructuring to do and we can just return the left subtree (this corresponds to deleting 14, for example). We can take the same approach if the left subtree is null.
- If it’s the case that both children are not null, find the minimum node in the right subtree (the next node in order). Make this node take the place of the deleted node by reassigning the deleted node’s left and right subtrees to it.
- Delete the minimum node in the right subtree as it now appears twice.
- The node to delete has now been deleted from the subtree; return the modified subtree back up to the tree root.
The code is again short and concise, but easily the most involved we’ve seen so far in our discussion of binary search trees.
private Node Delete(Node root, TKey key)
{
    if (root == null)
        return null;

    int cmp = key.CompareTo(root.Key);
    if (cmp > 0)
        root.Right = Delete(root.Right, key);
    else if (cmp < 0)
        root.Left = Delete(root.Left, key);
    else
    {
        if (root.Right == null) return root.Left;
        if (root.Left == null) return root.Right;

        // replace root with the minimum node of its right subtree
        Node temp = root;
        root = Min(temp.Right);
        root.Right = DeleteMin(temp.Right);
        root.Left = temp.Left;
    }
    return root;
}
private Node Min(Node root)
{
    // follow Left pointers until there's nothing smaller
    return root.Left == null ? root : Min(root.Left);
}

private Node DeleteMin(Node root)
{
    // the minimum has no left child; splice in its right subtree
    if (root.Left == null)
        return root.Right;
    root.Left = DeleteMin(root.Left);
    return root;
}
A better binary search tree
The major problem with binary search trees is that they easily devolve into a linked list of nodes, rather than a well-balanced tree. Consider, for example, inserting the keys in sorted order into the tree.
A balanced binary search tree is efficient for lookup because at every level we toss out half the remaining nodes. Sequential search, in contrast, only allows us to eliminate one possibility at a time. There are self-balancing trees out there that guarantee logarithmic-time operations. Unfortunately, these algorithms are complicated and easy to get wrong (take a look at deleting a node from a red-black tree and you’ll see what I mean).
We need some way to randomize the data internally in a way that maintains the tree’s integrity at all times. One way to do this might be to give each node a random key, and maintain a map from lookup keys to random keys. All core operations would then be on the random keys, and the random assignment of keys would roughly balance the tree. Though sensible, this strategy is redundant (we’re required to maintain two data structures).
It turns out that there’s a more elegant way to randomize the data. In computer science, a binary heap is a tree structure where the keys of the left and right child nodes are greater than (or equal to) that of their parent node. Most often, heaps are used to maintain immediate access to the smallest (or largest) element in the data structure, which is always at the root of the tree.
A treap is a tree structure that’s simultaneously a binary search tree and a heap. We structure the tree just as we would a regular binary search tree: everything to the left is less, everything to the right is greater. However, we augment our Node data structure with a randomly generated “priority key”, and require that inserting and deleting from the tree maintain the heap order invariant (a child’s priority is always greater than its parent’s, so the smallest priority sits at the root).
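The treap code below reads a Priority field off each node, so here is one way the augmented Node might look. This is a sketch under my own assumptions (the field name Priority and the shared Random instance are illustrative), not the exact class from the final code; it is nested inside the tree class just like the BST Node above.

/// <summary>A BST node augmented with a random heap priority (sketch).</summary>
class Node
{
    private static readonly Random Rng = new Random();

    public readonly TKey Key;
    public readonly TValue Value;
    public readonly double Priority;   // randomly assigned at construction
    public Node Left, Right;

    public Node(TKey key, TValue value, Node left = null, Node right = null)
    {
        Key = key; Value = value; Left = left; Right = right;
        Priority = Rng.NextDouble();   // smaller values bubble toward the root
    }
}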
The key insight here is that we can do this. Remember that there’s more than one way to order the data such that it’s a valid binary search tree. Try different insertion orders on the example tree, and you’ll see what I mean. It’s the same data; only the structure of the data has changed.
For binary trees, this is actually a lot easier than it sounds. We simply perform one of two “tree rotations”, each of which preserves the binary search invariant but has the effect of promoting a child to the root of its subtree, thereby restoring the heap order property.
Consider the left image below and imagine that the priority key of the green child is less than that of its parent. This is a violation of the heap order and we need to exchange green with its parent. The right image shows a left rotation about the right child. Notice that the tree is still a valid binary search tree; alpha, beta, and gamma are still in the same order left to right as they were before, even though their parents have changed.
Implementing a treap
Let’s code up these tree rotations. Doing some examples on paper helps to verify that the below code is valid, correct, and won’t raise any exceptions provided root is not null.
// promotes child to root level by rotating the right child left
private Node LRotate(Node root)
{
    Node temp = root.Right;
    root.Right = temp.Left;
    temp.Left = root;
    return temp;
}

// promotes child to root level by rotating the left child right
private Node RRotate(Node root)
{
    Node temp = root.Left;
    root.Left = temp.Right;
    temp.Right = root;
    return temp;
}
We’ll need to use these methods when adding a node to the treap. As before, we’ll use binary search to find the correct location for the new node. Once we’ve located the insertion point, we’ll need to consider whether or not the heap order has been violated. If we see a violation, we need to exchange the root with one of its children so that the root node has the smallest priority. We’ve already determined that this won’t mess up the binary search ordering, and we can do this at each level of the recursion when returning the new subtree until the heap order is correct.
private Node Set(Node root, Node child)
{
    if (root == null) return child;

    int cmp = child.Key.CompareTo(root.Key);
    if (cmp == 0)
    {
        // replace the existing node at this key, keeping its subtrees
        // (the replacement's fresh priority may locally perturb heap order)
        child.Left = root.Left;
        child.Right = root.Right;
        return child;
    }
    else if (cmp > 0)
    {
        root.Right = Set(root.Right, child);
        if (root.Right.Priority < root.Priority)
            root = LRotate(root);
    }
    else if (cmp < 0)
    {
        root.Left = Set(root.Left, child);
        if (root.Left.Priority < root.Priority)
            root = RRotate(root);
    }
    return root;
}
The algorithm for deleting a node in a treap is conceptually easier than that for a regular binary search tree. The idea will be to use tree rotations to demote the node to delete down to a leaf node. At that point, we can just return null (we discussed this case before). As with insertion, we need to ensure that the heap order is maintained. If a node has both children, we pick the one with the smallest priority for the exchange.
private Node Delete(Node root, TKey key)
{
    if (root == null) return null;

    int cmp = key.CompareTo(root.Key);
    if (cmp < 0) root.Left = Delete(root.Left, key);
    else if (cmp > 0) root.Right = Delete(root.Right, key);
    else
    {
        // rotate the smaller-priority child up, demoting the target node
        if (root.Left != null && root.Right != null)
        {
            if (root.Left.Priority < root.Right.Priority)
                root = RRotate(root);
            else
                root = LRotate(root);
        }
        else if (root.Left != null) root = RRotate(root);
        else if (root.Right != null) root = LRotate(root);
        else return null; // the target is now a leaf; remove it

        // keep deleting: the target has moved down one level
        root = Delete(root, key);
    }
    return root;
}
Notice the root = Delete(root, key) call we make after finding the target node. Intuitively, this makes sense; we have to keep deleting the node after demoting it, since its location in the tree has changed.
Evaluating performance
The randomly generated priority keys should help distribute the data uniformly throughout the tree. Let’s run some benchmarks, and see how well our data structure performs on large datasets.
The numbers below are from inserting 50,000 keys in sorted order and then looking up key 49999 one thousand times.
70.407967 seconds (BST)
0.0428024 seconds (BSTreap)
Excellent!
The main improvement comes from the fact that we’ve radically sped up insertion. Rather than having to traverse to the end of the data structure each time, we find the insertion point much more quickly in a balanced binary search tree (after roughly the logarithm of the number of elements in the tree, rather than after nearly all of them).
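For reference, here’s a rough sketch of how such a benchmark might be set up. The class name BSTreap and its generic signature are assumptions based on the description in the next section, and the timing code is my own, not the harness that produced the numbers above.

using System;
using System.Diagnostics;

// Rough benchmark sketch: 50000 sequential inserts, then 1000 lookups of the last key.
var tree = new BSTreap<int, int>();   // swap in BST<int, int> to compare the two
var sw = Stopwatch.StartNew();

for (int i = 0; i < 50000; i++)
    tree[i] = i;                      // keys arrive in sorted order (worst case for a plain BST)

int last = 0;
for (int i = 0; i < 1000; i++)
    last = tree[49999];               // repeated lookups of a deep key

sw.Stop();
Console.WriteLine($"{sw.Elapsed.TotalSeconds} seconds (found {last})");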
Final code and concluding thoughts
In the end, I decided to make the Node class public. This helped me use inheritance, which reduced the code needed to implement the structure as an ICollection (BSTreap inherits from BST and just overrides the insert and delete methods). The final code can be found here.
Finally, I took a look at implementing a simple BST in Python. Rather than classes, I used a namedtuple to model the Node type we defined above. This makes the tree immutable, which tended to simplify the algorithms for insertion and deletion (in particular, we no longer have to consider how to reconnect subtrees). The full code is quite concise.
Thanks for reading!