Matthew Brecknell

Difference lists

This post is an extended version of a lightning talk I gave at BFPG. It introduces the difference list, a simple trick for improving the performance of certain kinds of list-building functions in Haskell, and goes on to explore the connections to monoids and folds.

The first half is aimed at Haskell novices, while the latter parts might be interesting to intermediates.

A first example

Let’s reverse some lists. I’m sure you’ve seen this before:

reverse :: [a] -> [a]
reverse [] = []
reverse (x:xs) = reverse xs ++ [x]

It’s a simple and clear definition, but it performs badly. List concatenation, (++), traverses its left argument, and since that’s a recursive call, this version takes quadratic time overall.

We can get to linear time by introducing an accumulator:

reverse :: [a] -> [a]
reverse xs = go xs []
  where
    go :: [a] -> [a] -> [a]
    go [] acc = acc
    go (x:xs) acc = go xs (x:acc)

This is a worker-wrapper, where the wrapper initialises the accumulator to the empty list, []. The worker, go, is tail recursive, and accumulates the result in reverse order as it forward-traverses the input.

Although efficient, this is less clear. Let’s tidy up the worker by eliminating the accumulator parameter, acc:

reverse :: [a] -> [a]
reverse xs = go xs []
  where
    go :: [a] -> [a] -> [a]
    go [] = \acc -> acc
    go (x:xs) = \acc -> go xs (x:acc)

The first case (line 5) is just the identity function, id, while we can write the second case as a function composition:

reverse :: [a] -> [a]
reverse xs = go xs []
  where
    go :: [a] -> [a] -> [a]
    go [] = id
    go (x:xs) = go xs . (x:)

If we now compare the worker, go, with our original version of reverse, we see a similar structure, with some differences:

  • The empty list, [], becomes the identity function, id.
  • List concatenation, (++), becomes function composition, (.).
  • The singleton list, [x], becomes a function, (x:), which prepends x to its argument.
  • The list type, [a], becomes the function type, [a] -> [a], in the result.

For the last point, recall that a -> b -> c parses as a -> (b -> c).

Tree flattening

For a second example, let’s take a binary tree and flatten it to a list of elements in left-to-right order. Again, this version is clear, but since there is a recursive call to flatten on the left of list append, (++), it takes quadratic time:

data Tree a = Leaf | Branch (Tree a) a (Tree a)

flatten :: Tree a -> [a]
flatten Leaf = []
flatten (Branch l x r) = flatten l ++ [x] ++ flatten r

As before, the linear-time version uses an accumulator to build the result in reverse:

flatten :: Tree a -> [a]
flatten t = go t []
  where
    go :: Tree a -> [a] -> [a]
    go Leaf acc = acc
    go (Branch l x r) acc = go l (x : go r acc)

If we rearrange, and compare the worker, go, with the original, inefficient version, we can see the same patterns we saw in the reverse function:

flatten :: Tree a -> [a]
flatten t = go t []
  where
    go :: Tree a -> [a] -> [a]
    go Leaf = id
    go (Branch l x r) = go l . (x:) . go r

Difference lists

Having identified the pattern, we can define the type of difference lists, together with its basic operations:

type DList a = [a] -> [a]

-- O(1)
empty :: DList a
empty = id

-- O(1)
singleton :: a -> DList a
singleton = (:)

-- O(n)
fromList :: [a] -> DList a
fromList = (++)

-- O(1)
append :: DList a -> DList a -> DList a
append = (.)

-- O(1)
toList :: DList a -> [a]
toList d = d []

This means a difference list is a function which prepends some number of items to its argument, without inspecting that argument. For example, this is a difference list:

myDList :: DList Int
myDList r = 1:2:3:r

But the following is not a difference list, because the reverse function inspects its argument, and does not incorporate it unchanged as a suffix of its result:

notDList :: DList a
notDList = reverse

Comparing the difference list operations with the corresponding ones on ordinary lists, we see that difference list append takes constant time, whereas for ordinary lists, (++) takes time linear in the size of its left argument. This means difference lists are good for building lists using left-associated concatenation.
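To see the left-associated case concretely, here is a small sketch; it repeats the DList operations from above so it stands alone, and the name leftAssoc is just for illustration:

```haskell
type DList a = [a] -> [a]

fromList :: [a] -> DList a
fromList = (++)

append :: DList a -> DList a -> DList a
append = (.)

toList :: DList a -> [a]
toList d = d []

-- ((xs ++ ys) ++ zs) retraverses xs for the second (++); the DList
-- version just composes three closures, and each element is paid
-- for exactly once when toList applies the composition to [].
leftAssoc :: [Int]
leftAssoc = toList ((fromList [1,2] `append` fromList [3,4]) `append` fromList [5,6])
```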

But there’s a catch!

The time complexities are amortised, and they are only valid when the difference list is used ephemerally. If you attempt to use a difference list persistently, toList may degrade to linear time.

The reason for this is that there are two costs associated with append. The first is paid when append builds a closure for the function composition. The second is paid every time the function composition is applied to an argument. That is, whenever toList is evaluated.

This means that toList actually takes time that is linear in the number of function compositions it has to apply. However, if we are only interested in the overall time taken by a sequence of operations, we can assign the cost of evaluating the function composition to append, even though the evaluation occurs in toList. And so we can say that both append and toList take amortised constant time.

Of course, that analysis assumes that toList is only evaluated once. This is what it means to restrict ourselves to ephemeral usage.

A bad example

To show that difference lists are not magical, let’s see how we might use them inappropriately. Since difference lists are good for appending, perhaps they could be used to implement a FIFO queue:

type Queue a = DList a

put :: Queue a -> a -> Queue a
put q x = append q (singleton x)

-- No!
take :: Queue a -> Maybe (a, Queue a)
take q = case toList q of
  [] -> Nothing
  x:xs -> Just (x, fromList xs)

The put operation seems fine, provided we don’t attempt to use the Queue persistently. For take, we find that we need to use toList before we can inspect the contents of the queue. Worst of all, we must use the linear-time fromList to construct the new Queue.

Since switching back and forth between difference lists and ordinary lists is inefficient, difference lists are primarily useful in build-then-consume algorithms. We can use a difference list to efficiently build a list, before converting to an ordinary list that is then consumed.
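As a sketch of the build-then-consume pattern, here is flatten written directly against the DList operations (repeated here so the sketch compiles on its own): it builds the whole difference list first, then converts exactly once.

```haskell
type DList a = [a] -> [a]

empty :: DList a
empty = id

singleton :: a -> DList a
singleton = (:)

append :: DList a -> DList a -> DList a
append = (.)

toList :: DList a -> [a]
toList d = d []

data Tree a = Leaf | Branch (Tree a) a (Tree a)

-- Build phase: O(1) appends of difference lists.
-- Consume phase: a single toList call at the end.
flatten :: Tree a -> [a]
flatten = toList . go
  where
    go Leaf = empty
    go (Branch l x r) = go l `append` singleton x `append` go r
```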

Some fun with folds

While it’s good to know when and how to use difference lists, it’s even nicer to understand the connections to folds and monoids. Let’s reexamine our difference-list-based reverse:

reverse :: [a] -> [a]
reverse xs = go xs []
  where
    go :: [a] -> [a] -> [a]
    go [] = id
    go (x:xs) = go xs . (x:)

The only places we actually construct any lists are the empty list, [], at the end of line 2, and the singleton difference list, (x:), at the end of line 6. Let’s abstract over those, calling them z and f x respectively:

reverse :: (a -> [a] -> [a]) -> [a] -> [a] -> [a]
reverse f z xs = go xs z
  where
    go :: [a] -> [a] -> [a]
    go [] = id
    go (x:xs) = go xs . f x

We can then generalise the type without touching the implementation, and arrive at a familiar function:

foldl :: forall a b. (a -> b -> b) -> b -> [a] -> b
foldl f z xs = go xs z
  where
    go :: [a] -> b -> b
    go [] = id
    go (x:xs) = go xs . f x

The type of argument f is flipped with respect to the Prelude version, but the meaning is the same. So we can say that foldl is just reverse with the constructors abstracted.
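To make the flip concrete, here is a sketch comparing this version (under the hypothetical local name myFoldl, to avoid clashing with the Prelude) against the Prelude's foldl:

```haskell
-- The flipped-argument foldl from the text, under a local name.
myFoldl :: (a -> b -> b) -> b -> [a] -> b
myFoldl f z xs = go xs z
  where
    go [] = id
    go (x:xs) = go xs . f x

-- For any f, z and finite xs: myFoldl f z xs == foldl (flip f) z xs,
-- so in particular myFoldl (:) [] reverses a list.
```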

Similarly, we can say that foldr is just a list traversal with the constructors abstracted. In fact, we can get the right-fold on Tree using the same generalisation on flatten. For lists, we can get foldr by simply flipping the function composition in line 6:

foldr :: forall a b. (a -> b -> b) -> b -> [a] -> b
foldr f z xs = go xs z
  where
    go :: [a] -> b -> b
    go [] = id
    go (x:xs) = f x . go xs

Monoids

To make precise the comparison between the two versions of each of our examples, we need to recognise that we are using the corresponding operations of two different monoids, the list monoid and the Endo monoid:

instance Monoid [a] where
  mempty = []
  (<>) = (++)

newtype Endo b = Endo { appEndo :: b -> b }

instance Monoid (Endo b) where
  mempty = Endo id
  Endo f <> Endo g = Endo (f . g)

Notice that in the Endo monoid, we’ve generalised from the type [a] -> [a] that we used for difference lists, to b -> b, just as we did when we generalised from reverse to foldl.
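As a quick check of how Endo composes, using the version from Data.Monoid in base (the name steps is just for illustration):

```haskell
import Data.Monoid (Endo(..))

-- (<>) on Endo is function composition, so the left operand runs
-- last: appEndo (Endo f <> Endo g) x == f (g x).
steps :: Endo Int
steps = Endo (+1) <> Endo (*2)
```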

We also need a way to inject a single element into a monoid. We use the unit method of the Reducer class, from the reducers package. We can then rewrite reverse and flatten like this:

reverse' :: (Monoid m, Reducer a m) => [a] -> m
reverse' [] = mempty
reverse' (x:xs) = reverse' xs <> unit x

flatten' :: (Monoid m, Reducer a m) => Tree a -> m
flatten' Leaf = mempty
flatten' (Branch l x r) = flatten' l <> unit x <> flatten' r

If we treat the result as a list, we get the inefficient versions:

reverse :: [a] -> [a]
reverse = reverse'

flatten :: Tree a -> [a]
flatten = flatten'

But if we add an appropriate wrapper, we get the efficient versions:

extract :: Endo [a] -> [a]
extract xs = appEndo xs []

reverse :: [a] -> [a]
reverse = extract . reverse'

flatten :: Tree a -> [a]
flatten = extract . flatten'

More fun with folds

Now that we know about Endo, let’s reexamine foldr:

foldr :: forall a b. (a -> b -> b) -> b -> [a] -> b
foldr f z xs = go xs z
  where
    go :: [a] -> b -> b
    go [] = id
    go (x:xs) = f x . go xs

First let’s rearrange the arguments a little, swapping z and xs:

foldr :: forall a b. (a -> b -> b) -> [a] -> b -> b
foldr f xs z = go xs z
  where
    go :: [a] -> b -> b
    go [] = id
    go (x:xs) = f x . go xs

We can see the type b -> b in lots of places, and we’re using the function id and composition as well. That’s just the Endo monoid, so let’s rewrite like this:

foldr :: forall a b. (a -> Endo b) -> [a] -> Endo b
foldr f = go
  where
    go :: [a] -> Endo b
    go [] = mempty
    go (x:xs) = f x <> go xs

Without touching the implementation, we can generalise the type to any monoid, and rename:

foldMap :: forall a m. Monoid m => (a -> m) -> [a] -> m
foldMap f = go
  where
    go :: [a] -> m
    go [] = mempty
    go (x:xs) = f x <> go xs

The wrapper is no longer doing anything, so we have just this:

foldMap :: Monoid m => (a -> m) -> [a] -> m
foldMap f [] = mempty
foldMap f (x:xs) = f x <> foldMap f xs

Notice how foldMap follows the structure of the list data type. We can recover foldr by specialising to the Endo monoid, and extracting the result by applying to the argument z:

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z xs = appEndo (foldMap (Endo . f) xs) z

To get foldl, we would need a version of Endo in which the monoid append (i.e. the function composition) is reversed. Dual is a monoid transformer which just flips the monoid append operation, <>, with respect to the base monoid:

foldl :: (a -> b -> b) -> b -> [a] -> b
foldl f z xs = appEndo (getDual $ foldMap (Dual . Endo . f) xs) z

What about Tree flattening? Well, Tree has a foldMap, too:

foldMap :: Monoid m => (a -> m) -> Tree a -> m
foldMap f Leaf = mempty
foldMap f (Branch l x r) = foldMap f l <> f x <> foldMap f r

Again, notice how foldMap follows the structure of the Tree data type. We could reuse the same definitions of foldr and foldl above, just by changing the types.
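For instance, the Endo-based definition of foldr carries over to Tree unchanged; here is a sketch, using foldMapT as a hypothetical name to avoid clashing with the Prelude:

```haskell
import Data.Monoid (Endo(..))

data Tree a = Leaf | Branch (Tree a) a (Tree a)

foldMapT :: Monoid m => (a -> m) -> Tree a -> m
foldMapT f Leaf = mempty
foldMapT f (Branch l x r) = foldMapT f l <> f x <> foldMapT f r

-- Same shape as the list version: only the foldMap changed.
foldrT :: (a -> b -> b) -> b -> Tree a -> b
foldrT f z t = appEndo (foldMapT (Endo . f) t) z
```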

In fact, we’ve just reinvented the Foldable type class from the base package:

class Foldable t where
  foldMap :: Monoid m => (a -> m) -> t a -> m

Just by implementing foldMap for your Foldable data types, you get foldl and foldr for free. This means you can use whichever fold is most appropriate for your situation. For example:

reverse :: Foldable t => t a -> [a]
reverse = foldl (flip (:)) []

flatten :: Foldable t => t a -> [a]
flatten = foldr (:) []

first :: Foldable t => t a -> Maybe a
first = foldr (const . Just) Nothing

last :: Foldable t => t a -> Maybe a
last = foldl (const Just) Nothing

What’s more, if your foldMap follows the structure of your data type, foldl and foldr will both be maximally lazy and efficient. For example, first is O(1) on lists, including infinite lists, and O(log n) on Data.Set, while last is O(n) on lists, but still O(log n) on Data.Set.
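A quick check of the laziness claim for lists: because foldr supplies f x (go xs) lazily, first can return without ever forcing the tail, even on an infinite list. The name headOfInfinite is just for illustration.

```haskell
first :: Foldable t => t a -> Maybe a
first = foldr (const . Just) Nothing

-- Terminates despite the infinite input: (const . Just) ignores the
-- accumulated tail, so foldr never recurses into it.
headOfInfinite :: Maybe Int
headOfInfinite = first [1..]
```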

Analogy to free monads

When defining a term language and one or more interpreters, the free monad makes it easy to write computations in the term language without reference to any specific interpretation.

Janis Voigtländer showed how to improve the asymptotic complexity of computations over free monads. The method for free monad computations is analogous to the difference-list transformation for list constructions, but operating in a different category. Whereas list types are free monoid objects in the category Hask, free monads are the corresponding objects in the endofunctor category.

Following Voigtländer, we can write our extract function with a more restrictive rank-2 type:

extract :: forall a. (forall m. (Monoid m, Reducer a m) => m) -> [a]
extract xs = appEndo xs []

This ensures that the computed list is in fact a difference list, by restricting the construction to mempty, (<>) and unit.

C++11 universal reference pop-quiz

Here’s a little exercise for anyone who, like me, recently came across Scott Meyers’ work on universal references in C++11: Is the following program well formed? Does it have well-defined behaviour? If not, why not? If so, what is the value returned by main()? Why? References, please!

template <typename R>
struct wat;

template <typename T>
struct wat <T&> {
  static int value(T & t) { return t * 1; }
};

template <typename T>
struct wat <T&&> {
  static int value(T && t) { return t * 3; }
};

template <typename T>
struct ref {
  ref(T && t) : t(static_cast<T&&>(t)) {}
  int value() { return wat<T&&>::value(static_cast<T&&>(t)); }
  T && t;
};

template <typename T>
ref<T> uni(T && t) {
  return ref<T>(static_cast<T&&>(t));
}

int main() {
  int i(3);
  return uni(i).value() + uni(13).value();
}

I have written the program so that it needs no #include directives, and therefore you can be sure there is not a single typedef, decltype or auto specifier anywhere in or out of sight. That means there’s only one way that so-called universal references can arise.

However, you might find one or two other little surprises. Oh, and Clang 3.3 and GCC 4.8.1 don’t even agree on this program, so there’s not much point in cheating! I know where I’m putting my money, though…

B-trees with GADTs

I interactively build a simple B-tree data structure in Haskell, implementing insertion and deletion, using a GADT to ensure that I maintain the structural invariant.

You can download the video as MP4 or WebM, as well as the slides I used for my YLJ13 talk, in PDF or Keynote. There’s some code on GitHub. Sorry, no subtitles yet, but you can read along with this script. You can also find the video on BitCast.

Hole-driven Haskell

A demonstration of a technique for using types to guide the construction of Haskell programs, based on natural deduction. Includes some tricks for getting help from GHC.

Thanks to Tony Morris, Greg Davis and Clinton Freeman for giving me the idea. Thanks to everyone else for not giving me too much shit about my noisy hole.

You can download the video as MP4 with embedded subtitles, WebM with separate subtitles, or find it on YouTube or BitCast.