One Hundred Years of Laziness

Back in 2003 Paul Graham wrote The Hundred-Year Language, in which he hypothesised about what a programming language might look like in a hundred years' time. To summarise, over time programs have become more and more wasteful, with modern ‘dynamic languages’ like Python and Ruby running an order of magnitude or two slower than C, which was once considered “high-level.” In the next hundred years, Paul predicted, we would see this lead to a language designed with the assumption that resources are essentially free. For example, most data structures exist for the sake of speed: although most languages treat strings and lists differently for performance, in Haskell String is just a synonym for [Char], a linked list of Unicode characters. Although a little wasteful, this is absolutely great when programming. Paul suggests that we could replace all data structures with lists (even, theoretically, numbers), though I think trees would make more sense, since orders of complexity will matter even if overheads don’t.
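As a small illustration of that point: the Haskell Prelude really does define String as a type synonym for a list of characters, so any list function works on a string unchanged (the shout function below is just an example name of my own):

```haskell
import Data.Char (toUpper)

-- The Prelude defines: type String = [Char]
-- so ordinary list functions apply directly to strings.
shout :: String -> String
shout = map toUpper

main :: IO ()
main = putStrLn (shout (reverse "laziness"))  -- prints "SSENIZAL"
```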

But here I’m going to talk about laziness. I believe the Hundred-Year Language must be the laziest language we will have ever seen. Going back to Haskell, its lazy evaluation means that a value is not computed until it is needed. This means we can define the Fibonacci sequence as a list of infinite length whose elements are calculated only when required. If we take 10 fibs, only the first ten Fibonacci numbers are calculated, even though fibs is infinite, because we never demand the rest. In an ordinary ‘strict’ language, merely constructing fibs would hang until we ran out of memory.
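As a sketch, the classic Haskell definition of that infinite list looks like this:

```haskell
-- fibs is defined in terms of itself; laziness means each element
-- is only computed when something actually demands it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```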

Suppose rational numbers are represented as a quotient of two arbitrary-precision integers, so that unlike floating-point arithmetic, where 1/3 = 0.33333333333333331, we actually store the result as exactly one third. Arbitrary-precision rationals are great, until we realise we actually rather like irrational numbers too. How do we represent π? One solution could be to approximate it with a rational number, like 355/113. But come now, this is the Hundred-Year Language! We need arbitrary-precision irrationals. We might define π with the Chudnovsky algorithm, lazily evaluating it depending on how much of π we need during the program. If we (controversially) define τ as 2π and print its first twenty digits, we will compute only enough of the Chudnovsky series for twenty digits of π, which are then doubled and printed. π’s decimal expansion is infinite, but that does not matter.
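As a sketch of both ideas: Haskell’s Data.Ratio gives exact arbitrary-precision rationals out of the box, and a lazy, infinite stream of π’s decimal digits can be built with a streaming spigot algorithm (Gibbons’ unbounded spigot, used here for brevity in place of Chudnovsky). Only as many digits as are demanded ever get computed:

```haskell
import Data.Ratio ((%))

-- Exact rational arithmetic: one third is stored as 1 % 3, not 0.333...
oneThird :: Rational
oneThird = 1 % 3

-- A lazy, infinite stream of the decimal digits of pi, via Gibbons'
-- unbounded spigot algorithm. Digits are produced only on demand.
piDigits :: [Integer]
piDigits = go (1, 0, 1, 1, 3, 3)
  where
    go (q, r, t, k, n, l)
      | 4*q + r - t < n*t =
          n : go (10*q, 10*(r - n*t), t, k,
                  div (10*(3*q + r)) t - 10*n, l)
      | otherwise =
          go (q*k, (2*q + r)*l, t*l, k + 1,
              div (q*(7*k + 2) + r*l) (t*l), l + 2)

main :: IO ()
main = do
  print oneThird            -- 1 % 3
  print (take 20 piDigits)  -- [3,1,4,1,5,9,2,6,5,3,5,8,9,7,9,3,2,3,8,4]
```

Printing twenty digits of τ would then only ever demand a comparably short prefix of this stream, plus a little carry handling for the doubling.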

I predict that programming will eventually become nothing but mathematics bound by time and space. We may even make up for wastefulness through laziness. “Want not, waste not.”