Dynamic languages
October 5, 2012  

In my last post, I criticized Bob Harper’s critique of dynamic languages without saying what I myself think about dynamic languages. The disease of tar-pit thinking deserved a post of its own, but so do dynamic languages.

So, is Bob wrong about dynamic languages?

Of course he is, as I shall now explain.

First, here’s what I think Bob is saying:

We should not study or use dynamically typed languages on their own. We should study statically typed languages instead, because static types are so useful, and dynamic languages can be handled as a special case.

You can decide for yourself whether I’m using a strawman argument.

I am going to make several responses to this, any one of which is sufficient to refute Bob.

  1. Whenever someone comes up with a new type system, one of the first things they do is prove that it satisfies type soundness. This means that they define an operational semantics of programs that can “go wrong”—encounter a runtime error—and they prove that well-typed programs never go wrong. To repeat: they define a programming language and semantics where some programs can suffer a runtime type error. This is a dynamic language.

    A thousand papers on type theory cannot be wrong: we need dynamic languages to demonstrate the computational consequences of static types. (The first sketch after this list shows what such a language looks like.)

  2. Simpler is better in mathematics. A proof with fewer assumptions is a better proof. A mathematical construct with simpler machinery is always preferred, provided it is still sufficient to study the problem of interest.

    It is impossible to imagine a world in which we start with statically typed languages and no mathematician ever comes along and strips the types away, because the pure, untyped lambda calculus is sufficient to study many topics in the mathematics of computation.

    Dynamic languages are mathematically inevitable.

  3. Every year we invent new type systems. Why? Because there are useful programs that are perfectly sensible (never “go wrong”), but which cannot be handled by existing type systems. Where do these programs come from? How do we know they exist? People are writing them in dynamic languages.

    We need dynamic languages to understand and remedy the shortcomings of existing static type systems. (The second sketch after this list gives an example.)
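To make point 1 concrete, here is a minimal sketch of the kind of language a soundness proof starts from: an untyped expression language whose evaluator can hit a runtime type error. (OCaml is just my choice of vehicle here; the names are made up for illustration.)

```ocaml
(* A toy expression language whose evaluation can "go wrong":
   exactly the kind of semantics a type-soundness proof is stated over. *)
type exp =
  | Int  of int
  | Bool of bool
  | Add  of exp * exp        (* integer addition *)
  | If   of exp * exp * exp  (* conditional *)

type value = VInt of int | VBool of bool

exception Goes_wrong         (* a runtime type error: the program is stuck *)

let rec eval (e : exp) : value =
  match e with
  | Int n  -> VInt n
  | Bool b -> VBool b
  | Add (e1, e2) ->
      (match eval e1, eval e2 with
       | VInt n1, VInt n2 -> VInt (n1 + n2)
       | _ -> raise Goes_wrong)            (* e.g. adding a boolean *)
  | If (e1, e2, e3) ->
      (match eval e1 with
       | VBool true  -> eval e2
       | VBool false -> eval e3
       | _ -> raise Goes_wrong)            (* e.g. branching on an int *)

(* Soundness for a type system over exp is exactly the claim that
   well-typed expressions never raise Goes_wrong. *)
let () =
  try ignore (eval (Add (Int 1, Bool true)))
  with Goes_wrong -> print_endline "this program went wrong"
```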

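And for point 3, a tiny program that never goes wrong yet is rejected by plain Hindley-Milner; accepting it statically takes a newer feature, rank-2 polymorphism, which I spell here with OCaml's polymorphic record fields. (Again, a sketch of my own, for illustration only.)

```ocaml
(* Perfectly sensible, never "goes wrong", but ill-typed under plain
   Hindley-Milner: inside the body, g is forced to one monomorphic type,
   so it cannot be applied to both an int and a bool.

     let both g = (g 1, g true)     (* rejected by Hindley-Milner *)

   In a dynamic language this just runs.  With a rank-2 annotation,
   written below as a polymorphic record field, it typechecks. *)
type poly_id = { apply : 'a. 'a -> 'a }

let both (g : poly_id) = (g.apply 1, g.apply true)

let () =
  let (n, b) = both { apply = (fun x -> x) } in
  assert (n = 1 && b)
```
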
Finally, let me add a more subjective comment: a type system is just one of many features you might weigh in choosing a programming language, and it is not independent of the other features. Choosing a type system involves tradeoffs. For example:

  • Powerful type systems are hard to understand. It’s great that intersection types can handle all strongly normalizing lambda expressions, but most people cannot comprehend the resulting types—I know I can’t.
  • Powerful type systems lack full type inference. System F is more expressive than the Hindley-Milner system, but type inference for it is undecidable, so I wouldn’t want to use System F in practice: figuring out and writing down all of the necessary types would be awful. Foundational proof-carrying code is enormously powerful, but most people cannot program it by hand (it is for compilers and compiler writers).
  • Type inference can be expensive. Hindley-Milner type inference takes exponential time in the worst case, and the inferred types themselves, written out in full, can be doubly exponential in the size of the program, so it is unsuitable for some programs. Many useful static analyses face this limitation (see the sketch just below).
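To see that blowup concretely, here is the classic construction (a standard example, sketched in OCaml): each extra definition applies the previous one twice, so the fully written-out principal type squares at every step even though the program grows by only one line.

```ocaml
(* The written-out principal types square at every step:
   f0's result has 2 leaves, f1's has 4, f2's has 16, f3's has 256,
   and one more definition would give 65536. *)
let f0 = fun x -> (x, x)
let f1 = fun y -> f0 (f0 y)
let f2 = fun y -> f1 (f1 y)
let f3 = fun y -> f2 (f2 y)
(* Ask a toplevel for f3's type: it prints an arrow type whose result
   is a tuple containing 256 copies of the type variable. *)
```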

In this spectrum of tradeoffs, both dynamic and static languages are legitimate choices.