Programming language researchers love to throw each other into the Turing tar pit. This is poor scholarship.
Let's use a post by Bob Harper as a case study. I'm going to pick on Bob because he can take it, and because he's one of the very few programming language researchers writing an interesting blog.
The title of Bob's post is “Dynamic languages are static languages”. Right away, I have to wonder if Bob is going to throw dynamic languages into the tar pit—that is, denigrate them because they can be implemented in some other language. Per my previous post, I would not consider this to be a sufficient or even necessary condition for dismissing them.
I think he approaches this several times. Taking a few quotes out of context:
Dynamic typing is but a special case of static typing
Something can hardly be opposed to that of which it is but a trivial special case.
Your “dynamic language” has static types; the problem is that it has only one.
This is what I call tar-pit thinking. We are talking about type systems, so it is not Turing-tar-pit thinking, because type systems are usually not Turing complete. Nevertheless, it fits the general template: “your system is not interesting because my system subsumes it”.
If this were Bob's complete argument, I would have to reject it. He has more to say, however, and some of it is good:
To borrow an apt description from Dana Scott, the so-called untyped (that is “dynamically typed”) languages are, in fact, unityped. Rather than have a variety of types from which to choose, there is but one!
And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single type!
you are depriving yourself of the ability to state and enforce the invariant that the value at a particular program point must be an integer. For another, you are imposing a serious bit of run-time overhead to represent the class itself (a tag of some sort) and to check and remove and apply the class tag on the value each time it is used.
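To make the “unityped” point concrete, here is a minimal sketch (mine, not Bob's or any real runtime's; all names are hypothetical) of what the single universal type looks like when you write out the tags explicitly, say in TypeScript:

```typescript
// The one "type" of a unityped language, written out as a tagged union.
// Illustrative sketch only; real runtimes use lower-level representations.
type Dyn =
  | { tag: "int"; val: number }
  | { tag: "str"; val: string };

// Every primitive operation must check the tags at run time, strip them,
// and re-apply a tag to the result. This is the overhead Bob describes.
function addDyn(a: Dyn, b: Dyn): Dyn {
  if (a.tag !== "int" || b.tag !== "int") {
    throw new TypeError("expected integers");
  }
  return { tag: "int", val: a.val + b.val };
}
```

Nothing in the language stops you from passing a string where an integer is needed; the mistake only surfaces as a run-time tag failure inside `addDyn`.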
I have no argument with this, but remember, I am critiquing the methodology of Bob's critique; the fact that we are talking about static and dynamic languages has nothing to do with it. At this meta-level, Bob's argument takes the following two-part form:
1. X subsumes/generalizes/can implement Y, so Y is not worthwhile.
2. X has some advantages over Y.
Part (1) is tar-pit thinking, and it is not valid. It might be interesting that X subsumes/generalizes/can implement Y, but that does not imply that Y is not worthwhile.
Part (2) is a nice argument for X being worthwhile, but it doesn't imply that Y has no advantages over X. It's possible that both X and Y are worthy of interest.
To see how this line of reasoning fails, consider a different choice for X and Y:
Specious argument: Impure languages are pure languages
- An impure language is just a pure language plus Haskell-style monads.
- In a pure language plus monads, the type of a program indicates whether or not it has side effects. In impure languages like ML, you cannot specify that a program has no side effects. Why would you want to give up this freedom? Pure languages are better than impure languages because CONCURRENCY/PARALLELISM and because REFERENTIAL TRANSPARENCY.
Therefore, Haskell FTW.
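The “pure language plus monads” move can itself be sketched in a few lines (again my own illustrative encoding, not anyone's actual semantics): model an effectful computation as a thunk, so the type alone tells you whether a function can perform effects.

```typescript
// Hypothetical encoding: IO<A> is a *description* of an effectful
// computation producing an A; a plain A is guaranteed effect-free.
type IO<A> = () => A;

const pureIO = <A>(a: A): IO<A> => () => a;
const bind = <A, B>(m: IO<A>, f: (a: A) => IO<B>): IO<B> =>
  () => f(m())();

// The type says `square` cannot have side effects...
const square = (n: number): number => n * n;

// ...while `logged` is visibly effectful: constructing it does nothing;
// effects happen only when the resulting thunk is forced.
const logged = (n: number): IO<number> =>
  bind(pureIO(n), (x) => () => { console.log(x); return x; });
```

This is exactly the property the specious argument trades on: in ML you cannot write a type that rules out side effects, while here `number` and `IO<number>` make the distinction. That the encoding is this easy does not make impure languages worthless, which is the point.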
This argument fits the two-part template just as perfectly as Bob's argument does. In fact, it seems better than Bob's argument because monads are actually part of the Haskell language definition, and they are used every day by ordinary Haskell programmers who know nothing about type theory. In contrast, Bob is really glossing over some important details when he says,
languages such as ML or Haskell, have no trouble handling dynamic modes of programming.
The cop-out here is that Bob should not be the one deciding what constitutes “dynamic modes of programming”. That's a straw man. To really convince me, he would have to show that it is straightforward to take a program written by an ordinary Scheme programmer and translate it into ML, by hand. In my experience, that's not fun or easy, and I have been programming in ML since 1985. Sure, you can write a Scheme interpreter in ML, but that's exactly the Turing tar pit: you can also write an ML interpreter in Scheme.