I found a pretty good quote about dynamic vs. static typing today, from Dave Thomas (one of the Pragmatic Programmers and an advocate for the Ruby programming language):

For a long time, developers felt that the type-safety of static languages would mean their code was more reliable.

That seems pretty intuitive. But increasingly, people are finding that not to be the case. They’re finding that the productivity gains they get from dynamic languages are enormous, and that type safety is generally not an issue. Sure, it is theoretically possible for you to have a variable called ‘person’ but discover at runtime that it’s referencing an object of class PurchaseOrder. But it just doesn’t happen in practice.

Indeed. As you all know, I’m a quantitative kind of guy. I believe the effort expended to address a problem should be proportional to the size of the problem (likelihood of occurrence times severity of consequences times magnitude of effect, or something like that). Are type errors that big a problem? How often does a type error turn out to be a real error, rather than an instance of legitimate polymorphism (“if it has behavior X, I don’t care what type it is”), and still slip past even the most rudimentary unit tests? How much effort is wasted tweaking declarations and performing casts and conversions to deal with something that was never going to be a problem anyway? Are strong compile-time type systems really anything more than a crutch for the kind of programmer who doesn’t write unit tests? The more I think about it, the more I share Dave’s disdain for type bondage.
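
To make that concrete, here is a minimal sketch in Python (a dynamic language standing in for Ruby; the Person, PurchaseOrder, and greeting names are hypothetical, purely for illustration). The “legitimate polymorphism” case is plain duck typing: the caller only cares that the object responds to display_name(). And the genuine mix-up, the ‘person’ that is really a PurchaseOrder, is exactly the kind of failure a trivial unit test trips over on its first run.

```python
import unittest


class Person:
    def display_name(self):
        return "Alice"


class PurchaseOrder:
    def total(self):
        return 99.95


def greeting(someone):
    # Duck typing: anything that responds to display_name() works here,
    # with no declarations, casts, or conversions.
    return "Hello, " + someone.display_name()


class GreetingTest(unittest.TestCase):
    def test_greets_anything_with_a_display_name(self):
        self.assertEqual("Hello, Alice", greeting(Person()))

    def test_wrong_class_fails_loudly_at_first_use(self):
        # The "'person' is really a PurchaseOrder" mix-up isn't subtle:
        # it raises AttributeError the first time the call runs.
        with self.assertRaises(AttributeError):
            greeting(PurchaseOrder())


if __name__ == "__main__":
    unittest.main()
```

Run it with python -m unittest and both points show up at once: the polymorphic call needs no type annotations at all, and the wrong-class case blows up with an AttributeError the very first time the code executes, which is all the assertRaises test is there to document.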