Economists do it with models


Economic models | Big questions and big numbers

But the policymakers who swallow these simulations have little way of knowing what is driving the results: is it deep theory, solid data, or arbitrary pruning? Sometimes the model-maker himself does not know.

I once had to use some I/O modeling software to assess the economic impacts of a particular program. Actually, I was revising the numbers so that they could be compared with the estimated impacts of a similar program in a different year. I had no idea how to use the software, and the user's manual was thicker than my attention span would allow. So my tactic was to replicate the results of the old estimation, which I hadn't performed myself, and then apply the same procedure to the new data. So far, so good.
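For readers unfamiliar with it, "I/O" here is input-output modeling in the Leontief tradition: the software ultimately solves (I − A)x = d, where A holds the inter-sector input coefficients, d is final demand, and x is the total output each sector must produce. The arbitrary assumptions live in A. A minimal two-sector sketch, with entirely made-up coefficients chosen only for illustration:

```python
# Two-sector Leontief input-output sketch (illustrative numbers only).
# A[i][j] = dollars of sector i's output needed per dollar of sector j's output.
A = [[0.2, 0.3],
     [0.1, 0.4]]
d = [100.0, 50.0]  # final demand for each sector's output

# Solve (I - A) x = d for total output x, using the 2x2 inverse directly.
a, b = 1 - A[0][0], -A[0][1]
c, e = -A[1][0], 1 - A[1][1]
det = a * e - b * c
x = [(e * d[0] - b * d[1]) / det,
     (-c * d[0] + a * d[1]) / det]

# x exceeds d because producing output consumes other sectors' output too;
# the gap between x and d is the "impact" such software reports.
print(x)
```

Change a single coefficient in A and the implied multipliers move, which is exactly why it matters whether those coefficients rest on data or on pruning.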

But when a paper containing these results was submitted to a journal and peer-reviewed, there was uncertainty about whether the initial procedure was correct: does this number already include that number? Are these assumptions about the different economic sectors correct? Nervous, I pored over the user manual (anxiety can lengthen my attention span), trying to connect the basic mathematics to the actual inner workings of the software. In the end, I think I found the correct interpretation of the results, although the model still rested on arbitrary assumptions. Worse, eliminating those assumptions would have required a whole host of new, even more arbitrary assumptions, ones that would have made the model seem more comprehensive but, to my mind, even less accurate.

And I have (some) experience with this kind of thing. Now imagine a legislator with a business background making important decisions on the basis of these kinds of models. And if the experts themselves disagree, what good is deference to expertise?

