Last week the National Academies Press released a symposium report entitled Avoiding Technology Surprise for Tomorrow’s Warfighter (see: www.nap.edu/catalog/12735.html), part of an ongoing project of the National Research Council’s Air Force Studies Board. Trying to minimize the degree to which one is caught off guard by other states’ technological developments (and, more dauntingly, by how those new technologies will be used) is certainly a laudable goal, but the report revives my concern about whether we are fooling ourselves about our ability to make accurate predictions in the domain of social activity. Interestingly, the report makes this very point: predicting which technologies will be developed is not the hard part; determining how they will be used is. The latter has a lot to do with behavior and human decision-making, subjects on which humanity seems to have far less of a handle than it does on modeling the physical world (spoken by someone whose education is in Economics and Political Science). We still don’t have a definite answer to the most fundamental question relevant to this discussion: does free will exist?
This is not to say that we are outstanding at getting the technology piece right. There are many hilarious examples of both over- and under-prediction of technological development. On the over-prediction side, Alex Lewyt, president of a vacuum cleaner company, said in 1955 that “nuclear-powered vacuum cleaners will probably be a reality in ten years.” On the under-prediction side, there is, of course, the famous Ken Olsen quote that “there is no reason anyone would want a computer in their home.”
It seems to me that we (as a society through tax money) have spent tons of money on forecasting models that don’t work. We do this because nothing scares humanity like uncertainty. I think we may be more terrified of what we cannot know than any certain calamity imaginable. I’m not an anthropologist or a geneticist, but I would guess that this visceral fear of the unknown is probably evolutionarily hard-wired into us. At any rate, it is certainly engrained.
I have seen at least two or three different papers that were all essentially probabilistic models of the likelihood that we will suffer a nuclear terrorist attack within “x” years. At the risk of creating more enemies than my al-Megrahi release post, these papers are complete crap. Mathematically they are invariably sound, as they are usually fairly simple and straightforward probabilistic models. However, when the authors pluck probability figures from the air to insert into the model, they are just building a smooth-running garbage-in, garbage-out machine.
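To make the garbage-in, garbage-out point concrete, here is a minimal sketch of the kind of calculation these papers tend to rest on. It is not drawn from any of the papers in question; the model form and every number in it are invented for illustration, which is precisely the point. The arithmetic is trivially sound, but the answer is whatever the chosen input makes it.

```python
# Toy illustration (assumed model, not from any actual study):
# P(at least one attack within x years) = 1 - (1 - p)**x,
# where p is an assumed annual attack probability and years are
# treated as independent trials. The values of p below are made up.

def prob_attack_within(years: int, annual_prob: float) -> float:
    """Probability of at least one attack in the given window,
    assuming independent, identically distributed yearly trials."""
    return 1 - (1 - annual_prob) ** years

# The same sound formula yields wildly different "findings"
# depending entirely on which p the author plucks from the air.
for p in (0.001, 0.01, 0.05, 0.10):
    print(f"assumed annual p = {p:.3f} -> "
          f"P(attack within 10 years) = {prob_attack_within(10, p):.1%}")
```

Run it and the ten-year probability ranges from about 1% to about 65%, depending on nothing but the assumed input. The machinery runs smoothly either way.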
I am not saying that we should give up on developing a better predictive ability, but I scratch my head at the fact that we keep paying people to misapply the same methods to similar problems. Nor am I a critic of probabilistic and statistical models per se; we just need to know which problems they work for and which they don’t. Even with all the popular works on the limitations of probabilistic models (perhaps most famously Nassim Taleb’s The Black Swan), we remain enamored of applying these methods to problems for which they lack utility.
Unfortunately, I think that we will continue to give big grants to people to build fallacious predictive models, and will probably continue to spend far too little on models of how to achieve an optimal outcome when uncertainty is a given. The former give people a measure of comfort, and policymakers can readily understand their output. Policymakers would also usually rather throw money at an activity that offers [false] hope of preventing a calamity than at managing the consequences of a disaster that has already transpired. There is little support for activities that start from the assumption that things will happen that catch us off guard, and that we need to minimize their impact.