The Erroneous Evaluation Slippery Slope

[This month, I’m serializing my 2003 Harvard Law Review article, The Mechanisms of the Slippery Slope.]

Experience with a policy can change people’s empirical judgments about policies of that sort, and this can of course be good. Sometimes, though, people learn the wrong lesson, because they err in evaluating an experiment’s results. For instance, suppose that after A is enacted, good things happen: stringent enforcement of a drug ban is followed by reduced drug use; an educational reform is followed by higher test scores; a new gun law is followed by lower crime rates.

People might infer that A caused the improvement, even if the true cause was different. Crime or drug use might have fallen because of demographic shifts. Test scores might have risen because of the delayed effects of past policy changes. The furor that led to enacting this policy might also have produced other policies (such as more efficient policing), and those policies might have caused the improvement. But because A’s enactment was correlated with the improvement, people might incorrectly assume that A caused the improvement, and thus support a still more aggressive drug enforcement strategy, educational reform, or gun control law (B).

Those who are skeptical about A can argue that correlation doesn’t necessarily mean causation, and that post hoc ergo propter hoc (“after this, therefore because of this”) is a fallacy. But, as with the is-ought fallacy, the fact that philosophers have had to keep condemning this fallacy for over 2,000 years shows that it’s not an easy habit of mind to root out.

Moreover, as with the is-ought fallacy, post hoc ergo propter hoc may correspond to an often non-fallacious heuristic. It may be rational for people to assume, as a general matter, that when a legal change is followed by a good result, the result probably flowed from the change, and yet mistaken for them to believe this in a particular case. If we have reason to anticipate that voters or legislators who follow this heuristic will indeed draw a mistaken inference from the outcome of decision A, that may be reason for us to oppose A.

This concern about erroneous evaluation of decision A might be exacerbated, or mitigated, by two kinds of circumstances. First, we might foresee that people will evaluate certain changes using some incomplete metric that ignores the changes’ costs and focuses disproportionately on their benefits. The benefits might be more quickly seen, more easily quantifiable, or otherwise more visible than the costs. The benefits might be felt by a more politically powerful group than the one that bears the costs. The benefits might be deeply felt by easily identifiable people, while the costs might be more diffuse, or might be borne by people who aren’t even aware of them. {Of course, if the harms flowing from decision A are more visible than the possible benefits, then A’s net benefits may be underestimated. If that’s so, then we needn’t worry as much that an improper evaluation of A’s effects will lead to greater enthusiasm for implementing B.}

Second, we might reasonably doubt the impartiality of those who will play leading roles in evaluating A’s effects. Most new laws have some influential backers (whether media, government agencies, or interest groups), or else they wouldn’t have been enacted. These influential authorities will want their favorable predictions to be confirmed, so we might suspect that they will consciously or subconsciously err on the side of evaluating A favorably. B might then be adopted based on an unsound evaluation of A’s benefits. {Again, though, the opposite may also be true: if we know that, say, the media is generally against proposal A, then we shouldn’t worry much about an improper evaluation of A leading to further step B—if A is seen as a success even by a generally anti-A media, then it probably is indeed a success, and perhaps the further extension to B is therefore justified.}

This danger suggests that we might want to ask the following when a policy A is proposed:

[1.] Is there some other trend or program that might yield benefits that could be erroneously attributed to A?

[2.] Is there reason to think that measurements of A’s effectiveness will be inaccurate because they underestimate some costs or overestimate some benefits?

[3.] Do we distrust the objectivity and competence of those who will play leading roles in evaluating A’s effects?

[4.] Have the effects of similar proposals been evaluated incorrectly in the past?

[5.] Are there ways to reduce the risk of erroneous evaluation? For instance, opponents of B might want to negotiate for the inclusion of a sound evaluation system in the proposal. There will doubtless be debate about which evaluation system is best, but the opponents of B may have more power to insist on a system that’s acceptable to them while A is still being debated.

If any of the answers to the first four questions is “yes,” that might give those who oppose B reason to also oppose A, at least unless they can find—per question 5—some way to decrease the risk of the erroneous evaluation slippery slope.
