Simons on the Precautionary Principle
Kenneth Simons, the distinguished tort scholar from Boston University, writes with some comments about the precautionary principle. For my post, go here. Ken's comments are an excerpt (which was only very slightly revised in the published article) from Kenneth W. Simons, Negligence, 16 Social Philosophy & Policy 52 (1999):
Is it possible to develop a clear, nonconsequentialist formula for negligence, one that accommodates competing values but avoids the problems of a pure (or even a distribution-sensitive) maximizing approach? Consider two efforts--a “disproportion” test, and a “freedom v. security” balancing test. I will conclude that these efforts, while promising, are inadequate. The first is too ill-defined, while the second is too reductionist to capture the full array of values that should be balanced.
a. Disproportion test. One possibility is a disproportion test. On this approach, if an injurer’s risky conduct would expose potential victims to expected risks of P x L and could be avoided only at marginal cost B, then, in order for the injurer to be permitted to impose the risk, B must not only be greater than P x L, it must be much (or disproportionately) greater. This could also be called a “thumb on the scale” test: in weighing the potential victim’s interest in personal security against the potential injurer’s interest in freedom of activity to impose risks, we should place a (heavy) thumb on the scale, giving special weight to the interest in personal security.
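To make the structure of the two tests concrete, here is a minimal sketch in Python. The disproportion factor k is a hypothetical multiplier; the excerpt does not commit to any particular value:

```python
# Minimal sketch: ordinary Hand-formula balancing versus the
# "disproportion" (thumb on the scale) test. The factor k is a
# hypothetical multiplier; the text specifies no particular value.

def hand_permits(p: float, loss: float, burden: float) -> bool:
    """Ordinary balancing: the risk may be imposed whenever the burden
    of the precaution exceeds the expected harm (B > P x L)."""
    return burden > p * loss

def disproportion_permits(p: float, loss: float, burden: float,
                          k: float = 3.0) -> bool:
    """Thumb on the scale: the risk may be imposed only if the burden is
    disproportionately greater than the expected harm (B > k * P x L)."""
    return burden > k * p * loss

# A 1-in-1,000 risk of a $100,000 injury (expected harm $100),
# avoidable at a precaution cost of $150:
print(hand_permits(0.001, 100_000, 150))           # True:  B > P x L
print(disproportion_permits(0.001, 100_000, 150))  # False: B < 3 * P x L
```

On this sketch, any precaution whose cost falls between P x L and k times P x L is required under the disproportion test but not under ordinary balancing; that gap is exactly where the indeterminacy discussed next arises.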
These tests sound plausible and appealing, but, unless substantially recast, they provide a useless criterion. If we have identified the appropriate factors to balance, and if the method of balancing is also justifiable, then these tests say the following: one should not take a risk (as opposed to taking a precaution against the risk) simply because the advantages of taking the risk are greater than the disadvantages. Rather, one should take such a risk only if the advantages of doing so are much greater than the disadvantages (normally, only if the benefits to the injurer are much greater than the expected injuries to victims, discounted according to their probability).
This approach is either indeterminate or irrational. For unless one has a common metric or other justifiable method for measuring the competing interests or values, how does one know whether the interest in physical security and safety is “just” weightier than the interest in freedom of activity, as opposed to “much” weightier, so as to apply the “disproportion” or “much weightier” criterion? On the other hand, if one does have a common metric for measuring the competing interests, or if one does have some other justifiable method of balancing, why shouldn’t the actor simply choose the alternative that furthers the “weightier” value, even if that value is only weightier by a peppercorn?
Let me be more specific. Is the interest in avoiding the risk of having one’s arm broken “usually” greater than the interest in driving 10 miles per hour faster, or “usually” greater than the interest in avoiding the expense of a softer bumper? These questions are meaningless unless we specify more clearly both the degree of risk of a broken arm, and the disadvantages of taking a precaution. Yet once we specify these factors, and adopt a justifiable method of balancing, shouldn’t we indeed balance “at the margin”? That is, shouldn’t we examine whether the advantages of any particular action (even a narrowly defined action) exceed the disadvantages?
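A minimal sketch of what balancing “at the margin” means here, with entirely hypothetical numbers standing in for the driving example:

```python
# Marginal balancing: evaluate each narrowly defined increment of
# risk-taking on its own terms. All figures are hypothetical
# illustrations, not numbers drawn from the article.

# (speed increment, marginal benefit, marginal expected harm)
increments = [
    ("+10 mph", 40.0, 15.0),  # benefit clearly exceeds expected harm
    ("+20 mph", 25.0, 22.0),  # a close, "marginal" case
    ("+30 mph", 12.0, 45.0),  # expected harm dominates
]

for label, benefit, expected_harm in increments:
    permitted = benefit > expected_harm
    print(f"{label}: permitted={permitted}")
```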
I suspect that the worry about weighing “at the margin” is a legitimate concern about turning moral analysis into a bloodless form of calculation. What one should do should not depend on plugging numbers into a formula. And we should often be suspicious of methodologies that purport to balance along a “razor’s edge,” such that trivial factual differences in the weight of a given factor render an otherwise permissible action impermissible (or vice versa).
These concerns are well-founded if the most justifiable method of balancing requires a strong form of commensurability, i.e., translation of all values into a single metric such as money or wealth. But weaker forms of commensurability are more plausible for most moral decisions, including decisions about risky alternatives. For example, consider the question whether a doctor should disclose to a patient all adverse risks of medical treatment of which the doctor is aware. A range of rules is possible, from a rule of no disclosure (if the doctor believes that nondisclosure of a particular risk is in the best interest of the patient), to a rule of relatively full disclosure (of all risks that most patients would consider material), to a rule of disclosure tailored to the second-order preferences of patients (i.e., disclosure of whatever scope of risks the patient herself prefers to be disclosed). These different rules embody different conceptions of the proper scope of patient autonomy and physician discretion in decisionmaking about medical risks. Whether a given risk should be disclosed in a given case is much more likely to depend on these subtle value judgments than on the precise magnitude of the risk or on the precise financial or temporal burden to the doctor.
At the same time, however, even this more qualitative form of balancing will be sensitive to facts. Accordingly, close questions will sometimes arise about whether, for example, a particular risk is one that most patients would consider material. If we conclude that a doctor should disclose a 1% risk that hernia surgery will result in permanent numbness at the location of the surgery, but we find this a very close question, then the doctor might have no duty to disclose a 0.5% risk. In this sense, “marginal” decisions will still occur.
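A minimal sketch of the point that even a qualitative rule, once settled, still produces marginal cases; the 1% cutoff is a hypothetical stand-in for “a risk most patients would consider material”:

```python
# Even a qualitative materiality rule, once fixed, draws a line that
# factually similar cases fall on either side of. The 1% threshold is
# a hypothetical stand-in for "material to most patients."

MATERIALITY_THRESHOLD = 0.01  # hypothetical cutoff

def must_disclose(risk_probability: float) -> bool:
    return risk_probability >= MATERIALITY_THRESHOLD

print(must_disclose(0.01))   # True:  the 1% numbness risk is disclosed
print(must_disclose(0.005))  # False: the otherwise similar 0.5% risk is not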
The “thumb on the scale” approach might also be designed to express special concern for one value in the balance, relative to some other, deficient way of valuing it. But this concern can be accommodated in a balancing test without suggesting the implausible conclusion that there will never be marginal cases. For example, one might conclude that the social value to be given to patient autonomy is greater than the value that most patients actually express in the marketplace (either because of marketplace distortions in capturing the private valuations of patients, or because recognizing patient autonomy is a collective social good, the value of which transcends the sum of individual valuations). Thus, even if patient surveys reveal that most patients only strongly care about risk information that has at least a 20% probability of changing their mind about treatment, the “thumb” might justify a rule that doctors disclose risk information with at least a 10% probability of changing a patient’s mind. Notice, however, that this use of a “thumb on the scales” is much more limited than the general use described earlier.
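A minimal sketch of this more limited “thumb”: a survey-derived disclosure threshold is adjusted by a policy weight for patient autonomy. The 0.5 weight is a hypothetical choice that happens to produce the 20%-to-10% adjustment in the example:

```python
# Limited "thumb on the scales": a disclosure threshold revealed by
# patient surveys is adjusted downward to reflect the social value of
# patient autonomy. The 0.5 weight is a hypothetical choice producing
# the 20% -> 10% adjustment described above.

SURVEY_THRESHOLD = 0.20  # patients report strongly caring only above this
AUTONOMY_WEIGHT = 0.5    # hypothetical policy adjustment

def adjusted_threshold() -> float:
    return SURVEY_THRESHOLD * AUTONOMY_WEIGHT

def must_disclose(prob_changes_mind: float) -> bool:
    return prob_changes_mind >= adjusted_threshold()

print(adjusted_threshold())  # 0.1: disclose risks with at least a 10%
                             # chance of changing the patient's decision
print(must_disclose(0.15))   # True under the adjusted rule, though False
                             # under the raw survey threshold
```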
If you want to pursue these issues, Ken suggests two articles:
1. Mark Geistfeld, Reconciling Cost-Benefit Analysis with the Principle That Safety Matters More than Money, 76 N.Y.U. L. REV. 114 (2001).
2. Gregory Keating, Pressing Precaution Beyond the Point of Cost-Justification, 56 VAND. L. REV. 653 (2003), available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=424609.