**Introduction**

First year students soon learn that the law must deal with uncertainty--imperfect knowledge about the past, present, or future. What level of precaution is required by the duty of reasonable care when engaging in behavior that might or might not cause a harm? How should regulators deal with a new technology (e.g., genetically modified organisms when they were first introduced) when the possible harms (and benefits) are speculative? What should a jury do when conflicting evidence makes it uncertain whether a defendant is innocent or guilty? What should a judge do when crucial evidence has been destroyed and we cannot know which side it would have (decisively) favored?

Questions like these can be made more tractable with tools developed by decision theorists, economists, policy scientists, and philosophers. In this entry in the *Legal Theory Lexicon*, we will explore three ideas, "uncertainty," "risk," and "ignorance"--and some related notions, "expected utility," "maximin," and "maximax." As always, the *Lexicon* is aimed at law students--especially first year law students--with an interest in legal theory.

**Vocabulary**

Unfortunately, the central ideas that we will investigate are not described by a standard vocabulary. So the first thing we need to do is define some terms. After that, we can discuss *alternative vocabularies* used to describe the same underlying ideas.

Here are the definitions:

- *Certainty*: The term "certainty" will be used to designate indisputable or indubitable knowledge about a state of affairs. Thus, an event is certain to occur if it is 100% likely that it will come about. A historical fact is certainly true if there is no basis for doubting or disputing its truth.
- *Uncertainty*: The term "uncertainty" will be used to designate any belief that falls short of certain knowledge. Thus, the occurrence of an event is not certain if there is a chance that it will not come about. Our knowledge about the occurrence of an event in the past is uncertain if there are (all things considered) reasons to dispute or doubt that the event occurred.
- *Risk*: Risk is quantifiable uncertainty. For example, if I toss a balanced coin, the chance of its coming up heads is 0.5 (or 50%).
- *Ignorance*: Ignorance is uncertainty that cannot be quantified. Examples are tricky here. Suppose that someone has put 100 balls in an urn. You know the balls are either white or black, but you do not know whether there are 100 black balls and 0 white balls, or 99 black balls and 1 white ball, or 98 black balls and 2 white balls, or any other combination (the sequence ends with 0 black balls and 100 white balls). What are the chances that a randomly selected ball from the urn will be black? We lack sufficient information to assign a probability. It could be 100% or 0% or any other whole number percentage in between. (Notice, however, that we know with certainty that the chances are not 3.14159265%.)

So there are two different kinds of uncertainty: risk and ignorance. "Risk" is quantifiable uncertainty, and "ignorance" is unquantifiable uncertainty.

Unfortunately, this vocabulary is not used by everyone who writes about uncertainty. In particular, many economists use the word "uncertainty" to mean both a lack of certainty and "ignorance" (as I have defined it). This terminological carelessness can be traced back to an important book by Frank Knight:

Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitive type. (Knight, 1921, p. 19)

Knight uses the word "risk" to name measurable uncertainty and "uncertainty" to name unmeasurable uncertainty. As long as you are careful to distinguish the two different senses of the word uncertainty, you can avoid confusion, but the best way to avoid terminological confusion is to avoid ambiguity altogether. For the remainder of the *Lexicon*, I will use "uncertainty," "risk," and "ignorance" as defined above.

**Varieties of Risk and Ignorance**

I've defined risk and ignorance as if they were mutually exclusive, but that is a bit too simple. For example, we might have some knowledge about probabilities: we might know that the probability of an event falls within a range. Consider our urn of white and black balls. Suppose that you know that there are at least ten white balls, but nothing more than that. There could be 100 white balls or 99 or 98 or any other number that is no greater than 100 but no less than 10. In these circumstances, you know that there is at least a 10% chance of drawing a white ball and no more than a 90% chance of drawing a black ball. That means there are some bets that you should take. If someone offers you 100 to 1 odds on a bet that a white ball will be drawn, you should jump on it. But what if someone offered you 1 to 1 odds on a bet that a black ball will be drawn? It doesn't seem like you have enough information to calculate the odds.
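The force of a probability bound can be made concrete with a short sketch. This is an illustrative calculation only; the `expected_value` helper and the dollar stake are my own assumptions, not part of the original example:

```python
def expected_value(p_win: float, odds: float, stake: float = 1.0) -> float:
    """Expected profit of a bet: win odds * stake with probability p_win,
    otherwise lose the stake."""
    return p_win * odds * stake - (1.0 - p_win) * stake

# White balls: anywhere from 10 to 100 of the 100 balls in the urn.
ev_white_worst = expected_value(10 / 100, odds=100)   # fewest whites: about $9.10
ev_white_best = expected_value(100 / 100, odds=100)   # most whites: $100.00

# Black balls: anywhere from 0 to 90 of the 100 balls.
ev_black_worst = expected_value(0 / 100, odds=1)      # -$1.00
ev_black_best = expected_value(90 / 100, odds=1)      # about $0.80
```

Because the white-ball bet is profitable even at the lower bound, the partial information settles that decision; the black-ball bet's expected value straddles zero (from -$1.00 to $0.80), so the bound is not enough to decide.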

Likewise, we can imagine having second-order information about probabilities. You might know the probabilities for certain. Or you might know that there is a 0.5 chance that our hypothetical urn contains 100 white balls and a 0.5 chance that it contains 10 white balls. The second-order information would permit you to calculate the likelihood of drawing a white ball. It is (0.5 * 1.0) + (0.5 * 0.1) = .55 or 55%. In that case, we would still have a condition of risk. But we can also imagine scenarios in which we have incomplete second-order information about probabilities--and hence a combination of risk and ignorance.
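The arithmetic in this paragraph can be checked with a short sketch (using exact fractions so that no floating-point rounding obscures the result):

```python
from fractions import Fraction

# Second-order probabilities: which urn we face is itself a matter of chance.
scenarios = [
    (Fraction(1, 2), Fraction(100, 100)),  # 0.5 chance: urn holds 100 white balls
    (Fraction(1, 2), Fraction(10, 100)),   # 0.5 chance: urn holds 10 white balls
]

# Collapse the second-order information into a first-order probability.
p_white = sum(p_scenario * p_draw for p_scenario, p_draw in scenarios)

print(p_white)         # 11/20
print(float(p_white))  # 0.55
```

Because the second-order probabilities are themselves known exactly, the situation collapses back into ordinary risk, just as the text says.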

For the remainder of the *Lexicon* entry, I will assume that risk and ignorance are mutually exclusive and I will ignore the complexities introduced by second-order information about risk and ignorance.

**Risk and Expected Utilities**

Decision theory is the discipline that studies rational choice using formal (e.g., mathematical) tools. If we put aside the possibility of risk aversion (a desire to avoid risk), then rational choice under conditions of risk requires that we maximize the expected utilities of our actions.

Take a simple example. Suppose I can bet $1.00 on a coin toss. There is a 0.5 chance that the coin will come up heads and a 0.5 chance that it will come up tails. At even odds, the expected value of a bet on heads is zero, so if I am offered odds that are better than even (say 2 for 1), I should take the bet. There is a 0.5 chance of heads with a payoff of $2.00 (and 0.5 * $2.00 = $1.00), and there is a 0.5 chance of tails with a loss of $1.00 (and 0.5 * -$1.00 = -$0.50). That means that the expected value of the bet is $0.50, the sum of $1.00 and negative $0.50. If I decline the bet, the expected value is $0.00, so if I want to maximize my expected utility, I should take the bet. (I am assuming that I derive utility from the opportunities created by having the additional $2.00 as compared to the alternative of no gain.)
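The arithmetic of the example can be written out as a tiny sketch:

```python
# Expected value of the 2-for-1 coin-toss bet described above:
# each outcome's payoff is discounted by its probability, then summed.
outcomes = [
    (0.5, 2.00),   # heads: win $2.00
    (0.5, -1.00),  # tails: lose the $1.00 stake
]

ev_bet = sum(p * payoff for p, payoff in outcomes)
ev_decline = 0.0

print(ev_bet)  # 0.5
# Taking the bet maximizes expected value: $0.50 beats $0.00.
```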

The expected utility of a decision is the sum, over all possible outcomes of the decision, of the payoff associated with each outcome discounted by the probability of that outcome's occurrence. Maximizing expected utility is simply making the choice that will produce the largest expected utility.

**Decision Making under Conditions of Ignorance**

Suppose we have to make a decision under conditions of ignorance. We now lack the information to maximize our expected utility. So how might we decide when we lack information about the probabilities of possible outcomes?

For example, suppose that the introduction of a genetically modified organism (GMO) into agriculture would produce one of two outcomes. The first outcome is an increase in the production of a valuable foodstuff with no negative side effects. The second outcome is an environmental catastrophe that causes the deaths of millions of persons. We don't have enough information to know how likely the second outcome is, but we do know that it is possible. What is the rational choice in this situation?

Decision theorists have suggested a number of possibilities. Here are some:

- Assume that each possible outcome is equally likely. That assumption permits us to calculate expected utilities. We would weigh the benefits of the GMO discounted by 50% against the costs of the environmental catastrophe (also discounted by 50%).
- Assume the worst case scenario. Decision theorists sometimes call this second strategy the *maximin* principle: we maximize the minimum payoff. Some versions of the "precautionary principle" are versions of the *maximin* strategy. In our GMO hypothetical, this means we would assume that ecological catastrophe will occur--and presumably the rational decision would be to prevent introduction of the GMO.
- Assume the best case scenario. This strategy requires us to maximize the maximum payoff--hence it can be called the *maximax* strategy. In the GMO case, we would assume that the GMO will produce all the benefits and none of the costs--and hence the rational strategy would be to deploy the new genetically modified crop.

We can imagine more complex strategies as well. For example, we might choose the option with the highest average of its maximum and minimum payoffs.
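The rules just described can be sketched as code. The payoff numbers below are invented purely for illustration; since we are, by hypothesis, ignorant of the probabilities, each rule consults only the payoffs:

```python
payoffs = {
    # (payoff if the GMO is harmless, payoff if it causes catastrophe)
    "introduce GMO": (100, -1_000_000),
    "ban GMO": (0, 0),
}

def maximin(table):
    """Pick the action whose worst-case payoff is largest."""
    return max(table, key=lambda a: min(table[a]))

def maximax(table):
    """Pick the action whose best-case payoff is largest."""
    return max(table, key=lambda a: max(table[a]))

def equal_likelihood(table):
    """Treat every state as equally likely and maximize expected payoff."""
    return max(table, key=lambda a: sum(table[a]) / len(table[a]))

def midpoint(table):
    """Maximize the average of the best- and worst-case payoffs."""
    return max(table, key=lambda a: (max(table[a]) + min(table[a])) / 2)

print(maximin(payoffs))          # ban GMO
print(maximax(payoffs))          # introduce GMO
print(equal_likelihood(payoffs)) # ban GMO
print(midpoint(payoffs))         # ban GMO
```

Note that with only two possible states the equal-likelihood rule and the max-min average coincide; they come apart when there are three or more outcomes.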

Each of these strategies has problems. For example, the maximin principle requires us to decide solely on the basis of the worst case scenario. That might require us to forgo large benefits to avoid dangers to which we could not assign probabilities--even though we might later gain knowledge showing that the likelihood of catastrophe was vanishingly small. Once we had this knowledge, we might regret having adopted the maximin principle.

Another problem with some of these methods is that they are sensitive to variations in the way that outcomes are described and categorized. Consider the choice strategy that assumes each outcome is equally probable. Adding new categories of possible outcomes will affect the probabilities, and (as you can easily demonstrate for yourself) different ways of slicing and dicing the outcomes pie will result in different decisions becoming the rational choice. Sometimes the set of outcomes is defined naturally (heads or tails), but sometimes the outcomes are shorthand for alternatives that are complex and can be described in many different ways, each of which is correct. And even in the coin toss case, it may turn out that there are neglected possible outcomes (e.g., the coin ends up perfectly balanced on its side or is snatched in midair by a magpie).
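The sensitivity to descriptions can be demonstrated in a few lines. The payoffs here are invented; what matters is that the same underlying situation is partitioned two different ways:

```python
def equal_likelihood_ev(payoffs):
    """Expected payoff when every listed outcome is treated as equally likely."""
    return sum(payoffs) / len(payoffs)

# One act, two descriptions of the same possible futures.
coarse = [10, -12]     # two outcomes: {benefit, catastrophe}
fine = [10, 10, -12]   # the benign outcome split into two sub-descriptions

print(equal_likelihood_ev(coarse))  # -1.0: the act looks irrational
print(equal_likelihood_ev(fine))    # about 2.67: the act looks rational
```

Nothing about the world has changed between the two calculations; only the way the outcomes were carved up, yet the equal-likelihood rule flips its verdict.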

I haven't even begun to scratch the surface. It is not clear that there is any general strategy for decisionmaking under conditions of ignorance that can be justified as the rational strategy--but that is a topic for a whole discipline and not a *Lexicon* entry.

**Conclusion**

Of course, we have barely scratched the surface. Remember, the point of this *Lexicon* entry is to give you a rough understanding of uncertainty, risk, and ignorance. The difference between risk and ignorance is fundamental--and every legal theorist needs to understand its significance.

**Related Lexicon Entries**

- Legal Theory Lexicon 008: Utilitarianism
- Legal Theory Lexicon 024: Balancing Tests
- Legal Theory Lexicon 025: Social Welfare Functions
- Legal Theory Lexicon 060: Efficiency, Pareto, and Kaldor-Hicks
- Legal Theory Lexicon 064: Possibility and Necessity

**References**

- Daniel A. Farber, Uncertainty, 99 Georgetown Law Journal 901 (2011).
- Frank Hyneman Knight, Risk, Uncertainty and Profit (Houghton Mifflin Company, 1921).
- Stephen Senn, Dicing with Death: Chance, Risk and Health (Cambridge University Press, 2003).
- Uncertainty and Risk: Multidisciplinary Perspectives (Gabriele Bammer, Michael Smithson eds., Earthscan 2008).

**Resources on the Internet**

- Sven Ove Hansson, Risk, Stanford Encyclopedia of Philosophy
- Understanding Uncertainty (website produced by the Winton programme for the public understanding of risk, based in the Statistical Laboratory at the University of Cambridge).

(First created on February 10, 2013.)