CHAPTER I-3
PROBABILITY AND CHANCE: CHARACTERISTICS AND MEANING
Uncertainty, in the presence of vivid hopes and fears,
is painful, but must be endured if we wish to live
without the support of comforting fairy tales.
Bertrand Russell, A History of Western Philosophy
(New York: Simon and Schuster, 1945, p. xiv)
INTRODUCTION
The central concept for dealing with uncertainty is
probability. Hence we must inquire into the "meaning" of the
term probability. (The term meaning is in quotes because it can
be a confusing word.)
After sketching the intuitive ground from which the concept
of probability emerges, I shall suggest a theoretical concept of
probability for applied work, and then argue that the appropriate
empirical device for applying this theoretical concept in statistics
and elsewhere is an operational definition of probability. An
operational definition at one stroke cuts away the difficulties
that have arisen over the centuries in disputes among
philosophers and statisticians about the appropriate concepts and
definitions of probability.
This chapter offers a way of dealing with the issue of
probability that has been a bone of controversy for centuries.
The following chapter discusses the history and nature of that
controversy.
THE INTUITIVE GROUND OF THE CONCEPT OF PROBABILITY
You wonder: Will the footballer's kick from the 45-yard
line go through the uprights? How much oil can you expect from
the next well you drill, and what value should you assign to that
prospect? Will you be the first one to discover a completely
effective system for converting speech into computer-typed
output? Will the next space shuttle end in disaster? Your
answers to these questions rest on the probabilities you
estimate.
And you act on the basis of probabilities: You place your
blanket on the beach where there is a low probability of
someone's kicking sand on you. You bet heavily on a poker hand
if there is a high probability that you have the best hand. A
hospital decides not to buy another ambulance when the
administrator judges that there is a low probability that all the
other ambulances will ever be in use at once. NASA decides
whether or not to send off the space shuttle this morning as
scheduled.
The common meaning of the term "probability" is as follows:
Any particular stated probability is an assertion that indicates
how likely you believe it is that an event will occur.
THE THEORETICAL CONCEPT OF PROBABILITY FOR APPLIED WORK
To say that an event has a high or low probability is
equivalent to making a statement that forecasts the future or
predicts the outcome of some other event whose result is not yet
known; an example of the latter is the result of a historical
inquiry or the result of a coin flip when the coin already has
been thrown and is in your palm.
In practice, probability is stated as a decimal number
between "0" and "1," such that "0" means you estimate that there
is no chance of the event happening, and "1" means you are
certain the event will happen. A probability estimate of .2
indicates that you think there is twice as great a chance of the
event's happening as if you had estimated a probability of .1.
The probabilities associated with the possible outcomes in a
given situation sum to 1 by definition.
The idea of probability arises when you are not sure about
what will happen in an uncertain situation - that is, when you
lack information and therefore can only make an estimate. For
example, if someone asks you what your name is, you do not use
the concept of probability to answer; you know the answer to a
very high degree of surety. To be sure, there is some chance
that you do not know your own name, but for all practical
purposes you can be quite sure of the answer. If someone asks
you who will win tomorrow's ball game, however, there is a
considerable chance that you will be wrong no matter what you
say. Whenever there is a reasonable chance that your prediction
could be wrong, the concept of probability can help you.
The concept of probability helps you to answer the question,
"How likely is it that ...?" The purpose of the study of
probability and statistics is to help you make sound appraisals
of statements about the future, and good decisions based upon
those appraisals. The concept of probability is especially
useful when you have a sample from a larger set of data - a
"universe" - and you want to know the probability of various
degrees of likeness between the sample and the universe. (The
universe of events you are sampling from is also called the
"population", a concept to be discussed below.) Perhaps the
universe of your study is all high school seniors in 1994. You
might then want to know, for example, the probability that the
universe's average SAT score will not differ from your sample's
average SAT by more than some arbitrary number of SAT points -
say, ten points.
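A question of that kind can be answered by simulation. The sketch below is illustrative only: it invents a universe of 10,000 scores (the mean of 500 and standard deviation of 100 are assumptions, not real SAT data), then draws repeated samples of 50 seniors and counts how often the sample mean lands within ten points of the universe mean.

```python
import random

# Hypothetical universe of SAT scores; the mean of 500 and the
# standard deviation of 100 are illustrative assumptions.
random.seed(1)
universe = [random.gauss(500, 100) for _ in range(10_000)]
universe_mean = sum(universe) / len(universe)

# Draw many samples of 50 and count how often the sample
# average falls within 10 points of the universe average.
trials = 2_000
within = 0
for _ in range(trials):
    sample = random.sample(universe, 50)
    sample_mean = sum(sample) / len(sample)
    if abs(sample_mean - universe_mean) <= 10:
        within += 1

print("estimated probability:", within / trials)
```

With these assumed numbers the estimate comes out roughly one half; a larger sample size would push it higher.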
A probability statement is usually about the future, or
about your future knowledge of past events. (Not always, though;
historians put probabilities on the likelihoods that events
occurred in the past, and the courts do, too, though the courts
hesitate to say so explicitly.) But often one does not know the
probability of a future event, except in the case of a gambler
playing black on an honest roulette wheel, or an insurance
company issuing a policy on an event with which it has had a lot
of experience, such as a life insurance policy. Therefore, our
concept must include situations where extensive data are not
available.
The conceptual probability in any specific situation is an
interpretation of all the evidence that is then available. For
example, a wise biomedical worker's estimate of the chance that a
given therapy will have a positive effect on a sick patient
should be an interpretation of the results of not just one study
in isolation, but of the results of that study plus everything
else that is known about the disease and the therapy. A wise
policymaker in business, government, or the military will base a
probability estimate on a wide variety of information and
knowledge. The same is even true of an insurance underwriter who
bases a life-insurance or shipping-insurance rate not only on
extensive tables of long-time experience but also on recent
knowledge of other kinds. The choice of a method of estimating a
probability constitutes an operational definition of probability.
Some writers (e.g., Hacking, 1975) suggest that the concept
of probability is fairly new, emerging in recent centuries. But
in my view the concept of probability is as old as the concept of
uncertainty, which in turn is as old as the concept of certainty
- that is, they have been with us forever. (The ancient
Egyptians used gaming devices quite similar to our dice, and we
can be sure that those players thought probabilistically.) What
is new is our gradually acquiring better capacities to understand
and deal with uncertainty.
AN OPERATIONAL DEFINITION OF PROBABILITY
As we shall see in the next chapter, there has long been
controversy about what probability "is" - that is, about the
supposed properties of probability. Typically, Parzen (1960, p.
2) asks, "What is it that is studied in probability that enables
it to have such diverse applications?" He goes on to answer, "A
random (or chance) phenomenon is an empirical phenomenon
characterized by the property..." But the search for the
characterizing property or properties is fraught with unavoidable
confusion and contradiction. For example, is the distribution of
a box of 3 one-inch and 3 two-inch nails to be considered a
"random phenomenon"? What about if one draws a nail blindfolded
from the box? And if you draw with your eyes open? How about
drawing one from a box of only 3 one-inch nails? And from a box
of 1000 one-inch and 1 two-inch nails? How about drawing 100
nails from a well-shuffled box of 1000 of each length? We can
certainly make some useful distinctions among these cases, but
the distinctions will depend upon what we know and think as we
draw a nail rather than just upon the distribution of nails, and
upon what we consider variation or stability (is picking between
45 and 55 one-inch nails in 100 picks considered variation or
sameness?), among other complicating factors.
This dispute is reminiscent of the controversy about what
time "is" that held up the progress of physics until Einstein
swept away the difficulties by saying that time should simply be
defined as what one reads on a clock. This operational
definition became the keystone in a major advance in the
philosophy of science which overreached itself for a while, but
whose central idea - defining a difficult concept by the
operations used to measure an empirical proxy for the concept -
has cut through many Gordian knots of confusion in physics,
psychology, and elsewhere in science. (For two examples in
economics, the concepts of utility and product differentiation,
see Simon, 1974 and 1969). Similarly, an operational definition
of probability sidesteps the pitfalls into which probability has
for too long been mired.[1]
An operational definition is the all-important intellectual
procedure that Einstein employed in his study of special
relativity to sidestep the conceptual pitfalls into which
discussions of such concepts as probability also often slip. An
operational definition is to be distinguished from a property or
attribute definition, in which something is defined by saying
what it consists of. For example, a crude attribute definition
of a college might be "An organization containing faculty and
students, teaching a variety of subjects beyond the high-school
level." An operational definition of university might be "An
organization found in The World Almanac's listing of `Colleges
and Universities.'" (Simon, 1969, p. 18.)
P. W. Bridgman, the inventor of operational definitions, put
it that "the proper definition of a concept is not in terms of
its properties but in terms of actual operations." It was he who
explained that definitions in terms of properties had held
physics back until Albert Einstein and constituted the barrier
that it took Einstein to crack (Bridgman, 1927, pp. 6-7).
A formal operational definition of "operational definition"
may be appropriate. "A definition is an operational definition
to the extent that the definer (a) specifies the procedure
(including materials used) for identifying or generating the
definiendum and (b) finds high reliability for [consistency in
application of] his definition" (Dodd, in Dictionary of Social
Science, p. 476). A. J. Bachrach adds that "the operational
definition of a dish ... is its recipe" (Bachrach, 1962, p. 74).
The language of empirical scientific research is made up of
instructions that are descriptions of sets of actions or
operations (for instance, "turn right at the first street sign")
that someone can follow accurately. Such a set of instructions is
called an "operational definition." An operational definition contains
a specification of all operations necessary to achieve the same
result.
The language of science also contains theoretical terms
(better called "hypothetical terms") that are not defined
operationally.
The clock-reading which operationally defines time may be a
windup spring clock, an electric clock, an atomic clock, sunset
on a given date in a given place, or the postman's delivery.
Similarly, we should simply say that probability "is" (or
better, that we "define probability as") what you calculate from
the data in a life table or the experience with a slot machine,
or the number you assign to the chance that the competitor across
the street will reduce her price tomorrow. And just as different
sorts of clock proxies are used to measure time in various
circumstances in physics, different sorts of data proxies are
used to stand for probability - even for the same probability.
Back to Proxies
Example of a proxy: The "probability risk assessments"
(PRAs) that are made for the chances of failures of nuclear power
plants are based, not on long experience or even on laboratory
experiment, but rather on theorizing of various kinds - using
pieces of prior experience wherever possible, of course. A PRA
can cost a nuclear facility $5 million.
Another example: If a manager looks at the sales of radios
in the last two Decembers, and on that basis guesses how likely
it is that he will run out of stock if he orders 200 radios, then
the last two years' experience is serving as a proxy for future
experience. If a sales manager just "intuits" that the odds are
3 to 1 (a probability of .75) that the main competitor will not
meet a price cut, then all his past experience summed into his
intuition is a proxy for the probability that it will really
happen. Whether any proxy is a good or bad one depends on the
wisdom of the person choosing the proxy and making the
probability estimates.
THE VARIOUS WAYS OF ESTIMATING PROBABILITIES
How does one estimate a probability in practice? This
involves practical skills not very different from the practical
skills required to estimate with accuracy the length of a golf
shot, the number of carpenters you will need to build a house, or
the time it will take you to walk to a friend's house; we will
consider elsewhere some ways to improve your practical skills in
estimating probabilities [references in book or elsewhere]. For
now, let us simply categorize and consider in the next section
various ways of estimating an ordinary garden variety of
probability, which is called an "unconditional" probability.
Consider the probability of drawing an even-numbered spade
from a deck of poker cards (consider the queen as even and the
jack and king as odd). Here are several general methods of
estimation, the specifics of which constitute an operational
definition of probability in this particular case:
1. Experience. The first possible source for an estimate
of the probability of drawing an even-numbered spade is the
purely empirical method of experience. If you have watched card
games casually from time to time, you might simply guess at the
proportion of times you have seen even-numbered spades appear -
say, "about 1 in 15" or "about 1 in 9" (which is almost correct)
or something like that. (If you watch long enough you might come
to estimate something like 6 in 52.)
General information and experience are also the source for
estimating the probability that the sales of radios this December
will be between 200 and 250, based on sales the last two
Decembers; that your team will win the football game tomorrow;
that war will break out next year; or that a United States
astronaut will reach Mars before a Russian astronaut. You simply
put together all your relevant prior experience and knowledge,
and then make an educated guess.
Observation of repeated events can help you estimate the
probability that a machine will turn out a defective part or that
a child can memorize four nonsense syllables correctly in one
attempt. You watch repeated trials of similar events and record
the results.
Data on the mortality rates for people of various ages in a
particular country in a given decade are the basis for estimating
the probabilities of death, which are then used by the actuaries
of an insurance company to set life insurance rates. This is
systematized experience - called a frequency series.
No frequency series can speak for itself in a perfectly
objective manner. Many judgments inevitably enter into compiling
every frequency series - deciding which frequency series to use
for an estimate, and in choosing which part of the frequency
series to use, and so on. For example, should the insurance
company use only its records from last year, which will be too
few to provide as many data as would be liked, or should it also
use death records from years further back, when conditions were
slightly different, together with data from other sources? (Of
course, no two deaths - indeed, no events of any kind - are
exactly the same. But under many circumstances they are
practically the same, and science is only interested in such
"practical" considerations.)
In view of the necessarily judgmental aspects of probability
estimates, the reader may prefer to talk about "degrees of
belief" instead of probabilities. That's fine, just a long as it
is understood that we operate with degrees of belief in exactly
the same way as we operate with probabilities; the two terms are
working synonyms.
There is no logical difference between the sort of
probability that the life insurance company estimates on the
basis of its "frequency series" of past death rates, and the
manager's estimates of the sales of radios in December, based on
sales in that month in the past two years. [4]
The concept of a probability based on a frequency series can
be rendered meaningless when all the observations are repetitions
of a single magnitude - for example, the case of all successes
and zero failures of space-shuttle launches prior to the
Challenger shuttle tragedy in the 1980s; in those data alone
there was no basis to estimate the probability of a shuttle
failure. (Probabilists have made some rather peculiar
attempts over the centuries to estimate probabilities from the
length of a zero-defect time series - such as the fact that the
sun has never failed to rise (foggy days aside!) - based on the
undeniable fact that the longer such a series is, the smaller the
probability of a failure; see e.g., Whitworth, 1897/1965, pp.
xix-xli. However, one surely has more information on which to
act when one has a long series of observations of the same
magnitude rather than a short series).
2. Simulated Experience. A second possible source of
probability estimates is empirical scientific investigation with
repeated trials of the phenomenon. This is an empirical method
even when the empirical trials are simulations. In the case of
the even-numbered spades, the empirical scientific procedure is
to shuffle the cards, deal one card, record whether or not the
card is an even-number spade, replace the card, and repeat the
steps a good many times. The proportion of times you observe an
even-numbered spade come up is a probability estimate based on a
frequency series.
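That shuffle-deal-record-replace procedure is easy to mechanize. Here is a minimal sketch in Python; the card encoding (ranks 1 through 13, with the queen coded as 12 and therefore counted as even, per the convention above) is my own:

```python
import random

# A 52-card deck: ranks 1 (ace) through 13 (king) in four suits.
# Rank 12 is the queen, which the text counts as even.
deck = [(rank, suit)
        for rank in range(1, 14)
        for suit in ("spades", "hearts", "diamonds", "clubs")]

random.seed(42)
trials = 10_000
hits = 0
for _ in range(trials):
    rank, suit = random.choice(deck)    # deal one card, then replace it
    if suit == "spades" and rank % 2 == 0:
        hits += 1

print("estimated probability:", hits / trials)   # near 6/52, about .115
```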
You might reasonably ask why we do not just count the number
of even-numbered spades in the deck of fifty-two cards. No
reason at all. But that procedure would not work if you wanted
to estimate the probability of a baseball batter getting a hit or
a cigarette lighter producing flame.
Some varieties of poker are so complex that experiment is
the only feasible way to estimate the probabilities a player
needs to know.
The resampling approach to statistics produces estimates of
most probabilities with this sort of experimental "Monte Carlo"
method. More about this later.
3. Sample space analysis and first principles. A third
source of probability estimates is counting the possibilities -
the quintessential theoretical method. For example, by
examination of an ordinary die one can determine that there are
six different numbers that can come up. One can then determine
that the probability of getting (say) either a "1" or a "2," on a
single throw, is 2/6 = 1/3, because two among the six
possibilities are "1" or "2." One can similarly determine that
there are two possibilities of getting a "1" plus a "6" out of
thirty-six possibilities when rolling two dice, yielding a
probability estimate of 2/36 = 1/18.
Estimating probabilities by counting the possibilities has
two requirements: 1) that the possibilities all be known (and
therefore limited), and few enough to be studied easily; and 2)
that the probability of each particular possibility be known, for
example, that the probabilities of all sides of the dice coming
up are equal, that is, equal to 1/6.
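Such counting can also be done exhaustively by machine. The sketch below enumerates the two sample spaces just described, using exact fractions:

```python
from fractions import Fraction
from itertools import product

faces = [1, 2, 3, 4, 5, 6]

# One die: two favorable faces ("1" or "2") out of six.
p_one_or_two = Fraction(sum(f in (1, 2) for f in faces), len(faces))
print(p_one_or_two)    # 1/3

# Two dice: 36 equally likely ordered pairs, of which (1, 6)
# and (6, 1) show a "1" plus a "6".
pairs = list(product(faces, repeat=2))
favorable = sum(sorted(pair) == [1, 6] for pair in pairs)
p_one_and_six = Fraction(favorable, len(pairs))
print(p_one_and_six)   # 1/18
```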
4. Mathematical shortcuts to sample-space analysis. A
fourth source of probability estimates is mathematical
calculations. If one knows by other means that the probability
of a spade is 1/4 and the probability of an even-numbered card is
6/13, one can then calculate that the probability of turning up
an even-numbered spade is 6/52 (that is, 1/4 x 6/13). If one
knows that the probability of a spade is 1/4 and the probability
of a heart is 1/4, one can then calculate that the probability of
getting a heart or a spade is 1/2 (that is 1/4 + 1/4). The point
here is not the particular calculation procedures, but rather
that one can often calculate the desired probability on the basis
of already-known probabilities.
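Both calculations can be reproduced with exact arithmetic; the sketch assumes nothing beyond the probabilities just stated:

```python
from fractions import Fraction

p_spade = Fraction(1, 4)
p_even = Fraction(6, 13)        # 24 even-numbered cards out of 52

# Multiply for the joint event (spade AND even-numbered):
p_even_spade = p_spade * p_even
print(p_even_spade)             # 3/26, the reduced form of 6/52

# Add for mutually exclusive events (heart OR spade):
p_heart = Fraction(1, 4)
print(p_spade + p_heart)        # 1/2
```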
It is possible to estimate probabilities with mathematical
calculation only if one knows by other means the probabilities of
some related events. For example, there is no possible way of
mathematically calculating that a child will memorize four
nonsense syllables correctly in one attempt; empirical knowledge
is necessary.
5. Kitchen-sink methods. In addition to the above four
categories of estimation procedures, the statistical imagination
may produce estimates in still other ways such as a) the
salesman's seat-of-the-pants estimate of what the competition's
price will be next quarter, based on who-knows-what gossip, long-
time acquaintance with the competitors, and so on, and b) the
probability risk assessments (PRAs) that are made for the chances
of failures of nuclear power plants based, not on long experience
or even on laboratory experiment, but rather on theorizing of
various kinds - using pieces of prior experience wherever
possible, of course. Any of these methods may be a combination
of theoretical and empirical methods.
Consider the estimation of the probability of failure for
the tragic flight of the Challenger shuttle, as described by the
physicist and Nobel laureate Richard Feynman. This is a very real
case that includes just about every sort of complication that
enters into estimating probabilities.
...Mr. Ullian told us that 5 out of 127 rockets that he
looked at had failed - a rate of about 4 percent. He
took that 4 percent and divided it by 4, because he
assumed a manned flight would be safer than an unmanned
one. He came out with about a 1 percent chance of
failure, and that was enough to warrant the destruct
charges.
But NASA [the space agency in charge] told Mr. Ullian
that the probability of failure was more like 1 in 10^5.
I tried to make sense out of that number. "Did you say
1 in 10^5?"
"That's right; 1 in 100,000."
"That means you could fly the shuttle every day for an
average of 300 years between accidents - every day, one
flight, for 300 years - which is obviously crazy!"
"Yes, I know," said Mr. Ullian. "I moved my number up
to 1 in 1000 to answer all of NASA's claims - that they
were much more careful with manned flights, that the
typical rocket isn't a valid comparison, et cetera...".
But then a new problem came up: the Jupiter probe,
Galileo, was going to use a power supply that runs on
heat generated by radioactivity. If the shuttle
carrying Galileo failed, radioactivity could be spread
over a large area. So the argument continued: NASA
kept saying 1 in 100,000 and Mr. Ullian kept saying 1
in 1000, at best.
Mr. Ullian also told us about the problems he had in
trying to talk to the man in charge, Mr. Kingsbury: he
could get appointments with underlings, but he never
could get through to Kingsbury and find out how NASA
got its figure of 1 in 100,000 (Feynman, 1989, pp. 179-
180)
Feynman tried to ascertain more about the
origins of the figure of 1 in 100,000 that entered into NASA's
calculations. He performed an experiment with the engineers:
..."Here's a piece of paper each. Please write on your
paper the answer to this question: what do you think is
the probability that a flight would be uncompleted due
to a failure in this engine?"
They write down their answers and hand in their papers.
One guy wrote "99-44/100% pure" (copying the Ivory soap
slogan), meaning about 1 in 200. Another guy wrote
something very technical and highly quantitative in the
standard statistical way, carefully defining everything,
that I had to translate - which also meant about
1 in 200. The third guy wrote, simply, "1 in 300."
Mr. Lovingood's paper, however, said,
Cannot quantify. Reliability is judged from:
* past experience
* quality control in manufacturing
* engineering judgment
"Well," I said, "I've got four answers, and one of them
weaseled." I turned to Mr. Lovingood: "I think you
weaseled."
"I don't think I weaseled."
"You didn't tell me what your confidence was, sir; you
told me how you determined it. What I want to know is:
after you determined it, what was it?"
He says, "100 percent" - the engineers' jaws drop, my
jaw drops; I look at him, everybody looks at him - "uh,
uh, minus epsilon!"
So I say, "Well, yes; that's fine. Now, the only
problem is, WHAT IS EPSILON?"
He says, "10-5." It was the same number that Mr.
Ullian had told us about: 1 in 100,000.
I showed Mr. Lovingood the other answers and said,
"You'll be interested to know that there is a
difference between engineers and management here - a
factor of more than 300."
He says, "Sir, I'll be glad to send you the document
that contains this estimate, so you can understand
it."*
*Later, Mr. Lovingood sent me that report. It
said things like "The probability of mission
success is necessarily very close to 1.0" - does
that mean it is close to 1.0, or it ought to be
close to 1.0? - and "Historically, this high
degree of mission success has given rise to a
difference in philosophy between unmanned and
manned space flight programs; i.e., numerical
probability versus engineering judgment." As far
as I can tell, "engineering judgment" means
they're just going to make up numbers! The
probability of an engine-blade failure was given
as a universal constant, as if all the blades were
exactly the same, under the same conditions. The
whole paper was quantifying everything. Just
about every nut and bolt was in there: "The chance
that a HPHTP pipe will burst is 10^-7." You can't
estimate things like that; a probability of 1 in
10,000,000 is almost impossible to estimate. It
was clear that the numbers for each part of the
engine were chosen so that when you add everything
together you get 1 in 100,000. (Feynman, 1989, pp.
182-183).
We see in the Challenger shuttle case very mixed kinds of
inputs to actual estimates of probabilities. They include
frequency series of past flights of other rockets, judgments
about the relevance of experience with that different sort of
rocket, adjustments for special temperature conditions (cold),
and much, much more. There also were complex computational
processes in arriving at the probabilities that were made the
basis for the launch decision. And most impressive of all, of
course, are the extraordinary differences in estimates made by
various persons (or perhaps we should talk of various statuses
and roles) which make a mockery of the notion of objective
estimation in this case.
Working at a practical level with different sorts of
estimation methods in different sorts of situations is not new;
practical statisticians do so all the time. The novelty here
lies in making no apologies for doing so, and in raising the
practice to the philosophical level of a theoretically-justified
procedure - the theory being that of the operational definition.
The concept of probability varies from one field of endeavor
to another; it is different in the law, in science, and in
business. The concept is most straightforward in decision-making
situations such as business and gambling; there it is crystal-
clear that one's interest is entirely in making accurate
predictions so as to advance the interests of oneself and one's
group. The concept is most difficult in social science, where
there is considerable doubt about the aims and values of an
investigation. Most philosophical discussion focuses on the
roles of probability and statistics in physical science - which
is just one of the many types of situations where these concepts
are used, and certainly one of those where the waters of thought
have been most muddy.
THE DUALITY OF PROBABILITY AND PHYSICAL CONCEPTS
An important argument in favor of approaching the concept of
probability with the concept of the operational definition is
that an estimate of a probability often (though not always) is
the opposite side of the coin from an estimate of a physical
quantity such as time or space.
For example, uncertainty about the probability that one will
finish a task within 9 minutes is another way of labeling the
uncertainty that the time required to finish the task will be
less than 9 minutes. Hence, if an operational definition is
appropriate for time in this case, it should be equally
appropriate for probability. The same is true for the
probability that the quantity of radios sold will be between 200
and 250 units.
Hence the concept of probability, and its estimation in any
particular case, should be no more puzzling than is the "dual"
concept of time or distance or quantities of radios. That is,
lack of certainty about the probability that an event will occur
is not different in nature than lack of certainty about the
amount of time or distance in the event. There is no essential
difference between asking whether a part 2 inches long will be
the next to emerge from the machine, what the length of the next
part will be, or the length of the part that just emerged (if it
has not yet been measured).
The information available for the measurement of (say) the
length of a car or the location of a star is exactly the same
information that is available with respect to the concept of
probability in those situations. That is, one may have ten
disparate observations of an auto's length which then constitute
a probability distribution, and the same for the altitude of a
star in the heavens. All the more reason to see the parallel
between Einstein's concept of time and length as being what you
measure on a clock and on a meter stick, respectively - or
better, that time and length are equivalent to the measurements
that one makes on a clock or meter stick - and the notion that
probability likewise should be defined by the operations one uses
to estimate it. Seen this way, all the discussions of logical
and empirical notions of probability may be seen as being made
obsolete by the Einsteinian invention of the operational
definition, just as were discussions of absolute space and time
made obsolete by it.
Or: Consider having four different measurements of the
length of a model auto. Which number should we call the length?
It is standard practice to compute the mean. But the mean could
be seen as a weighted average of each observation by its
probability. That is
(.25 * 20 inches + .25 * 22 inches...) = mean model length
instead of
(20 + 22 + ...) / 4 = mean model length
This again makes clear that the decimal weights we call
"probabilities" have no extraordinary properties when discussing
frequency series; they are just weights we put on some other
values.
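A few lines of arithmetic make the equivalence concrete (the four measurement values are hypothetical):

```python
measurements = [20, 22, 21, 23]    # four length readings, in inches

# The ordinary mean.
plain_mean = sum(measurements) / len(measurements)

# The same number as a probability-weighted average, each
# observation carrying weight .25.
weights = [0.25, 0.25, 0.25, 0.25]
weighted_mean = sum(w * x for w, x in zip(weights, measurements))

print(plain_mean, weighted_mean)   # both 21.5
```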
The key difference between a probability and other related
concepts such as length is that length can refer to the past, the
present, or the future. But a probability refers only to the
future (or to our future knowledge), and therefore cannot be
measured in ways analogous to our measurement of length and
weight and time, at least in those cases that are not aggregate
measurements such as the distribution of heights, or error
distributions such as the location of a star.
Savage's "Bayesian" view of probability as a magnitude that
emerges implicitly from an assessment of the expected value (or
the expected utility, in decisions with very large consequences)
of a routine choice also fits in here, as I understand it. For
example, one may consider a wager on a football game between
Bolivia and Argentina at odds of 3-1; those odds then express
implicitly an estimate of the probability of each of the two
teams winning. Here, the terms of the wager - 3-to-1 odds - are
no different from the terms of an even-money wager with New York
being a ten-point favorite over Washington, where the point
spread rather than a difference in odds expresses the assessed
difference in skill. The point spread may seem to be a less
probabilistic notion than odds, but the two are of quite the same
nature.
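The conversion between quoted odds and the probability they imply is simple arithmetic; this sketch (the function name is mine) uses the 3-to-1 odds of the example:

```python
def implied_probability(odds_against):
    """Probability implied by quoted odds of 'odds_against to 1'
    against an event: 3-to-1 against corresponds to 1/4."""
    return 1 / (odds_against + 1)

p_bolivia = implied_probability(3)   # the team quoted at 3-1
p_argentina = 1 - p_bolivia          # the complementary probability
print(p_bolivia, p_argentina)        # 0.25 0.75
```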
In the course of preparing the point spread or the odds,
however, the bet-maker almost surely has thought explicitly in
terms of the probabilities of the teams winning the game. That
is, a forecast underlies a statement of the odds, so we seem to
be about back to the same place we were before. A probability
is a statement about the future, which may be considered the same
as a judgment about the future. People find many different
methods useful for forming such judgments; the methods differ
greatly across circumstances and are not easy to classify. A
weather forecaster forms a probability
of rain tomorrow on the basis of a wide variety of information,
various mathematical models, records about local weather
conditions in the past, records of his/her own performance, and
much more. Frequency data and personal probabilities are
inextricably mixed in here, as they are in sports forecasting and
odds making, and in such an engineering forecast as that of the
failure of the space shuttle Challenger's O-rings (see page 000).
To my mind, all this makes a complete hash (in all senses of the
word) of the dispute between frequentists and personalists and
all other schools of thought about the "nature" of probability.
It should be noted that the view outlined above has
absolutely no negative implications for the formal mathematical
theory of probability.
In a book of puzzles about probability (Mosteller,
1965/1987, #42), this problem appears: "If a stick is broken in
two at random, what is the average length of the smaller piece?"
This particular puzzle does not even mention probability
explicitly, and no one would feel the need to write a scholarly
treatise on the meaning of the word "length" here, any more than
one would do so if the question were about an astronomer's
average observation of the angle of a star at a given time or
place, or the average height of boards cut by a carpenter, or the
average size of a basketball team. Nor would one write a
treatise about the "meaning" of "time" if a similar puzzle
involved the average time between two bird calls. Yet a
rephrasing of the problem reveals its tie to the concept of
probability, to wit: What is the probability that the smaller
piece will be (say) more than half the length of the larger
piece? Or, what is the probability distribution of the sizes of
the shorter piece?
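Mosteller's puzzle also yields to a direct simulation, very much in the spirit of treating probability operationally. The sketch below (assuming a unit-length stick and a uniformly random break point) estimates both the average length of the shorter piece and the probability asked for in the rephrased question:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
trials = 100_000
total_shorter = 0.0
count = 0  # times the shorter piece exceeds half the longer piece

for _ in range(trials):
    break_at = random.random()        # break a unit stick at a uniform point
    shorter = min(break_at, 1 - break_at)
    longer = 1 - shorter
    total_shorter += shorter
    if shorter > longer / 2:
        count += 1

avg_shorter = total_shorter / trials  # theory: 1/4 of the stick
p_exceeds = count / trials            # theory: 1/3
print(avg_shorter, p_exceeds)
```

The simulated averages settle near the theoretical values of 1/4 and 1/3, with no philosophical apparatus required.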
The duality of the concepts of probability and physical
entities also emerges in Whitworth's discussion (1897/1965) of
fair betting odds:
...What sum ought you fairly to give or take now, while
the event is undetermined, in exchange for the
assurance that you shall receive a stated sum (say
$1,000) if the favourable event occur? The chance of
receiving $1,000 is worth something. It is not as good
as the certainty of receiving $1,000, and therefore it
is worth less than $1,000. But the prospect or
expectation or chance, however slight, is a commodity
which may be bought and sold. It must have its price
somewhere between zero and $1,000. (p. xix.)
...And the ratio of the expectation to the full sum to
be received is what is called the chance of the
favourable event. For instance, if we say that the
chance is 1/5, it is equivalent to saying that $200 is
the fair price of the contingent $1,000. (p. xx.)...
The fair price can sometimes be calculated
mathematically from a priori considerations: sometimes
it can be deduced from statistics, that is, from the
recorded results of observation and experiment.
Sometimes it can only be estimated generally, the
estimate being founded on a limited knowledge or
experience. If your expectation depends on the drawing
of a ticket in a raffle, the fair price can be
calculated from abstract considerations: if it depend
upon your outliving another person, the fair price can
be inferred from recorded statistics: if it depend upon
a benefactor not revoking his will, the fair price
depends upon the character of your benefactor, his
habit of changing his mind, and other circumstances
upon the knowledge of which you base your estimate.
But if in any of these cases you determine that $300 is
the sum which you ought fairly to accept for your
prospect, this is equivalent to saying that your
chance, whether calculated or estimated, is 3/10... (p.
xx.)
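Whitworth's ratio between the fair price and the full sum can be written out directly; the figures are his own (a contingent $1,000, a chance of 1/5, a $300 fair price):

```python
def fair_price(chance, payoff):
    """Whitworth's rule: the fair price of a contingent payoff
    is the chance of the favourable event times the payoff."""
    return chance * payoff

def implied_chance(price, payoff):
    """The inverse: the chance implied by an agreed fair price."""
    return price / payoff

print(fair_price(1 / 5, 1000))    # 200.0 -- the $200 of the text
print(implied_chance(300, 1000))  # 0.3   -- the 3/10 of the text
```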
It is indubitable that along with frequency data, a wide
variety of other information will affect the odds at which a
reasonable person will bet. If the two concepts of probability
stand on a similar footing here, why should they not be on a
similar footing in all discussions of probability? Why should
both kinds of information not be employed in an operational
definition of probability? I can think of no reason that they
should not be so treated.
Scholars write about the "discovery" of the concept of
probability in one century or another. But is it not likely that
even in pre-history, when a fisherperson was asked how long the
big fish was, s/he sometimes extended her/his arms and said,
"About this long, but I'm not exactly sure", and when a scout was
asked how many of the enemy there were, s/he answered, "I don't
know for sure...probably about fifty"? The uncertainty implicit
in these statements is the functional equivalent of probability
statements. There simply is no need to make such heavy work of
the probability concept as the philosophers and mathematicians
and historians have done.
CONCLUSION
In sum, one should not think of what a probability "is" but
rather how best to estimate it. In practice, neither in actual
decision-making situations nor in scientific work - nor in
classes - do people experience difficulties estimating
probabilities because of philosophical confusions. Only
philosophers and mathematicians worry - and even they really do
not need to worry - about the "meaning" of probability.
FOOTNOTES
[1]: I have long wondered why the concept of operational
definition has not had a larger place in discussions by
professional philosophers. I now speculate that as a powerful
tool for resolving apparently insoluble controversies that have
raged for decades and even centuries, it threatens the
livelihoods of some philosophers whose intellectual capital and
reason for being would disappear with the disappearance of those
controversies. See Deming (1989, Chapter 15) for spirited
advocacy of operational definitions in statistics, and Simon
(1969; 3rd ed., with Burstein, 1985) for discussion of the
concept.
ENDNOTES