[Martin Taylor 951102 13:30]

francisco arocha, 95/11/02-10.15

+ Bill Powers (951102.0920 MDT)

The subjectivist interpretation of probabilities states that

probabilities reflect degrees of uncertainty of a hypothesis or a

proposition, not of an external event (that would be an objectivist

interpretation). ...

... To call guesses probabilities is

confusing guesses or guesstimates with probabilities, which amounts

to confusing a psychological category with its mathematical

formulation.

+I'm curious; just how do psychologists think of this state we call

+uncertainty? ...What does being certain feel like? I should think you'd have

+to know that before you could say you're UN certain. Is uncertainty like

+a conflict?

+...

+Could somebody give me a restart on this meaningless noise?

Don't know whether this will be a restart or more meaningless noise, but

I'll have a go.

First off, let's make our usual assumption that there is a real world out

there, but that we can know nothing of it except through our perceptual

functions working (ultimately) on sensory data--that word "ultimately" is

intended to include imagination, memory, etc.

The consequence of that is to make it clear that the "probability" of

something, or its "uncertainty" is a perception. So it is legitimate to

ask, as Bill does, "what does being certain feel like," just as one may

ask "what does red look like." Red looks like "that folder," "those leaves,"

"that sunset," "that light"... You can't specify it in words, but you can

get across the idea by example, by trying to find a bunch of things that

have only their "redness" in common.

With "probability" as a perception, it may be more difficult to find examples,

since the probability is of a perception that is not a function _only_ of

immediately present sensory data. Probability is an attribute of some

fact or event not now present. I can get across the idea by trying to

choose facts or events about which I assume you have much the same notion

of probability as I do. If you don't, I'll fail, just as I will fail to

show a colour-blind person "what red looks like." I'll know that the

colour blind person didn't get it when he points out as red something I

would call green. So, if what I get across to you is not my idea of

uncertainty or probability, then at some point you will make an assessment

that is at variance with what I would say.

This happened in an earlier interchange on "uncertainty." Bill P. said

that the value of the consequence of a decision affected his perception

of the uncertainty of the decision. I've forgotten the example, but I'll

reconstruct an analogue. He would be more uncertain as to whether the

flower in a vase was an aster or a carnation (assuming unfamiliarity with

flower names) if its being an aster meant he had to do something very

important and its being a carnation meant he had to do the opposite, than

if the difference were just a matter of coming to know which was which.

In my use of the term, the uncertainty would be the same in both cases,

and the importance of the decision would be a separate issue.

So there's a problem of labelling. Bill P., for all I know, may not even

_have_ a perception that corresponds to the feeling I label with the word

"uncertainty". There's no "uncertainty" in the world, unless (possibly)

we are dealing with quantum effects, and even then, it's an interesting

issue as to whether the uncertainty is in the world or in our perception

of the world.

Second point. I assume we all have some kind of feeling that goes along

with the difference between "The sun will rise in the East and not the

West tomorrow," and "The weather will be fine for our picnic on Tuesday

next." We are less uncertain about the first statement than the second.

We believe strongly that the sun will rise in the East, but we may believe

equally that the weather will be fine and that the weather will be rainy

next Tuesday. I think it more likely that there will be 4 inches of snow

in Toronto on Jan 16 than on July 16 1996, but I'm almost certain that there

will be more than 4 inches total between Jan 1 and Dec 31 1996.

When the time comes, the sun will or will not rise in the East, and the

weather will or will not be fine enough to have the picnic. The uncertainty

will have become very low in both cases. But it is an attribute of a

single event, not of a set of many events. When this event has many

reasonable possibilities of how it will (or did) turn out, one is more

uncertain than if there is only one reasonable possibility. And that

word "reasonable" is also a label for a perception. There's no "reason"

out there in the world.

Starting (I think) with Buffon in the 18th century, people began to try

to put numbers on this feeling of uncertainty. Gamblers had been using it

for a long time, but without mathematics. For mathematics, you need some

kind of set of axioms, and those gave rise to (or were derived from) a

notion of "probability," which is quite different from "uncertainty" but

is still a label for a perception. One may be "uncertain" about what the

weather will be for the picnic, but that uncertainty depends on there being

several reasonable outcomes. One can associate a "probability" with each of

the outcomes. "Probabilities" are associated with numbers that conform

to certain axioms, no matter what the underlying perception may have been

before the concept of probability became mathematized. It's much the same

as the precision in the term "control" as used in PCT as compared to the

fuzziness of its everyday use.

Probabilities have certain requirements.

(1) If there is a set of possible outcomes for an event, their

probabilities must sum to 1.0 exactly. This immediately removes the

label "probability" from the perceptions that are an attribute of the

imagined event, and applies it to some symbolic perceptions derived from

those feelings.

(2) If one outcome of an event is perceived as being more likely than

another, then the "probability" of that outcome is the greater. "Likely"

is another label that has been mathematized, but here I use it in reference

to the perception that one associates with the imagined outcome. If you

perceive it as more likely next Tuesday will be sunny than that it will rain,

the "probability" of sun is greater than the "probability" of rain.

(3) If two events A and B have no causal connection (in the physicist's

sense), then the probability that A will have outcome i and that B will

have outcome j is exactly the product of the individual probabilities of

those outcomes.

(4) Since the probabilities are real numbers, the probabilities of

the outcomes of two disparate events can be compared. The probability

that it will rain next Tuesday is greater than, equal to, or less than

the probability that there is life under the nitrogen atmosphere of Titan

(and whether it is greater than or less than depends on who you are).
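These requirements can be sketched in a few lines of code. Here is a minimal Python check of the first three, using invented numbers for the picnic-weather outcomes and a coin; none of the specific values come from the discussion above.

```python
import math

# Hypothetical forecast for next Tuesday's weather (made-up numbers).
weather = {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2}

# Requirement 1: the probabilities of all outcomes sum to 1.0 exactly.
assert math.isclose(sum(weather.values()), 1.0)

# Requirement 2: "more likely" maps onto the larger number.
assert weather["sunny"] > weather["rainy"]

# Requirement 3: for causally unconnected events, the joint
# probability of a pair of outcomes is the product of the individual
# probabilities -- e.g. "sunny AND the coin comes up heads".
coin = {"heads": 0.5, "tails": 0.5}
p_sunny_and_heads = weather["sunny"] * coin["heads"]
assert math.isclose(p_sunny_and_heads, 0.25)
```

Requirement 4 needs no code at all: since the values are real numbers, any two of them can be compared directly.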

With these and maybe one or two other axioms, the whole mathematics of

probability is developed. And there remains a parallelism between the

results of the maths and the subjective feelings one has about the events

symbolized in the maths, at least most of the time.

Probability is entirely subjective and symbolic and deals with single

events ONLY. But how the underlying "likeliness" perception is acquired

is another matter (I don't use "likelihood" because that is also mathematized).

One often derives it from a model of the world together with current and

remembered perceptions of the world. Is the probability that the sun will

rise in the East tomorrow high because it always has, or because you have a

model of the spinning world that leads you to believe that the spin won't

change before tomorrow morning? A bit of each, I suspect. Is it improbable

that a camel could pass through the eye of a needle because, of lots of camels

that you have seen trying it, only a few succeeded? I doubt it. You have

modelled camels, needles, and the requirement for passage, and found in

your imagination few camels that could succeed. (Though as I remember, "The

eye of the needle" was the name of some city gate or other, with no

reference to a sewing needle; some camels might have got through).

"Frequentist" probability depends on the assumption that an event can be

repeated, sometimes with one outcome and sometimes with another. But no

event can ever be repeated. All we can do is to assert that two events

differ only in ways that do not matter to us. A coin has been tossed

a large number of times, and the more times it is tossed, the nearer the

proportion of heads comes to 0.5. It doesn't matter that the mail arrived

after the 651st toss, that Mercury completed three orbits around the sun

during the experiment, that Sam went to see the Pyramids during the 3,000th to

5,716th tosses... But we, personally, choose to see certain events as "the

same" for purposes of measuring the proportions, and when we imagine a

new event to be "the same" again, we can take that historic proportion and

use it to estimate the _probability_ of the outcomes of the new event.

"Frequentist" probability (or as it is sometimes, absurdly, called,

"objective" probability) substitutes historic proportion for the probability

of the new event. And if you have no model of what is going on, historical

proportion may indeed be the best way to get a probability number. But

historical proportion should never be confused with "probability."
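The convergence of historic proportion toward 0.5 is easy to watch in simulation. This is a throwaway Python sketch; the toss counts and the random seed are arbitrary choices, not anything from the discussion above.

```python
import random

# Simulate repeated tosses of a fair coin and watch the historic
# proportion of heads drift toward 0.5 as the number of tosses grows.
random.seed(1)  # arbitrary seed, for repeatability

def heads_proportion(n_tosses):
    """Proportion of heads in n_tosses simulated fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 1_000, 100_000):
    print(n, heads_proportion(n))
```

The point of the sketch is only that the proportion settles down; nothing in it tells you the probability of the *next* toss, which is the substitution the frequentist makes.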

Enough "meaningless noise" on "probability." How about "uncertainty?"

Above, I asserted that "uncertainty" is a perception about an event. Whereas

"likeliness," or in symbolic terms "probability," is associated with a

possible outcome, "uncertainty" refers to the relationship among possible

outcomes. The more nearly the probabilities are the same, and the more

of them there are, the greater the uncertainty about the event. But to

use "probability" in this way is already to mathematize "uncertainty."

We come around in a circle. "Uncertainty" as a feeling of, shall we say,

indecision, is a very vague, non-numeric kind of thing (except, in PCT,

every perception has a numeric value, but we ignore that for the moment).

"Uncertainty" as a technical, mathematical, term, captures that feeling

in much the same way that "probability" captures the feeling of "likeliness."

In the technical sense, the "uncertainty" of an event has no relation to

the importance of the various outcomes. It depends only and exclusively

on their probabilities. Shannon (The Mathematical Theory of Communication,

Urbana, U of Illinois Press, 1949) specified three axioms, which, like those

of probability, seem to capture much of the flavour of our feeling of

"uncertainty," if we leave out the importance of a decision. Shannon's

axioms are (paraphrased):

(1) Very small changes in the probabilities of the various outcomes lead to

very small changes in the uncertainty of the event.

(2) If all the outcomes have equal probabilities, then the more outcomes

there are, the greater the uncertainty about the event. (Remember that

the probabilities must sum to 1.0).

(3) If two or more outcomes are combined into one sub-event and their

probabilities as outcomes of the sub-event are then considered, that

combination and re-division does not affect the resulting uncertainty

of the original event. For example, if a light may possibly be shown

as "white or coloured" and "if coloured, it will be red or blue," the

uncertainty about the light colour will be the same as if it had been

thought of as "white or red or blue" initially.

These axioms are sufficient to define a mathematization of "uncertainty"

based on the prior mathematization of "probability." If the N different

outcomes of an event are labelled (1,2,...j,...N), and the probability

of outcome j is pj, then the uncertainty about the event is

H = -K (sum-from-j=1-to-N (pj log pj)).

In that formula, K is an arbitrary constant. The logarithmic form is

forced by the three "common-sense" axioms. Shannon also developed the same

form when the event can be considered as a measurement, with a continuous

range of possibilities, rather than as a choice among N discrete possible

outcomes. In the continuous form, the sum is replaced by an integral,

and probability is replaced by "probability density."
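The discrete formula can be checked against the axioms in a few lines of Python. Here K = 1 and base-2 logarithms are used (a common convention, giving H in bits), and the white/red/blue probabilities are invented for illustration.

```python
import math

def uncertainty(probs, K=1.0):
    """Shannon's H = -K * sum(pj * log pj), in bits; zero terms skipped."""
    return -K * sum(p * math.log2(p) for p in probs if p > 0)

# Axiom 2: with equal probabilities, more outcomes -> more uncertainty.
assert uncertainty([1/2, 1/2]) < uncertainty([1/3, 1/3, 1/3])

# Axiom 3 (grouping): suppose p(white)=0.5, p(red)=0.25, p(blue)=0.25.
# Thinking of it as "white or coloured", then "red or blue if coloured",
# gives the same H as "white or red or blue" considered directly:
# H(direct) = H(white vs coloured) + p(coloured) * H(red vs blue | coloured).
direct = uncertainty([0.5, 0.25, 0.25])
grouped = uncertainty([0.5, 0.5]) + 0.5 * uncertainty([0.5, 0.5])
assert math.isclose(direct, grouped)
```

With those numbers both routes give 1.5 bits, which is the grouping axiom doing its work.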

What is "probability density?" Simply put, if you define for yourself

an outcome as being "the measurement will be between X and X+deltaX",

then the probability density at X will be the probability of that outcome

divided by deltaX, as deltaX is reduced ever closer toward zero.
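That limit can be illustrated numerically. Suppose (an arbitrary choice for illustration) the measurement is uniform on the interval [0, 2], so the probability of landing in [X, X+deltaX) is deltaX/2 and the density should tend to 0.5 everywhere inside the interval.

```python
def prob_in_interval(x, dx, lo=0.0, hi=2.0):
    """Probability that a uniform [lo, hi] measurement lands in [x, x+dx)."""
    width = min(x + dx, hi) - max(x, lo)
    return max(width, 0.0) / (hi - lo)

# Shrink deltaX and watch probability/deltaX settle on the density, 0.5.
X = 1.0
for dx in (0.5, 0.05, 0.005):
    print(dx, prob_in_interval(X, dx) / dx)
```

The ratio stays at 0.5 here because the uniform density is flat; for a non-uniform density the ratio would only approach the density as deltaX shrinks.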

To return to Bill's original plaint about what "uncertainty" feels like:

Nothing mathematical really expresses what something "feels like." What

does "2" feel like, or "plus?" Both are abstractions from a whole mess of

ordinary perceptions of ordinary events and relationships. So it is with

uncertainty. Sometimes we "just know" what's going to happen (though we

may well be wrong when the time comes), and sometimes we "haven't a clue

how it will turn out." These feelings are abstracted into a mathematized

form as "probability" and "uncertainty." And on those symbolic

representations we can operate, with useful (or pointless) results.

As in any other technical area, terms may be used with precision in ways

that capture only a part of the range of everyday usage. Sometimes we

trip up when we use the everyday sense and are understood to be using the

technical sense, or vice-versa. But as with "Perception" in PCT, there

really isn't a satisfactory alternative to using the everyday terms with

which the technical concepts most closely relate.


----------------

I hope there isn't too much gibberish there. I had no intention of making

it so long when I started, or of taking so much time on this issue. But I

think it does matter, so I won't apologize.

Martin