# Thorndyke, etc.; probability, etc.

[From Bill Powers (951101.1340 MST)]

Hans Blom (951101)--

Your reply makes the distinctions very clear. And also that you pit
one theory against another.

I intended it to be taken that way: one theory against another. If two
theories seem to explain the same behavior, then perhaps we can think of
more detailed experiments that will allow us to select one theory over
the other.

To the observer, the cat's actions are the "doing," but to the cat,
the consequences of those actions are what is being done.

How do you know? Unfortunately you are locked into the position of
an observer and cannot be the cat. You're only guessing (i.e.
theorizing). Since YOU BELIEVE THAT the consequences of actions
are what is being done, you attribute the same point of view to the
cat. Are you doing more than Thorndyke, i.e. formulating a
hypothesis which needs further corroboration?

I'm not saying I believe it; just that this is what PCT would lead me to
expect. Of course we would have to test that explanation before
submitting it to a journal; test the variables I am guessing that the
cat controls, and see if it is actually controlling them.

Do you really think so? In my view, the cat was trying to solve a
problem; to gain control, one might say, over its situation. Then
suddenly, inexplicably, the solution was there and, moreover, was
immediately recognized as such.

That's another proposal, which would also have to be tested to see if it
works. I know how I would go about seeing if a cat was controlling some
variable, but I don't know how anyone could show that the cat
immediately recognized the solution as such. It seems to me that that
would be a lot harder to determine.

A far more important event, I think, than "the door simply fell
open by itself". So important, maybe, that some internal mechanisms
made something of a "snapshot" of the situation, not only of the
door falling open but of everything else that happened at/around
that moment too. Now I am theorizing ;-).

A pretty good idea, I think. I can't see offhand how to check it out,
but perhaps you can.

An alternative explanation: Superimpose a number of those
"snapshots" that were taken when solutions occurred, kind of like
averaging, and a pattern might present itself: a more or less
reliable and significant connection between the door opening and
some action. No certainty yet, but at least a possible way to a
solution that ought to be given a higher priority than other ways.
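Hans's superposition idea can be sketched in a few lines of Python. Everything here -- the action list and the co-occurrence data -- is invented for illustration; the point is only that averaging binary "snapshots" makes the reliable correlate of success stand out:

```python
# Hypothetical sketch of the "superimposed snapshots" idea: each snapshot
# records which candidate actions occurred around the moment the door
# opened. Averaging the snapshots exposes the most reliable correlate.

actions = ["paw at latch", "rub against post", "bite bars", "meow"]

# Each snapshot: 1 if the action happened near the moment of success,
# else 0. (Invented data -- "rub against post" co-occurs every time.)
snapshots = [
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]

n = len(snapshots)
frequencies = [sum(col) / n for col in zip(*snapshots)]
best = max(range(len(actions)), key=lambda i: frequencies[i])
print(actions[best], frequencies[best])  # the most reliable correlate
```

This gives the action a higher priority without yet giving any certainty, just as the paragraph above suggests.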

More good ideas, but they're getting harder and harder to check. The
snapshots, the process of superimposing them, the assignment of
priorities -- all pretty tough to verify.

Maybe it is. Goals, except the topmost ones of the hierarchy, are
means, methods, tools, that can be employed to reach the topmost
goals. Try out the methods that appear most promising first, and
relative frequency becomes an important notion.

Fine, if we're talking about an organism that can think in such
abstractions. Probably the first step to take in testing this theory
would be to set up some preliminary experiments in which perception of
relative frequency as such becomes an explicit requirement for control.
If it turns out that cats can't control variables of that kind, that
would save us the trouble of doing the rest of the investigation.

In probabilistic model-building, degrees of certainty (or relative
frequencies) are an important notion. They don't only exist objectively
-- where an observer can watch them -- but also internally in
the model itself. So your remark

>All the probabilities and orderings into a hierarchy are part of
>Thorndyke's behavior, not the cat's behavior.

is not correct from the point of view of model building.

You're right about that. If we're modeling a system that can actually
perceive probabilities, then this perception becomes an important part
of the model.

If the cat did not try all directions but in each case happened to
be in a position that worked, it would not have discovered the
"true" solution, but only one that "worked".

I think that's all that a cat can be concerned with -- a solution that
works. If it stops working, the cat just has to learn some more. It
never has to know WHY the solution works, or whether it's the ideal
optimal solution.

But in solving this "real problem" whose solution in terms of
specific actions is as yet unknown, (un)certainties or relative
frequencies might play a major role, if only in ordering the
possible means of solving the problem in terms of what seems most
promising. Note that this is different from the standard PCT
approach, in which _one_ non-discrete action (u) must be
quantified. Now the problem is _which_ of a (discrete) number of
actions (mechanisms) is to be employed, and how to perform this
selection.

Uncertainties might play a major role if the organism in question is
capable of considering them. A cat might not solve the problem in the
same way as a human being, who can make lists, arrange possibilities
by priority, and think about the priorities by manipulating symbols.
In fact I would be quite surprised if it turned out that cats did
such things, but if you can think of a way to show that they do, I'll be
pleased to be surprised. I always did love Doctor Dolittle (child's
story: the man who could talk to animals).

All that matters is eliminating those behaviors that are not having
the desired effect on another variable.

This assumes that one can reach _certainty_ that a certain type of
behavior does not contribute towards the solution.

If I waited for certainty that a certain type of behavior doesn't
contribute toward a solution, I'd never do anything. Is this how you
work? It's not how I work. When I approach a door, I don't worry about
whether reaching out and pulling on the handle will, with absolute
certainty, cause the door to open. I don't even think about that; I just
reach out and pull on it. If that doesn't work, I try pushing. I really
couldn't tell you whether more doors have opened to a push than to a
pull, so I can't say whether I'm trying the most likely solution first. But
does it matter if I'm not?

The problem is simpler, in my opinion. You have a problem that
needs to be solved, so you try all approaches to a solution that
you can "think" of and that you have in your repertoire, either
innate or learned. That seems simple. What is not so simple is the
order in which to try out all those possible solutions. Here you
need some (assumed) knowledge about which approaches are most
promising. This is where mechanisms that can handle uncertainty are
needed.

Why do they have to be tried out in some particular order? The time and
effort you'd spend in making sure you tried the most likely solution
first would be wasted if there aren't many things you can try and if
there's no important penalty for guessing wrong. In fact, in Thorndyke's
experiment, if the cat picked the most frequent of its behaviors to
try first, it would assure itself of taking the longest possible time to
open the box, given what Bruce Abbott said (that brushing against the
post is near the bottom of the list). The cat would find the best
solution quickest by using a random selection of means.
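The arithmetic behind this claim is easy to check with a small simulation. The repertoire size, the solution's rank, and the trial count below are all invented for illustration; the comparison is between a frequency-ordered search (worst case when the effective behavior ranks last) and a random ordering:

```python
import random

random.seed(1)

n = 10        # size of the behavioral repertoire (illustrative)
solution = 9  # index of the effective behavior, placed at the bottom
              # of the frequency ordering (per Bruce Abbott's remark)

# Frequency-ordered search: try behaviors from most to least frequent,
# so a bottom-ranked solution is found only on the last trial.
ordered_trials = solution + 1

def random_trials():
    order = random.sample(range(n), n)  # a random order, no repeats
    return order.index(solution) + 1

avg_random = sum(random_trials() for _ in range(10000)) / 10000
print(ordered_trials, round(avg_random, 1))  # 10 vs ~5.5 on average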

And don't forget one other thing, which is extremely important.
Initially the cat has no idea that the solution from case to case
will be the same: rubbing the pole. This is something that has to
be discovered as well.

Well, that's still another proposal that has to be checked out. Does the
cat _ever_ learn that the solution is the same from case to case? I
think this generalization is probably beyond the cat; the idea that a
solution might change from case to case is a rather advanced notion, one
which few animals, even humans, would be likely to consider. I don't
really think it's necessary to form any opinion about the reliability of
the world; if it's not reliable, you'll never find a solution worth
remembering anyway. If you do find a solution, you might as well use it
again the next time, even if you have the intellectual capacity to form
the notion of "reliability." But if you think cats have that ability,
I'll be interested in how you demonstrate it.


-------------------------------------------
Hans Blom (951101b) --

This makes probability into a causal agency. ... Probability can't
cause anything.

But SUBJECTIVE probability can.

Of course, I should have caught that one. When a gambler sits at a table
holding four cards to a straight, he has a subjective sense of the
probability of filling it, and considering the size of the pot and the
impression of his skill that he is trying to establish in the other
players, he will definitely adjust his behavior in a way that takes
subjective probabilities into account. If his investment in the pot is
small, he might draw to the straight to make the other players think he
can't calculate the odds, useful for a later bluff.

In a model-based ("predictive") controller, an "idea" exists of
what the result of a certain action will be, modelled as e.g.
x[k+1] := f (x[k], u[k]). Generate an action u, and the predicted
result will e.g. be b * u. The term b might not be accurately known
but also have an unknown component (variance), so that when the
action u is executed, any result between bmax * u and bmin * u
might be perceived. In a probabilistic model-based controller,
estimates of both b and its uncertainty exist. And thus
probabilistic numbers can be used when "computing" an action.
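A minimal numeric sketch of the controller Hans describes, with a scalar plant x[k+1] = b*u[k] and a Kalman-style running estimate of b and its variance. All names and numbers here are mine, not Hans's:

```python
# Scalar probabilistic model-based controller: the plant is x = b*u with
# b unknown. The controller keeps an estimate b_hat and a variance p,
# updating both from each observed result (a scalar Kalman-type update).

import random
random.seed(2)

b_true = 2.0        # unknown to the controller
b_hat, p = 1.0, 1.0 # initial estimate of b and its uncertainty
r_noise = 0.01      # assumed measurement-noise variance
reference = 5.0     # the result the controller wants to perceive

for k in range(50):
    u = reference / b_hat                  # act on the current model
    x = b_true * u + random.gauss(0, 0.1)  # the world answers, noisily
    # Treat x = b*u + noise as a measurement of b and update estimate.
    gain = p * u / (u * u * p + r_noise)
    b_hat += gain * (x - b_hat * u)
    p *= (1 - gain * u)

print(round(b_hat, 2), round(p, 6))  # b_hat approaches 2.0, p shrinks
```

Both the estimate of b and its uncertainty exist inside the controller, so "probabilistic numbers" enter directly into the computation of u.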

Right, actual calculations of probability do come into this kind of
control model. However, we always have to ask ourselves whether the
behaving system we are trying to model has the ability to do the
calculations that the model does. It may be that a model capable of
doing these calculations can demonstrate a certain level of performance,
but that an organism can demonstrate the same level of performance (or
better, or not so good) _without_ literally calculating probabilities.
Remember that calculations of the kind you use in your modeling are done
by mathematics, manipulating symbols according to a specific system of
rules. A great many interesting things can be done that way, but not by
a system that can neither manipulate symbols nor follow the rules of
mathematics. Human beings like you and me can manipulate symbols with
pencil and paper, or with a calculator, or program a computer whose
operations can be interpreted in terms of similar manipulations, but
that takes a rather high level of organization to accomplish. And it's
quite slow, because discrete calculations are not well suited to
handling continuous relationships, particularly multiple relationships
operating in parallel. However, it can't be argued that human
beings do NOT use such calculations in their behavior; they obviously
do. Not in all of their behavior, but certainly in some of it.

These subjective probabilities and likelihoods do not exist in the
outside world; they are properties ("knowledge") of the individual.
The "knowledge" of an individual might be completely incorrect or
uncertain, yet it will determine the individual's behavior.

I agree with that. The only caveat is that we must make sure that the
kinds of organisms we are talking about, and the levels of organization
we are talking about, contain the ability to compute such probabilities
and likelihoods. And of course we have to think of ways to show that
they actually do such computations.
-----------------------------------------------------------------------
Bruce Nevin (951101.1122 EST) --

In a constructed PCT model, the terms "reference level", "control",
and so on have a straightforward literal correspondence to well-
understood perceptions of computer science (or whatever the domain
of model making may be). We are explicitly invited to draw an
analogy between the behavior of the model and the behavior of the
modelled.

It is this -- something like the notion of "operational definition"
-- that distinguishes the terminology of PCT from other usage
sharply rather than by continuous gradation as Hans has suggested
by analogy to the heap puzzle.

Beautiful, Bruce. This is the difference between an analogy and an
analog. In the analog, the relations among the variables are asserted to
be _the same_, not just _similar_.
-----------------------------------------------------------------------
Best to all,

Bill P.

[From francisco arocha, 95/11/02-10.15]

[From Bill Powers (951101.1340 MST)]

Hans Blom (951101b) --

BP >>This makes probability into a causal agency. ... Probability can't
>>cause anything.

HB But SUBJECTIVE probability can.

BP Of course, I should have caught that one. When a gambler sits at a table
holding four cards to a straight, he has a subjective sense of the
probability of filling it, and considering the size of the pot and the
impression of his skill that he is trying to establish in the other
players, he will definitely adjust his behavior in a way that takes
subjective probabilities into account. If his investment in the pot is
small, he might draw to the straight to make the other players think he
can't calculate the odds, useful for a later bluff.

But these are NOT subjective probabilities. The subjectivist interpretation
of probabilities states that probabilities reflect degrees of uncertainty of
a hypothesis or a proposition, not of an external event (that would be an
objectivist interpretation). The subjective sense the gambler has is just a
guess (usually qualitative) about the chance of an event, not about his own
sense of uncertainty. To call guesses probabilities is to confuse guesses
or guesstimates with probabilities, which amounts to confusing a
psychological category with its mathematical formulation.
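A one-step Bayes update illustrates the subjectivist reading described here: the probability attaches to a hypothesis and is revised as evidence arrives. The numbers are invented for illustration:

```python
# Subjectivist probability as degree of belief in a hypothesis H,
# revised by Bayes' rule when a piece of evidence E is observed.

prior = 0.5            # initial degree of belief in H
p_e_given_h = 0.9      # how likely E is if H is true
p_e_given_not_h = 0.2  # how likely E is if H is false

evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / evidence
print(round(posterior, 3))  # prints 0.818: belief in H rises from 0.5
```

The number measures uncertainty about the proposition H itself, not a frequency of any external event.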

HB In a model-based ("predictive") controller, an "idea" exists of
what the result of a certain action will be, modelled as e.g.
x[k+1] := f (x[k], u[k]). Generate an action u, and the predicted
result will e.g. be b * u. The term b might not be accurately known
but also have an unknown component (variance), so that when the
action u is executed, any result between bmax * u and bmin * u
might be perceived. In a probabilistic model-based controller,
estimates of both b and its uncertainty exist. And thus
probabilistic numbers can be used when "computing" an action.

BP Right, actual calculations of probability do come into this kind of
control model. However, we always have to ask ourselves whether the
behaving system we are trying to model has the ability to do the
calculations that the model does. It may be that a model capable of
doing these calculations can demonstrate a certain level of performance,
but that an organism can demonstrate the same level of performance (or
better, or not so good) _without_ literally calculating probabilities.

Which has been shown to be the case in many experimental studies of
decision making. The results from these studies are clear: Most people,
even those trained in probability theory, do not follow the probability
calculus in guessing about everyday life events. Why should we expect them
to use such a sophisticated piece of mathematics?

BP Remember that calculations of the kind you use in your modeling are done
by mathematics, manipulating symbols according to a specific system of
rules. A great many interesting things can be done that way, but not by
a system that can neither manipulate symbols nor follow the rules of
mathematics. Human beings like you and me can manipulate symbols with
pencil and paper, or with a calculator, or program a computer whose
operations can be interpreted in terms of similar manipulations, but
that takes a rather high level of organization to accomplish. And it's
quite slow, because discrete calculations are not well suited to
handling continuous relationships, particular multiple relationships
operating in parallel. However, it can't be argued that human
beings do NOT use such calculations in their behavior; they obviously
do. Not in all of their behavior, but certainly in some of it.

Yes.

HB These subjective probabilities and likelihoods do not exist in the
outside world; they are properties ("knowledge") of the individual.
The "knowledge" of an individual might be completely incorrect or
uncertain, yet it will determine the individual's behavior.

BP I agree with that. The only caveat is that we must make sure that the
kinds of organisms we are talking about, and the levels of organization
we are talking about, contain the ability to compute such probabilities
and likelihoods. And of course we have to think of ways to show that
they actually do such computations.

The claim made by some Bayesians is that the probability calculus is a
theory of the inductive reasoning process "used" by people. That is, that
it describes what goes on in people's heads. I think it would be better to
say that guesses are properties of the individual; probabilities simply do
not exist, just as Mickey Mouse does not exist. You could treat someone's
guesses with the probability calculus, but in this case it would be an
example of objective probabilities (your probability that someone would
guess something).

Hasta pronto,

francisco

j. francisco arocha Tel: (514) 398-4985
1110 Pine Avenue W. Fax (514) 398-7246
Cognitive Studies in Medicine
Centre for Medical Education E-mail: francisco@medcor.mcgill.ca
McGill University (alt) email: cybn@musica.mcgill.ca
Montreal, QC H3A 1A3