Skinner & Control; More cooking; variability

[From Bill Powers (920903.0800)]

Chuck Tucker (920902.1223) --

It's hard to know what Skinner was thinking of when he said variables
in the environment "control" behavior. He certainly couldn't have
meant the same thing as when he said that an experimenter "controls"
the behavior of a pigeon toward a preselected goal-state. He just
wouldn't have spoken of the environment controlling in that way. I
don't think we can pin him down to a specific meaning, because I don't
think he had one.

... the term 'control' cannot be used to describe any activity that
goes on between "self-regulating systems".

That puts my view succinctly.


----------------------------------------------------------------------
Penni Sibun (920902.1200) --

>... in an ai/mentalist approach, perception is the input to
>representation. representation is datastructures that hang around in
>your head (whereas perception doesn't hang around in yr head, it's
>basically a process). chapman and i et al. argue against that view
>of perc./repr.

This is a sticky problem with words. Perception can mean a whole
closed-loop process which includes the actions on the world required
to bring a perception into some state. It can mean the activities of
the perceptual part of the brain -- that is, a process of sensing and
constructing higher sensory variables out of lower ones. Or it can
mean the outcome of all these processes -- that is, something that is
apprehended as a perception, or as Martin says, a percept. To compound
that confusion, traditional uses of the term have separated sensation
(registering of sensory input energies) from perception
(interpretation, inference, and insight). I think that all these
usages reflect confusion about the whole subject.

I decided to cut through the confusion (at least in my own head) by
dropping all the overlapping and ambiguous usages and using perception
to mean simply the presence of a neural signal in any afferent pathway
at any level of complexity. I could do this because in the
epistemology of control, what is controlled in the final analysis is
the signal inside the brain that corresponds to some aspect of the
external world. In fact, the "aspect of the world" becomes
hypothetical, while the signals are all that are available to
awareness. This puts the perceived world inside the head from the very
beginning, so there is no question of "coding" or "representation."
The world known to the brain, including all its dynamic phenomena, is
a world of signals. That's the world that we see as being outside of
us, because I assume that the interface between awareness and whatever
else there is is always mediated by the brain. I haven't seen any
other way of making sense of subjective experience in the same
framework with observations of the behavior of others and the physical
construction of a person -- and believe me, I have tried for a long
time.
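
To show what I mean by controlling a signal, here is a minimal sketch
in Python (my illustration only; the gains and numbers are arbitrary).
The only quantity the system ever deals with is the perceptual signal
p; the disturbance never appears inside the loop, yet p is held near
the reference:

    # a minimal sketch, not the canonical model: one control loop whose
    # only contact with the world is its perceptual signal p
    def run_loop(reference=10.0, gain=50.0, slowing=0.01, steps=200):
        output = 0.0
        p = 0.0
        for t in range(steps):
            disturbance = 5.0 if t > 100 else 0.0  # unseen push on the world
            qi = output + disturbance              # input quantity in the environment
            p = qi                                 # perceptual signal: all the system has
            error = reference - p                  # reference minus perception
            output += slowing * (gain * error - output)  # leaky-integrator output
        return p

    print(run_loop())   # about 9.9: p is held near 10.0 despite the disturbance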

From your words, I get the impression that the AI concept of
"representation" is something still different. We have the world,
perception of the world, and representations of perception of the
world. This could be a halting step toward the sort of hierarchy of
perception that makes up HPCT. But it doesn't seem to be cast in terms
of experienced aspects of the world at the highest level. It seems
more like a conversion of higher aspects of experience into a
mathematical system that is almost deliberately nonintuitive.
---------------------------------------------------------------------

>>This is because each level of perception does not start from
>>scratch (as in Brooks' subsumption architecture) but derives its own
>>type ...

>i'm not sure this is right about brooks....

I think it is. "Higher" systems in Brooks' architecture get
information from the same sensors that feed lower systems; when a
higher system wants to act, it has to inhibit the lower system to keep
it from interfering. Look at his overall diagram. He shows an input
line that branches out to all the levels in the subsumption stack;
input information gets to the higher levels without going through the
lower systems.

In the HPCT model, the ONLY systems that can act directly on the
environment are those in the lowest levels of control. Higher systems
have to act by telling lower systems what states of the controlled
variables at that level to maintain; they can't bypass the lower
systems and operate the muscles directly. Also, many levels involve
perception and control of signals that are already being perceived and
controlled, individually, by lower systems. The higher-level
perceptions are functions of already-existing lower-level perceptions
that have been abstracted from sensory inputs by lower perceptual
functions. So in general, the inputs to higher systems have already
been processed by lower systems, many levels of them. This is very
different from Brooks' arrangement.
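
To make the contrast concrete, here is a toy two-level sketch in
Python. It is my own construction with arbitrary gains, not a
published model; the point is only that the higher system's output
becomes the lower system's reference, and only the lower system's
output reaches the environment:

    k1, k2, dt = 100.0, 50.0, 0.0001   # arbitrary gains and step size
    out, disturbance = 0.0, 1.0

    for t in range(300):
        qi = out + disturbance    # only the lower output acts on the world
        p1 = qi                   # lower-level perception
        p2 = 2.0 * p1             # higher-level perception, built from p1
        r1 = k2 * (8.0 - p2)      # higher output = reference FOR the lower loop
        e1 = r1 - p1              # lower-level error
        out += dt * k1 * e1       # lower loop integrates its own error

    print(round(p2, 2))           # about 7.92: p2 is held near its reference, 8.0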
----------------------------------------------------------------------
Martin Taylor (920902.1430) --
RE: exploration as result of systematic control process.

>I hadn't thought of this possibility. It comes close to an idea I
>had very many years ago, that organisms control to maintain a
>preferred level of variability in perception at all levels of
>abstraction.

The problem with ideas like this is that putting them into a model, a
block diagram that shows how they would work is very difficult. If a
system is going to control for variability at all levels of
abstraction, you have to show how it gets information about
variability from all these levels, and how its actions affect the
lower systems to change the variability in the necessary direction. I
can't imagine how that could be done without interfering with the
control systems at all those levels. And just what is it about an ECS
that you would vary in order to change its "variability?"

The same problem shows up in your suggestions about "alerting
functions". What is it that they are affecting in the controlling
systems? How do they know about the consequences of those effects?
What, exactly, are these systems supposed to do? I can see some
broadly-defined effects, but I don't see any models that would tell us
what to expect when connections of this sort are actually set up and
working.

We can't tell if conjectures like these are any good until they are
cast in the form of a model from which we could predict the
consequences of such an organization. To turn them into models, you
have to make specific propositions about the details. Until that's
done, there's nothing to test. You don't even know if the organization
you suggest is feasible on its own merits, much less descriptive of
behavioral organization.
----------------------------------------------------------------------
Penni Sibun (920902.1400) --

>well, of course, to interactionists, institutions are neither in the
>head nor in the environment, but.... this list, csgl, is an
>institution cause we are all participating in it. it's not my head,
>it's not in your head, and it's not out there in the environment
>somewhere just sitting around.

My problem with institutions is that they're nouns, whereas what goes
on between organisms are processes and interactions -- verbs. What
something like CSG-L "is" depends on who's looking at it. For me, it's
an ongoing conversation with a person who claims to be Penni Sibun,
someone who uses the name Rick Marken, a writer who expresses himself
very much like Chuck Tucker, and so forth. The only real person on
this net is me. From someone else's viewpoint, I'm imaginary. What the
net "is" depends on each user's conception of it, and that conception
isn't out there in the world. It's in a head.

>>If you objectify objects, you can't explain how the same
>>object can have different roles depending on who's using it for what
>>purpose.

>i don't see how this follows (unless ``objectify'' has a lot of
>connotations here).

When the role is in the object, Out There, it's treated like a
physical property of the object. Everyone expects the role to be
self-evident to everyone else, as the role of a lawnmower is to cut
grass. By objectifying the object, I mean projecting one's private
experience of it into a world that is assumed to be identical for all
observers.

>>That's what I was asking about. If it's nondeterministic in that
>>sense, then what selects which of the possible paths will be taken?

>that's not the machine's problem:

>``...we shall...permit several possible `next states' for a given
>combination of current state and input symbol. the automaton
>[machine]...may choose at each step to go into any one of these legal
>next states; the choice is not determined by anything in our model,
>and is therefore said to be _nondeterministic_.''

Doesn't your quote say explicitly that it _is_ the machine's problem?
"The automaton [machine] may choose at each step to go into any one of
these legal next states..."

I think that what the quote means is that it isn't the _programmer_'s
problem -- that is, the simulation or model or whatever is making its
own choices based on current experience, rather than having those
choices programmed in from the start. But if you include the state of
the environment and the criteria for choosing, then the automaton is
deterministic, even if the programmer didn't determine its choices.
The only way I can see to produce a nondeterministic outcome is to
make the choice using random numbers or the output of a Geiger
counter.
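
In a quick Python sketch (the states and alphabet are invented for
illustration), the textbook definition looks like this: the transition
relation returns a SET of legal next states, and whatever rule you
bolt on to pick one, even a seeded random choice, determines the run:

    import random

    # hypothetical nondeterministic transition relation:
    # (state, symbol) -> set of legal next states
    delta = {
        ("q0", "a"): {"q0", "q1"},
        ("q1", "a"): {"q1"},
        ("q1", "b"): {"q2"},
    }

    def run(word, choose):
        state = "q0"
        for symbol in word:
            options = delta.get((state, symbol), set())
            if not options:
                return None          # the run dies: no legal next state
            state = choose(sorted(options))  # the choosing rule is outside the formalism
        return state

    random.seed(0)                     # fix the "Geiger counter" and the run repeats
    print(run("aab", random.choice))   # q2 or None, depending on the choices made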

>>>no, a history is what happens between one point in time and
>>>another point in time.

>>Do you mean that there are continuous processes taking place BETWEEN
>>nodes? I had thought that an operation simply jumped you from one
>>node to another.

>in section ``ontology of cooking tasks,'' a&h say: ``a _history_ is a
>function from natural numbers (representing `time') to world states.''

>roughly, a history is a record of what happens during a run of toast.

This still doesn't answer my question. If an operation like "put pan
on burner" is carried out, I assume that the history would record the
pan first in some other place, then after the operation, the pan on
the burner. What I was asking was whether the position of the pan was
recorded in this history over the whole trajectory between "on the
counter" and "on the burner," or whether just the end positions were
recorded. In other words, were the positions BETWEEN the nodes
recorded, or just the positions AT the nodes? My impression is that
histories don't really use the REAL numbers to represent time, but
only the CARDINAL numbers. Is "history" time really physical time? If
you move a pot of coffee from the burner to the table too fast, will
it slop over?

The reason I ask is that unless physical time is used, none of the
physical properties of the elements of the task can be dealt with.
Only abstract properties from categories up (in my scheme) can be
handled.
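
In Python terms, my reading of that definition is a mapping from
discrete ticks to snapshots (the states here are invented for
illustration); nothing between the ticks exists in such a history:

    # a history in the a&h sense, as I read the quoted definition:
    # natural numbers (discrete ticks) -> world states
    history = {
        1: {"pan": "on counter", "burner": "off"},
        2: {"pan": "on burner", "burner": "off"},  # the move itself is not represented
        3: {"pan": "on burner", "burner": "on"},
    }

    # physical time would need something like pan_position(t) for real-valued
    # t, which this structure cannot express; hence the slopping-coffee question
    print(history[2]["pan"])   # "on burner"; there are no intermediate positions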

>>What I don't get are all these nodes and arcs and that
>>stuff, which sounds like the innards of a computer program, not
>>something happening with someone doing it. chacun a son gout.

>oh, absolutely. can you see that that's exactly my reaction when
>y'all get into the details of reference signals etc. and all that
>math? in both cases, the theories and their models are disjoint, and
>it's an enormous act of faith (or science?!) to maintain the belief
>that they have anything to do w/ each other.

This is why we don't start with the math, but with the phenomenon of
control. Once you have experienced and understood control as something
happening, the PCT explanation becomes the only feasible one; it's
clear that no other explanation can even come close to handling what
you see happening. It would, in fact, take an enormous act of faith to
believe that the control-theoretic explanation is wrong.

By this I don't mean that a specific model, assembled for purposes of
analysis, is unique or self-evident. I mean that the CONCEPT of which
the specific model is one embodiment seems unavoidable. If you're
holding back from accepting PCT, it's only because you haven't seen
the CONCEPT yet. This can't be done through mathematical analysis. You
just have to look at your own experiences and actions, and realize
that everything you know about them occurs first as a perception. And
that all your actions are organized to make perceptions be a certain
way, or not be a certain way. Until that gets through, none of the
math will seem particularly interesting or compelling.
----------------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 920903 19:00]
(Bill Powers 920903 0800)

(To Penni Sibun)

>By this I don't mean that a specific model, assembled for purposes of
>analysis, is unique or self-evident. I mean that the CONCEPT of which
>the specific model is one embodiment seems unavoidable. If you're
>holding back from accepting PCT, it's only because you haven't seen
>the CONCEPT yet.

(To me)

>We can't tell if conjectures like these are any good until they are
>cast in the form of a model from which we could predict the
>consequences of such an organization. To turn them into models, you
>have to make specific propositions about the details. Until that's
>done, there's nothing to test. You don't even know if the organization
>you suggest is feasible on its own merits, much less descriptive of
>behavioral organization.

I don't have or expect to have a specific model of alerting systems. I can
think of many. But the situation is like that of PCT itself. The basic
proposition of PCT, that "all (purposeful) behaviour is the control of
perception" is incontrovertible. There are many models that might instantiate
it. So with the alerting systems. Given the basic proposition of PCT,
their existence in an organism supplied with more sensory than motor degrees
of freedom is incontrovertible. There are many models for how they might
work.

As for "controlling for variability," I was more musing than proposing, but
if you want a proposal, you mentioned a while back that if you had two
one-way control systems back-to-back to make a two-way one, and if they had
the right (square-law?) control function, then the pair could be controlled
both for gain and for reference level. The normal (S-R?) view of adaptation
is a change of gain in the perceptual input, but there's no intrinsic reason
why that sensitivity shouldn't be in the output (also?).

I see here added complexity, but no implausibility, and I don't plan to
model it (yet?). It doesn't seem to be a mainstream problem, as the
alerting structure is.
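
For concreteness, here is the arithmetic of that back-to-back
arrangement as I understand it, a sketch under my own assumptions
rather than a worked model. Where both one-way systems are active, the
square laws cancel to a linear characteristic: the zero sits at the
midpoint of the two references and the slope is set by their
separation, so the pair is adjustable in both reference and gain:

    def pair_output(p, r1, r2):
        e1 = max(0.0, r1 - p)    # one-way system that acts when p falls below r1
        e2 = max(0.0, p - r2)    # opposed one-way system that acts when p exceeds r2
        return e1**2 - e2**2     # square-law outputs, summed

    # with r2 < p < r1 the algebra gives (r1 + r2 - 2p)(r1 - r2):
    # zero at the midpoint (5.0), slope set by the separation (4.0)
    for p in (4.0, 5.0, 6.0):
        print(p, pair_output(p, r1=7.0, r2=3.0))   # 8.0, 0.0, -8.0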


=============

Unless I see something else for response in the next few minutes, this is
likely to be my last posting until Sept 24. (But my addiction to CSG-L may
make a liar of me until Saturday afternoon).

Martin

(ps 920903.2200)

   [From Bill Powers (920903.0800)]

   Penni Sibun (920902.1400) --

   >well, of course, to interactionists, institutions are neither in the
   >head nor in the environment, but.... this list, csgl, is an
   >institution cause we are all participating in it. it's not my head,
   >it's not in your head, and it's not out there in the environment
   >somewhere just sitting around.

   My problem with institutions is that they're nouns, whereas what goes
   on between organisms are processes and interactions -- verbs.

yes, i agree. that's suggested by ``participation.''

   What something like CSG-L "is" depends on who's looking at it. For
   me, it's an ongoing conversation with a person who claims to be
   Penni Sibun, someone who uses the name Rick Marken, a writer who
   expresses himself very much like Chuck Tucker, and so forth. The
   only real person on this net is me. From someone else's viewpoint,
   I'm imaginary. What the net "is" depends on each user's conception
   of it, and that conception isn't out there in the world. It's in a
   head.

well, i don't suppose solipsism is very useful. unless you want to
assume that everything is imaginary, this list is something real.
its existence depends on our actively maintaining it.

   >``...we shall...permit several possible `next states' for a given
   >combination of current state and input symbol. the automaton
   >[machine]...may choose at each step to go into any one of these legal
   >next states; the choice is not determined by anything in our model,
   >and is therefore said to be _nondeterministic_.''

   Doesn't your quote say explicitly that it _is_ the machine's problem?
   "The automaton [machine] may choose at each step to go into any one of
   these legal next states..."

   I think that what the quote means is that it isn't the _programmer_'s
   problem -- that is, the simulation or model or whatever is making its
   own choices based on current experience, rather than having those
   choices programmed in from the start. But if you include the state of
   the environment and the criteria for choosing, then the automaton is
   deterministic, even if the programmer didn't determine its choices.
   The only way I can see to produce a nondeterministic outcome is to
   make the choice using random numbers or the output of a Geiger
   counter.

well, i probably confused you by saying ``machine''--let's stick to
``automaton.'' at any rate, neither is a program: an automaton is a
description, a theoretical abstraction. one can perfectly rigorously
say whether an automaton is deterministic or not; i gave the def.
above. determinism does not describe what the automaton does, it
describes how it is built.

i recommend the book i cited for learning about computational theory.
interestingly, chomsky can be blamed for a lot of it.

   >in section ``ontology of cooking tasks,'' a&h say: ``a _history_ is a
   >function from natural numbers (representing `time') to world states.''
   >roughly, a history is a record of what happens during a run of toast.

   This still doesn't answer my question. If an operation like "put pan
   on burner" is carried out, I assume that the history would record the
   pan first in some other place, then after the operation, the pan on
   the burner. What I was asking was whether the position of the pan was
   recorded in this history over the whole trajectory between "on the
   counter" and "on the burner," or whether just the end positions were

the natural numbers are the positive integers starting from 1, so a
function from natural numbers to world states implies discrete rather
than continuous time.