tabula rasa world

[From Mervyn van Kuyen (970917 CET)]

If Perceptual Control means that control skills are acquired in
order to increase the match between reality and a reference, why can't
this network shape its own reference as well? That would also increase the
knowledge of the network. Since knowledge is valued by every
economy, a successful culture will provide its members with an environment
that enhances the gathering of knowledge and allocates the resulting skills
to key positions.

In other words, we shouldn't treat a cultural environment as a tabula
rasa, although I prefer to do so with the initial mind.

Mervyn van Kuyen - mervyn@xs4all.nl

[From Bill Powers (970917.1934 MDT)]

Mervyn van Kuyen (970917 CET) --

Welcome aboard, Mervyn!

If Perceptual Control means that control skills are acquired in
order to increase the match between reality and a reference,

Better start again. Perceptual control means that a control skill is
acquired in order to increase the match between a perception and a
reference. Perceptions are, we presume, a function of some reality, but
what function we do not know (and quite probably can't know). So the
process of acquiring control means basically being able to act on the
external world to make our perceptions take on states we wish them to have,
while not conflicting with each other.
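As a rough illustration, the loop described above can be sketched in a few
lines of Python. This is only an illustration of the principle, not code from
the PCT literature; the identity input function, the gain, and all names are
assumptions:

```python
# Minimal sketch of a single perceptual control loop. The controller acts
# on the external world so that its *perception* approaches the reference;
# the disturbance is never sensed directly.

def run_control_loop(reference, disturbance, gain=0.5, steps=100):
    """Simulate a simple integrating controller."""
    output = 0.0
    history = []
    for _ in range(steps):
        # Environment: the controlled quantity reflects both the system's
        # output and an independent disturbance.
        controlled_quantity = output + disturbance
        # Input function: here simply an identity mapping (an assumption).
        perception = controlled_quantity
        # Comparator: the error drives the output function.
        error = reference - perception
        output += gain * error
        history.append(perception)
    return history

history = run_control_loop(reference=10.0, disturbance=-3.0)
# The perception converges on the reference despite the disturbance.
```

Note that the controller never "knows" the disturbance; it only acts on the
error between perception and reference, which is the point of the
perceptual-control formulation.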

why can't
this network shape its own reference as well?

If you look into "HPCT" (hierarchical perceptual control theory), you will
find that the theory says we are always adjusting our own references --
that is how higher levels of control operate, by adjusting the reference
signals for lower systems.
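The idea that higher levels act by adjusting lower-level reference signals can
be illustrated with a toy two-level simulation. The gains and the
factor-of-two input function are arbitrary assumptions, and this is a drastic
simplification of HPCT:

```python
# Toy two-level hierarchy: the higher system never acts on the world
# directly; its output *is* the reference signal of the lower system.

def simulate_two_levels(high_reference, disturbance, steps=200):
    low_output = 0.0
    low_reference = 0.0   # will be set by the higher level
    for _ in range(steps):
        # Lower-level loop: fast control of a directly sensed variable.
        low_perception = low_output + disturbance
        low_error = low_reference - low_perception
        low_output += 0.5 * low_error
        # Higher-level loop: perceives a function of lower perceptions
        # and acts by slowly adjusting the lower reference.
        high_perception = 2.0 * low_perception
        high_error = high_reference - high_perception
        low_reference += 0.1 * high_error
    return low_perception, high_perception

low_p, high_p = simulate_two_levels(high_reference=8.0, disturbance=1.0)
# high_p settles near 8.0; the lower reference was shaped to make it so.
```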

In other words, we shouldn't treat a cultural environment as a tabula
rasa, although I prefer to do so with the initial mind.

I agree that each new organism has to organize itself in the presence of
all the other organisms that are already there. "Culture" is simply the sum
total of all the other (interacting) control systems and their individual
aims and control methods. Of course we must not think of culture as a
single thing; there are many cultures, often overlapping, and individuals
can belong to more than one at the same time (the scientist discussing how
he rears his children at a meeting of Bowlers for Christ).

Best,

Bill P.

[From Mervyn van Kuyen (970918 CET)]

Bill Powers (970917.1934 MDT)

Mervyn van Kuyen (970917 CET) --

If Perceptual Control means that control skills are acquired in
order to increase the match between reality and a reference,

Better start again. Perceptual control means that a control skill is
acquired in order to increase the match between a perception and a
reference. Perceptions are, we presume, a function of some reality, but
what function we do not know (and quite probably can't know).

A network that is allowed to change the transformation of its sensory
input before this input is compared to a reference will escape effective
reinforcement by making perception and reference alike (probably *without*
exerting any useful control). Therefore, I prefer calling this input
'reality' and not perception. Perception is the result of transformations
that are constantly adjusted, I believe, like the reference.

why can't
this network shape its own reference as well?

If you look into "HPCT" (hierarchical perceptual control theory), you will
find that the theory says we are always adjusting our own references --
that is how higher levels of control operate, by adjusting the reference
signals for lower systems.

I don't see why a network shouldn't be allowed to build its
reference from the bottom up. A servoing system has the innate tendency
to hunt and focus (not in the negative, system-dynamical sense), so why not
suggest that this is what we all start with in life? I believe
in the tabula rasa mind. The micro-architecture of the brain is not
genetically encoded (cf. Edelman).

Regards, Mervyn

[From Bill Powers (970918.0806 MDT)]

Mervyn van Kuyen (970918 CET)--

A network that is allowed to change the transformation of its sensory
input before this input is compared to a reference will escape effective
reinforcement by making perception and reference alike (probably *without*
exerting any useful control). Therefore, I prefer calling this input
'reality' and not perception. Perception is the result of transformations
that are constantly adjusted, I believe, like the reference.

Yes, what you describe does happen: we sometimes change the goal when we
can't make a perception match it. With regard to the rest of your post,
however, you're talking about processes that happen on two very different
time scales. The adaptation of a neural net to create a transformation
between a set of inputs and a perceptual signal representing some aspect of
them happens on a very slow time scale; once you have learned to recognize
"distance," for example, your way of perceiving distance is not likely to
change much for the rest of your life. However, given this transformation,
you will perceive many different distances which can change over a time
measured in fractions of a second. You can act on the world to make a
perceived distance match a reference distance; that is what PCT is about.
Control of perception means controlling the _amount_ of a _specific kind_
of perception; the amount can be affected in real time; the kind can't.
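The distinction between the fixed *kind* of a perception and its rapidly
varying *amount* can be sketched as follows, using distance as the example
kind: the input function is frozen, and only the amount it reports is
controlled. The names and constants are illustrative, not from the PCT
literature:

```python
# "Kind" vs. "amount": perceive_distance is the fixed kind of perception
# (a transformation learned long ago); control acts in real time only on
# the amount it reports.
import math

def perceive_distance(x, y):
    # The fixed input function: distance of a point from the origin.
    return math.hypot(x, y)

def control_distance(reference, start_xy, gain=0.2, steps=100):
    x, y = start_xy
    for _ in range(steps):
        perception = perceive_distance(x, y)  # changes every step
        error = reference - perception
        # Act radially to change the amount of perceived distance;
        # the input function itself is never modified.
        if perception > 0:
            x += gain * error * (x / perception)
            y += gain * error * (y / perception)
    return perceive_distance(x, y)

final = control_distance(reference=5.0, start_xy=(1.0, 2.0))
# final ends up close to the reference distance of 5.0
```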

I don't see why a network shouldn't be allowed to build its
reference from the bottom up. A servoing system has the innate tendency
to hunt and focus (not in the negative, system-dynamical sense), so why not
suggest that this is what we all start with in life? I believe
in the tabula rasa mind. The micro-architecture of the brain is not
genetically encoded (cf. Edelman).

I agree, and that is part of the more general PCT model that includes the
process of reorganization. I agree that we do not start life with the
micro-architecture of the brain in place. However, what I call "levels of
perception" do exist in that the specific kinds of computations required to
form each level are inherited as types of neurons and abilities to change
connections in different layers of the brain. Thus if we didn't have a
certain pre-existing architecture, we would never learn to recognize (for
example) relationships. Many simpler animals can't.

The PCT architecture consists of a large number (thousands, at least) of
elementary control systems at many levels, each of which (once it becomes
organized) controls a single one-dimensional perceptual signal by sending
outputs to serve as reference signals for multiple lower-level systems. No
system sets its own reference signal; the reference signal must come from
outside it. No system reorganizes itself; it doesn't have any properties
(in the basic model) that are concerned with changing its organization.
Reorganization is something that is done TO a control system by some other
process functionally (if not physically) outside it. A control system does
not form itself; it comes into being as a result of some other process that
acts by forming, and later changing, control systems.
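The point that reorganization is done *to* a control system by a process
outside it can be illustrated with a toy "reorganizer" that makes random
parameter changes and retains those that reduce chronic error. This is
loosely inspired by the E. coli-style reorganization discussed in PCT
writings, but every detail below is invented for illustration:

```python
# Sketch of reorganization as a process outside the control system: random
# parameter changes are made *to* the system, and changes that reduce
# chronic error are retained.
import random

def chronic_error(gain):
    # Accumulated squared error of a simple loop whose gain is a parameter
    # the loop itself has no means of changing.
    output, total = 0.0, 0.0
    for _ in range(50):
        error = 10.0 - output
        output += gain * error
        total += error * error
    return total

def reorganize(run_system, initial_gain, trials=500, seed=1):
    random.seed(seed)
    gain = initial_gain
    best = run_system(gain)
    for _ in range(trials):
        candidate = gain + random.uniform(-0.2, 0.2)
        err = run_system(candidate)
        if err < best:            # keep only changes that reduce error
            gain, best = candidate, err
    return gain, best

gain, err = reorganize(chronic_error, initial_gain=0.01)
# The retained gain drifts toward a value that controls well.
```

The control loop itself contains nothing that changes its own parameters;
all change comes from the separate reorganizing process, which only sees
the resulting error.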

A reference signal, in PCT, is a signal to which a perceptual signal is
compared. By definition, a reference signal is an independent variable
relative to the control system it enters. Its source lies elsewhere. And it
is specifically a _signal_ having the same physical form as a perceptual
signal. So a low-level control system (such as a spinal reflex) can't set
its own reference signal. That's not the organization of the PCT model.

If you're going to propose a different model, fine. But as long as you're
talking about PCT, there is a large number of interacting properties of the
model that have to be taken into account. If you start offering
alternatives to one part of the model, everything in the model will be
affected, and you should know what the effects of your changes will be.
Unless you simply want to start from scratch, you have to become familiar
with the properties of the model as it exists before you start fiddling
with it -- nothing is present in the existing model without a reason,
including the idea that reference signals for one level of control come
from systems of a higher level.

Best,

Bill P.

[From Mervyn van Kuyen (970918 21:00 CET)]

Mervyn van Kuyen (970918 7:00 CET)--

A network that is allowed to change the transformation of its sensory
input before this input is compared to a reference will escape effective
reinforcement by making perception and reference alike (probably *without*
exerting any useful control). Therefore, I prefer calling this input
'reality' and not perception. Perception is the result of transformations
that are constantly adjusted, I believe, like the reference.

[Bill Powers (970918.0806 MDT)]
Yes, what you describe does happen: we sometimes change the goal when we
can't make a perception match it. With regard to the rest of your post,
however, you're talking about processes that happen on two very different
time scales. The adaptation of a neural net to create a transformation
between a set of inputs and a perceptual signal representing some aspect of
them happens on a very slow time scale...

Actually, I didn't mean *before* in the sense that a system will
immediately change its structure. I meant it structurally: if a system can
change a transformation that structurally (and temporally) lies before a
comparator, this makes an escape from its criterion very easy. The
point is that I would call the patterns that arise structurally behind a
comparator 'perception', but I believe that PCT does the same thing:

Control of perception means controlling the _amount_ of a _specific kind_
of perception; the amount can be affected in real time; the kind can't.

I agree, and this is a property that PCT and the model that I am
implicitly discussing (1) have in common with all servoing systems: these
systems act until reference and measurement match.

No system sets its own reference signal; the reference signal must come
from outside it...

I agree, but a reference can be provided from outside the brain as well.

If you're going to propose a different model, fine. But as long as you're
talking about PCT, there is a large number of interacting properties of
the model that have to be taken into account.

So, in fact, I am referring to a different model. In my model the reference
(its world model) becomes embodied in a fully recurrent network during a
stage in which it can exert no or minimal control. During this growing up,
its environment is supposed to provide a 'healthy' example (reference).
Later on, the system acquires physical control, which it will in turn
use to recreate this example. In every stage, however, knowledge and
behavior keep working in tandem, pursuing a single goal: minimization of
the mismatch between reference and input. Since PCT shares this goal (if I
may interpret your explanation this way), I am very interested in
the insights that the exploration of PCT has produced. Since these two
models act on very similar principles (cybernetics), I think any
discussion will be very interesting.

I am not intending to fiddle with PCT, just to familiarize myself with
it, think it over, and discuss it :-) Thanks for your extensive and precise
reply; I would be very glad if we could continue this discussion!
Have a nice day,

Mervyn


===================================================

(1) "Feedback in Knowledge-Oriented Neural Networks"
      www.xs4all.nl/~mervyn/vankuyen.html - an article I presented at a
      Dutch conference (GRONICS'97) on information technology

[Martin Taylor 970919 10:40]

Mervyn van Kuyen (970918 21:00 CET)

(1) "Feedback in Knowledge-Oriented Neural Networks"
     www.xs4all.nl/~mervyn/vankuyen.html - an article I presented at a
     Dutch conference (GRONICS'97) on information technology

Figure 1 seems to be important in understanding this article, but on two
separate attempts Netscape gave me the icon for an inaccessible image.
Figure 2 comes out with no problem. When I tried to access the gif
by addressing it directly (GRAPHICS/servo.gif), I got the message that
it was not on this server. Again, no problem with Figure 2.

Martin

[From Bill Powers (970919.0949 MDT)]

Mervyn van Kuyen (970918 21:00 CET)--

No system sets its own reference signal; the reference signal must come
from outside it...

I agree, but a reference can be provided from outside the brain as well.

Not in the PCT model. All that comes into the PCT model from outside the
brain is the set of sensory signals -- first-order perceptual signals --
representing intensity of stimulation of sensory nerve endings. There is no
way for a reference signal to reach the comparator of a control system
directly from outside the organism. All reference signals are set by other
systems inside the organism.

So, in fact, I am referring to a different model. In my model the reference
(its world model) becomes embodied in a fully recurrent network during a
stage in which it can exert no or minimal control. During this growing up,
its environment is supposed to provide a 'healthy' example (reference).
Later on, the system acquires physical control, which it will in turn
use to recreate this example. In every stage, however, knowledge and
behavior keep working in tandem, pursuing a single goal: minimization of
the mismatch between reference and input. Since PCT shares this goal (if I
may interpret your explanation this way), I am very interested in
the insights that the exploration of PCT has produced. Since these two
models act on very similar principles (cybernetics), I think any
discussion will be very interesting.

I can see some general similarities in the conclusions we have drawn from
our models, but there are some very great differences as well. I will look
up your paper and perhaps will have some more relevant comments later.

Best,

Bill P.