Memory as Reference Signal

[From Rick Marken (2004.12.22.1530)]

Richard Thurman (2004.12.22.1345)

The cruise control explanation you gave helps me understand the
purpose -- but when I code up such a control system, I find that the
reference signal is already in 'memory'.

I think the deal with the cruise control is to have the system that is
selecting the speed (the driver, in the case of cruise control) indicate to
the system controlling speed (the cruise controller) when the currently
perceived speed should be the reference for speed.
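This "set" mechanism can be sketched as a sample-and-hold: pressing the button copies the currently perceived speed into storage, and that stored value then serves as the reference. A minimal sketch (the class and method names are invented for illustration; real cruise controllers differ):

```python
class CruiseController:
    """Toy speed controller whose reference comes from a memory write."""

    def __init__(self, gain=0.5):
        self.gain = gain
        self.reference = None  # the 'remembered' speed; empty until set

    def set_button(self, perceived_speed):
        # The higher-level system (the driver) indicates that the currently
        # perceived speed should become the reference: a simple memory write.
        self.reference = perceived_speed

    def output(self, perceived_speed):
        # Ordinary proportional control against the stored reference.
        if self.reference is None:
            return 0.0
        return self.gain * (self.reference - perceived_speed)

cc = CruiseController()
cc.set_button(55.0)      # driver latches the current speed as the reference
print(cc.output(50.0))   # car below reference: positive (accelerating) output
```

The point of the sketch is that the reference "in memory" is written by a different system than the one that uses it.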

I think what would be interesting is to try to develop a control model of a
typical memory situation: you are shown a picture, say, and asked to say the
pictured person's name. This is like a paired associate memory task. Show
pairs of nonsense syllables, say GUC-KOY and DAS-RIV, and then show one
component of the pair and ask the person to recall the other. I believe
that a person, when shown one component of the pair -- say GUC -- is
controlling for saying the syllable that went with that component.

I think a model of this kind of memory process would be very interesting. It
is clearly a control task but I don't really know how to build a model of
it. I think the model would have to involve some kind of imperfect
representation of the syllable that goes with the other syllable. But it's
way too complicated for me.

Any ideas anyone?

Best

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400

--------------------

This email message is for the sole use of the intended recipient(s) and
may contain privileged information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

[From Bruce Gregory (2004.1222.1843)]

Rick Marken (2004.12.22.1530)

Richard Thurman (2004.12.22.1345)

The cruise control explanation you gave helps me understand the
purpose -- but when I code up such a control system, I find that the
reference signal is already in 'memory'.

I think the deal with the cruise control is to have the system that is
selecting the speed (the driver, in the case of cruise control) indicate to
the system controlling speed (the cruise controller) when the currently
perceived speed should be the reference for speed.

And exactly how would the driver know when the currently perceived
speed should be the reference for speed? It might seem to someone that
you have introduced a deus ex machina to "solve" the memory problem.

The enemy of truth is not error. The enemy of truth is certainty.

[From Bill Powers (2004.12.22.1653 MST)]

Bruce Gregory (2004.1222.1843) --

And exactly how would the driver know when the currently perceived
speed should be the reference for speed? It might seem to someone that
you have introduced a deus ex machina to "solve" the memory problem.

How about trying for a positive contribution, Bruce? You're certainly smart
enough to do it, and nobody expects a complete solution when we're still
feeling our way through the problem. What do you suggest?

Best,

Bill P.

[From Bruce Gregory (2004.1222.1936)]

Bill Powers (2004.12.22.1653 MST)

Bruce Gregory (2004.1222.1843) --

And exactly how would the driver know when the currently perceived
speed should be the reference for speed? It might seem to someone that
you have introduced a deus ex machina to "solve" the memory problem.

How about trying for a positive contribution, Bruce? You're certainly smart
enough to do it, and nobody expects a complete solution when we're still
feeling our way through the problem. What do you suggest?

Perhaps you have forgotten that you rejected my suggestion (Hebbian
learning). Without a way to learn, I don't see how the problem can be
solved. In practice, one simply invokes a higher level in the control
system (the driver in this case) that just happens to be controlling
the necessary perception. In other words, pick the starting conditions
that ensure that the desired action will be carried out. I don't find
this approach very satisfactory, but I admit that I am in the minority.

The enemy of truth is not error. The enemy of truth is certainty.

[From Bill Powers (2004.12.23.0120 MST)]

Bruce Gregory (2004.1222.1936) --

Perhaps you have forgotten that you rejected my suggestion (Hebbian
learning). Without a way to learn, I don't see how the problem can be
solved.

Is Hebbian learning a model? I've never been able to understand how it's
supposed to work. How about trying to explain how it would solve the
"cruise control" problem?

Anyway, I wasn't describing learning; just performance. Most of what we do
most of the time doesn't involve learning. I think we have to understand
performance before we can understand learning.

In practice, one simply invokes a higher level in the control
system (the driver in this case) that just happens to be controlling
the necessary perception.

The driver is controlling some variable by means of setting speed to a
specific value. I remember the last time I used cruise control, going down
a long hill on the way into Durango and wanting to keep my speed at 55 (the
limit, enforced) without having to keep an eye on the speedometer, and
without having to keep my foot on the accelerator (stiff ankle). Speed
control was a means to other ends (several) and it was convenient to turn
it over to the machine.

Without a cruise control, similar ends (though not all of them, such as
resting the ankle) are carried out by using a foot on the accelerator. The same
higher-level goals are achieved by sending a reference signal to
lower-order control systems inside me and letting them handle the details.
I already know how to do all the things that need doing -- no serious
learning necessary.

But if you can see how Hebbian learning would clarify this model, by all
means tell us.

Best,

Bill P.


[From Bruce Gregory (20041223.0756)]

Bill Powers (2004.12.23.0120 MST)

I already know how to do all the things that need doing -- no serious
learning necessary.

But if you can see how Hebbian learning would clarify this model, by all
means tell us.

It seems that we are trying to solve two very different problems. Your
concern is with how things happen; my concern is with why one thing
rather than another happens. Now that I see this, I understand much
of the miscommunication that has transpired between us over the years.
At one point I said you were taking an engineering point of view. You
took this to be an insult, although I did not intend it as such.
Engineers are primarily concerned with how. Your description (and
Rick's) of a cruise control in operation meets all the "how"
requirements. My "why" questions can always be answered by postulating
a higher-level system that determines why by controlling an appropriate
perception.

PCT seems to me to be a theory of robotics pressed into service as a
theory of human experience. Not only is consciousness not explained in
PCT, it is a real annoyance. The hierarchy works perfectly well without
it (which is why I call it an epiphenomenon -- it is a real phenomenon,
but it does no heavy lifting in the model -- in fact, it does no
lifting at all). The same can be said for memory and emotions. A good
robot does perfectly well without either, thank you.

I'm sorry for the confusion that I caused by not realizing the
difference between the problems we were trying to solve. I will do my
best to see that that does not happen in the future.

The enemy of truth is not error. The enemy of truth is certainty.

[From Bill Powers (2004.12.23.0820 MST)]

Bruce Gregory (20041223.0756)--

PCT seems to me to be a theory of robotics pressed into service as a
theory of human experience.

Funny way to put it. "Pressed into service" sounds like a makeshift,
desperation move, whereas "pressing" control theory "into service" in the
early days of cybernetics was probably the greatest step forward in
understanding living systems ever made. I thought you agreed with that at
one point. What changed your mind?

Not only is consciousness not explained in PCT, it is a real annoyance.

Speak for yourself; I'm rather glad it exists, though I try not to say
annoying things about it. There wouldn't be much experience without it.
However, the functioning of brain and body which constitute the vehicle
employed by consciousness is explainable without it, just as we can
understand (but not design) a car without understanding the driver.

The hierarchy works perfectly well without it (which is why I call it an
epiphenomenon -- it is a real phenomenon, but it does no heavy lifting in
the model -- in fact, it does no lifting at all). The same can be said
for memory and emotions. A good robot does perfectly well without either,
thank you.

If consciousness directs reorganization where it is needed, as I have
proposed, then the hierarchy would not work nearly as well without it, if
it could get itself organized at all. As to "heavy lifting," are you
proposing that consciousness (rather than neural control systems) is
responsible for perception, comparison, action, memory, and emotion?

The latter two, memory and emotion, would probably make a robot function
much better if we put them into it rather than leaving them out. Look what
including memory did for cruise controls, which wouldn't work at all
without it. Emotions have to do with self- and species-preservation and
avoidance of injury, so I presume a robot with them would do better than a
robot without them.

I'm sorry for the confusion that I caused by not realizing the
difference between the problems we were trying to solve. I will do my
best to see that that does not happen in the future.

I hadn't noticed that you are trying to solve (as opposed to cause) a
problem. At least you haven't said what it is. Does Hebbian learning, which
has to do with the "strength" of active synapses in "cell assemblies" being
affected by other neural signals, have something to do with consciousness?
What problem does that model solve that PCT doesn't solve?

Best,

Bill P.

[From Rick Marken (2004.12.23.0950)]

Bruce Gregory (2004.1222.1843)--

Rick Marken (2004.12.22.1530)

I think the deal with the cruise control is to have the system that is
selecting the speed (the driver, in the case of cruise control)
indicate to the system controlling speed (the cruise controller)
when the currently perceived speed should be the reference for speed.

And exactly how would the driver know when the currently perceived
speed should be the reference for speed? It might seem to someone that
you have introduced a deus ex machina to "solve" the memory problem.

I think it once occurred to an executive at GM that in order to have cruise
control you would have to introduce a deus ex machina to solve the control
problem. Proving only that it's as easy to find fatuity in the executive
suites of industry as in the hallowed halls of Harvard.

In fact, I can think of how the cruise control itself could be augmented to
know when the currently perceived speed should be the reference for speed.
Just include a higher level system in the cruise controller that is setting
the speed of the cruise control on the basis of some sensed consequence of
the speed, such as proximity to cars ahead and/or ambient speed of traffic.
Using just a system that controls proximity to other cars, we get a two
level hierarchy of systems like this (I'll use equations because that's
easier than diagrams):

Level 2 system: speed reference = f2(proximity reference - proximity perception)
                proximity perception = g2(proximity)

Level 1 system: accelerator output = f1(speed reference - speed perception)
                speed perception = g1(speed)

Environment:    speed = h0.1(accelerator output + physical laws of motion)
                proximity = h0.2(speed + other cars)

When the proximity perception equals the proximity reference, the speed
reference will be set to the perceived speed that results in the desired
perception of proximity.

Proximity control is typically handled by the driver, who resets the speed
reference for the cruise controller, turning it on when the car is going the
desired speed and off when the car is going too fast or slow. In the model,
the speed reference is varied continuously in order to keep the proximity
perception at the reference.
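For concreteness, this two-level model can be simulated numerically. This is only a rough sketch: the gains, the proportional forms of f1 and f2, and the simple kinematics standing in for h0.1 and h0.2 are all invented. Note also that with a purely proportional level-2 system the gap settles with a standing error (here at 150 rather than the reference of 100), since a nonzero speed reference requires a nonzero proximity error:

```python
def simulate(steps=2000, dt=0.01):
    """Two-level hierarchy: proximity control sets the speed reference."""
    proximity_ref = 100.0               # desired gap to the car ahead
    lead_pos, lead_speed = 200.0, 25.0  # the other car
    my_pos, speed = 0.0, 0.0
    for _ in range(steps):
        proximity = lead_pos - my_pos                  # g2: perceive the gap
        speed_ref = 0.5 * (proximity - proximity_ref)  # f2: set speed reference
        accel = 2.0 * (speed_ref - speed)              # f1: drive accelerator
        speed += accel * dt                            # h0.1: laws of motion
        my_pos += speed * dt
        lead_pos += lead_speed * dt                    # h0.2: other cars
    return lead_pos - my_pos, speed

gap, speed = simulate()
print(round(speed, 2))   # the car converges to the lead car's speed (25.0)
```

Adding an integrating term to f2 would remove the steady-state offset, but the qualitative behavior -- level 2 continuously varying level 1's reference -- is the same.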

RSM


[From Bruce Gregory (2004.1223.1515)]

Bill Powers (2004.12.23.0820 MST)

I hadn't noticed that you are trying to solve (as opposed to cause) a
problem.

Je démissionne. [I resign.]

The enemy of truth is not error. The enemy of truth is certainty.

[From Bruce Abbott (2004.12.23.1815 EST)]

Rick Marken (2004.12.22.1530)

I think what would be interesting is to try to develop a control model of a
typical memory situation: you are shown a picture, say, and asked to say the
pictured person's name. This is like a paired associate memory task. Show
pairs of nonsense syllables, say GUC-KOY and DAS-RIV, and then show one
component of the pair and ask the person to recall the other. I believe
that a person, when shown one component of the pair -- say GUC -- is
controlling for saying the syllable that went with that component.

I think a model of this kind of memory process would be very interesting. It
is clearly a control task but I don't really know how to build a model of
it. I think the model would have to involve some kind of imperfect
representation of the syllable that goes with the other syllable. But it's
way too complicated for me.

Any ideas anyone?

I think what you're asking for is a memory system that uses associative
addressing. In such a system, a part of the information stored in memory
serves as the "address" for whatever else is associatively linked to it.
I read an article some time ago (it might have appeared in Scientific
American) in which an "associative network" was trained to produce an
entire visual pattern if presented with a sufficient portion of that
pattern. In effect, when presented with the portion, it "remembered" the
whole to which the part belonged. The training involved negative feedback,
but negative feedback was not required during the production of the whole
from the part after training was complete.
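The kind of associative network described here can be illustrated with a toy Hopfield-style autoassociator (the patterns and network size below are invented; the article's actual network is not specified). Hebbian outer-product training stores whole patterns; afterwards, presenting part of a pattern retrieves the whole open loop, with no error signal at recall time:

```python
import numpy as np

# Two stored 8-unit patterns (values +/-1); purely illustrative.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])

# Hebbian outer-product rule: units that fire together get positive weights.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

def recall(cue, steps=10):
    """Iterate the network open loop until it settles on a stored pattern."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0  # break ties arbitrarily
    return state.astype(int)

cue = patterns[0].copy()
cue[4:] = 0                 # present only the first half of pattern 0
print(recall(cue))          # the network completes the whole pattern
```

The "address" is simply the partial pattern itself; no separate lookup step exists.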

So, I’d be wary of trying to make the retrieval of memories a control
process. But that statement requires elaboration. If I want to fire a
loaded pistol, I pull the trigger. Normally, this sets off a chain of
events that culminates in the shot going off. If the cartridge fails to
fire, I may then engage in activities intended to fix the problem
(pulling the trigger again, ejecting the round and replacing it with a
new one, etc.) until a pull of the trigger does result in a shot being
fired. I am engaging in a control process to correct the problem, but the
sequence of events from trigger-pull to shot fired happens open loop.
Similarly, when I try to remember “KOY” when given
“GUC,” if nothing comes to mind, I may do things in an attempt
to overcome the failure (e.g., saying “KOY” to myself again, or
looking at the answer and then repeating “GUC, KOY” until KOY
comes to mind when I see “GUC”). Here I am trying to remember
“KOY” and taking corrective action if I fail (i.e., I am
controlling for remembering “KOY”). Even so, the retrieval
mechanism itself may operate in open-loop fashion.

Bruce A.

[From Rick Marken (2004.12.23.1600)]

Bruce Abbott (2004.12.23.1815 EST)--

So, I'd be wary of trying to make the retrieval of memories a control process.

But it seems to be a control process in the sense that I know whether or not
I have retrieved the correct memory. This is true even when I am objectively
wrong. For example, when I see someone and try to remember their name, I
know whether or not I've retrieved their name. So there must be something
like a reference against which the memories being retrieved are compared.
This is what is very paradoxical to me: how do you tell whether or not you
have recalled correctly what you cannot recall? If you have a way of knowing
that the retrieved memory is correct, why did you have to retrieve
possibilities at all? Apparently, you already knew -- had in your mind --
what you were trying to bring to mind.

Similarly, when I try to remember "KOY" when given
"GUC," if nothing comes to mind, I may do things in an attempt to overcome the
failure (e.g., saying "KOY" to myself again, or looking at the answer and then
repeating "GUC, KOY" until KOY comes to mind when I see "GUC"). Here I am
trying to remember "KOY" and taking corrective action if I fail (i.e., I am
controlling for remembering "KOY"). Even so, the retrieval mechanism itself
may operate in open-loop fashion.

Yes. The retrieval mechanism could just "fire off" possibilities: SOY, TOY,
POI, KOY. But these possibilities must be compared to something like a
reference (set to the value "KOY" in this case) so that you are able to say
"ah, it's KOY". But if there is a reference in place for the correct
response to "GUC", why do you have to go through the process of retrieving
alternative possibilities?

So the problem, for me, is that I can't see how to design a model of recall
without putting something into the model (a way of determining whether or
not the correct memory has been recalled) that seems to obviate the need for
recalling possibilities from memory.
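One way this paradox might dissolve is to note that verifying a candidate can be much cheaper than generating it: the "reference" need only be a recognition check, not a full copy of the answer. The sketch below is entirely an illustrative assumption, not an established PCT mechanism; it stores an opaque trace of the paired syllable, and the generator fires off possibilities until one matches the trace:

```python
import hashlib

def trace(syllable):
    # An opaque stored 'familiarity trace': enough to recognize the right
    # answer when it comes to mind, but not enough to reconstruct it.
    return hashlib.sha256(syllable.encode()).hexdigest()

# Learned pairs, stored as cue -> trace of the associate (invented example).
memory = {"GUC": trace("KOY"), "DAS": trace("RIV")}

def recall(cue, candidates):
    """Generate-and-test: propose syllables until one matches the trace."""
    target = memory[cue]
    for candidate in candidates:
        if trace(candidate) == target:  # comparison against the 'reference'
            return candidate            # error goes to zero; search stops
    return None                         # retrieval failed; nothing recognized

print(recall("GUC", ["SOY", "TOY", "POI", "KOY"]))  # finds KOY
```

On this picture you can know that KOY is right without having been able to produce it outright, which is why recognition can succeed where free recall fails.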

Best regards

Rick


[From Bill Powers (2004.12.23.1650 MST)]

Bruce Abbott (2004.12.23.1815 EST)--

So, I'd be wary of trying to make the retrieval of memories a control
process. But that statement requires elaboration. If I want to fire a
loaded pistol, I pull the trigger. Normally, this sets off a chain of
events that culminates in the shot going off. If the cartridge fails to
fire, I may then engage in activities intended to fix the problem (pulling
the trigger again, ejecting the round and replacing it with a new one,
etc.) until a pull of the trigger does result in a shot being fired. I am
engaging in a control process to correct the problem, but the sequence of
events from trigger-pull to shot fired happens open loop. Similarly, when
I try to remember "KOY" when given "GUC," if nothing comes to mind, I may
do things in an attempt to overcome the failure (e.g., saying "KOY" to
myself again, or looking at the answer and then repeating "GUC, KOY" until
KOY comes to mind when I see "GUC"). Here I am trying to remember "KOY"
and taking corrective action if I fail (i.e., I am controlling for
remembering "KOY"). Even so, the retrieval mechanism itself may operate in
open-loop fashion.

These are all control processes, but that doesn't mean the association
itself is a control process (as you are pointing out). In the case of the
gun, the gun itself is part of the environment, and has the property that
if its trigger is pulled far enough, it will fire a bullet. It doesn't
matter what pulls the trigger.

However, a person can desire that the gun be fired, and that makes the
firing PART OF a control process. The person picks up the gun and pulls on
its trigger until the firing is perceived.

Memory association can be considered part of the process of perception.
Once a perception is received, it somehow evokes a second perception which
joins with the first to make a complete pattern (or the evoked perception
is treated by itself). If there is a reference level for the evoked
perception or the complete pattern, acting to bring about the first
("trigger") perception is part of a control process for evoking the whole
pattern or the second perception. That's what you were describing.

Of course as with all perceptions, the trigger perception can arise without
any action, and result in the evoked perception without any reference
signal or control action being involved. In that case the person will
simply experience the evoked perception or the complete pattern, but will
take no action to affect it. An uncontrolled evoked perception may also be a
disturbance of a higher-level perception. That would lead to higher-level
error signals and actions tending to counteract the effect on the
higher-level perception.

These are all straightforward deductions from the basic theory.

Best,

Bill P.

[From Bruce Gregory (2004.1223.1035)]

Bill Powers (2004.12.23.1650 MST)

Memory association can be considered part of the process of perception.
Once a perception is received, it somehow evokes a second perception
which joins with the first to make a complete pattern (or the evoked
perception is treated by itself).

These are all straightforward deductions from the basic theory.

Indeed. The Devil, of course, is in the details. "It somehow evokes"
covers a multitude of sins. No wonder memory is no challenge to the
existing model. One perception simply somehow evokes another one. I
wonder why it took me so long to appreciate the genius of the model.
Complex reasoning? No sweat. One thought somehow evokes another one.
Gee, this is fun!

The enemy of truth is not error. The enemy of truth is certainty.

[From Bruce Gregory (2004.1223.2000)]

Let me make sure I understand this process. Not knowing what is in the
refrigerator, I open the door. I see an orange. This perception somehow
evokes a perception at the program level: IF you see an orange in the
refrigerator AND you are hungry, pick up the orange, peel the orange,
and eat the orange. IF you are not hungry, close the refrigerator door.

These certainly seem to be straightforward deductions from the basic
theory. I think I'm finally getting the hang of this. It's much simpler
than I thought.

The enemy of truth is not error. The enemy of truth is certainty.

[From Bill Powers (2004.12.23.1935 MST)]

Bruce Gregory (2004.1223.1035) --

Bill Powers (2004.12.23.1650 MST)

Memory association can be considered part of the process of perception.
Once a perception is received, it somehow evokes a second perception
which joins with the first to make a complete pattern (or the evoked
perception is treated by itself).

These are all straightforward deductions from the basic theory.

Indeed. The Devil, of course, is in the details. "It somehow evokes"
covers a multitude of sins. No wonder memory is no challenge to the
existing model. One perception simply somehow evokes another one. I
wonder why it took me so long to appreciate the genius of the model.
Complex reasoning? No sweat. One thought somehow evokes another one.
Gee, this is fun!

I don't know how one perception evokes a remembered one, but I hope we
agree that this happens. If we agree on that, I hope it is obvious that we
can act to produce one perception as a means of evoking another one we've
forgotten -- for example, someone's name. You know, the guy who's always
got his toupee on crooked. And up pops the name. The deduction from theory
concerns how we can control for an unknown perception by acting to produce
a known one that may evoke the forgotten one, and thus satisfy the
reference condition of remembering it.
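The toupee example has the shape of a small search loop: act to bring known, associated perceptions to mind until one of them evokes the forgotten name. A sketch under invented data (the association table and the names in it are hypothetical):

```python
# Hypothetical associations between perceptions one can deliberately evoke
# and the memories they tend to bring up.
associations = {
    "crooked toupee": "Harold",
    "red scarf": "Maude",
}

def recall_name(cues_to_try):
    """Control for an unknown perception (a name) by producing known ones."""
    for cue in cues_to_try:           # action: call a known image to mind
        name = associations.get(cue)  # the cue may evoke the memory
        if name is not None:
            return name               # reference condition satisfied
    return None                       # still forgotten; try other cues later

print(recall_name(["crooked toupee"]))  # up pops the name
```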

By the way, "these" in the cited phrase "these are all deductions ..."
referred to the control processes that follow once we accept the premise
that some perceptions do evoke others from memory. You left all those
remarks of mine about control out, making it appear that I said the
evocation of perceptions from memory followed from the basic theory. Of
course it does not; it is an observed phenomenon, not a deduction from
theory. I hope our readers understand that I do not make stupid assertions
like the one you inadvertently seemed to construct out of my words.

I take it from your comments that you do know how one perception evokes the
memory of another. So why not just tell us? It would be nice to know.

Best,

Bill P.

[From Bill Powers (2004.12.23.1950 MST)]

Bruce Gregory (2004.1223.2000)--

Let me make sure I understand this process. Not knowing what is in the
refrigerator, I open the door. I see an orange. This perception somehow
evokes a perception at the program level: IF you see an orange in the
refrigerator AND you are hungry, pick up the orange, peel the orange,
and eat the orange. IF you are not hungry, close the refrigerator door.

These certainly seem to be straightforward deductions from the basic
theory. I think I'm finally getting the hang of this. It's much simpler
than I thought.

It's not THAT simple. And you know it isn't. What are you playing at?

Bill P.

[From Bruce Gregory (2004.1223.2217)]

Bill Powers (2004.12.23.1935 MST)

I take it from your comments that you do know how one perception evokes the
memory of another. So why not just tell us? It would be nice to know.

If it were really important to you, Bill, you would have worked on the
problem during the past 30 years. You didn't, so I conclude it isn't.
PCT works perfectly well without a model of memory, so why muddy the
waters?

The enemy of truth is not error. The enemy of truth is certainty.

[From Bruce Gregory (2004.1223.2218)]

Bill Powers (2004.12.23.1950 MST)

It's not THAT simple. And you know it isn't. What are you playing at?

Don't be coy, Bill. Where did I go wrong? Maybe Rick will explain the
error of my ways. How about it, Rick?

The enemy of truth is not error. The enemy of truth is certainty.

[From Marc Abrams (2004.12.23.1858)]

[From Bill Powers (2004.12.23.0820 MST)]

Bruce Gregory (20041223.0756)--

PCT seems to me to be a theory of robotics pressed into service as a
theory of human experience.

Funny way to put it. “Pressed into service” sounds like a makeshift,
desperation move, whereas “pressing” control theory “into service” in the
early days of cybernetics was probably the greatest step forward in
understanding living systems ever made. I thought you agreed with that at
one point. What changed your mind?

Wasn't the entire cybernetics movement about how control, or 'negative feedback', contributed to our understanding of living systems? As I recall from George Richardson's book, Feedback Thought in the Social Sciences, you took a divergent path from cybernetics by introducing a number of factors that only the engineering 'servo-mechanism' crowd adhered to, and you had a few objections to some of the cybernetic notions as well.

At one point I agreed with most of what you said and thought. I have since revised my opinion based on much research I have done. I know that you think it a virtue to remain in the same place for as long as you can stand the heat, but I prefer to look elsewhere when I find I can’t quite solve a problem a certain way. Each to his own.

This is not to take away from your contribution, but like everything else in this world, things evolve, and so will your theory eventually.

Not only is consciousness not explained in PCT, it is a real annoyance.

Speak for yourself; I'm rather glad it [consciousness] exists, though I try not to say
annoying things about it. There wouldn’t be much experience without it.
However, the functioning of brain and body which constitute the vehicle
employed by consciousness is explainable without it, just as we can
understand (but not design) a car without understanding the driver.

Sorry pal, not fully. Much of what YOU are interested in can be explained without consciousness. NOT what I, Bruce, and most psychologists are interested in, though. Graham Brown in 1911 observed that an animal could still walk with a normal gait, that is, with organized movement, even though it was deafferented. Sensory input was required ONLY to modulate for the variable environment it encountered.

You have a mistaken understanding of how our nervous systems and brain work. I will not attempt to bore you with any details, but our consciousness evolved (or was created, take your pick) from a need to be able to navigate in a variable environment. Our brain can be divided into a 'new' brain and an 'old' brain. You should have listened a bit closer to Glasser on this point. The old brain, the one you happen to focus in on, is responsible for all of our internal needs. But it does not work alone. It works in conjunction with our 'new' brain -- the cortex, which is, relatively speaking, only a few million years old.

So most, but not all of our control systems reside in the ‘old’ brain.

Consciousness provides a control system with a number of important and vital tools. Control systems do not 'think' or 'care' about anything, but if I were to design a control system and wanted to make it as efficient as I could, I would add a few things to 'help' the control system along. Some 'supporting' mechanisms, if you will.

I believe consciousness is one of those tools. I believe emotions are another.

Consciousness provides a controlling organism with the ability to plan. That is, to try and PREDICT what might happen a minute from now, an hour from now, and 6 years from now. Bruce Gregory has tried to make this point several times without much luck. This is an attempt to reduce the variability of the input. Since our control systems do not 'like' error, one way of minimizing it is to reduce the variability of input.

The second way that consciousness helps the control process is by providing a large number of degrees of freedom in resolving any error. This amounts to giving a control system enough 'freedom' to be able to choose a viable way of correcting error. That is, enough 'learned' 'behaviors' to deal with most situations. Of course 'behavior' here includes thoughts and imagination as well.

So all living animals have some consciousness. Humans, with our advanced neocortex, have a bit more than others.

Without these capabilities, MOBILE life would be impossible. So consciousness, although not necessary to maintain life (people have lived for years in a coma), certainly makes for a different life experience.

So talking about the brain without talking about consciousness leaves out, depending on who is doing the talking, anywhere from 50 to 75% of what most BEHAVIORAL people are interested in, and what they are interested in is WHY we do what we do, NOT how this all occurs at the molecular level.

What you have done is simply replace the environment (behaviorism) with a control system. The question as to WHY either one would actually cause behavior is still a mystery to you. I think I have a potential answer.

At least one I can try to explore. Trying to understand human behavior at the psychophysical level with a PCT model would be like trying to build a log cabin with toothpicks. It's probably theoretically possible, but I'm not really interested in the attempt. I need to use something a bit larger for my project. Something at a higher level of abstraction than what PCT currently affords. Is this a knock on PCT? I don't see it that way.

Do microbiologists dis ecologists because of the different scales of interest?

The hierarchy works perfectly well without it (which is why I call it an
epiphenomenon – it is a real phenomenon, but it does no heavy lifting in
the model – in fact, it does no lifting at all). The same can be said
for memory and emotions. A good robot does perfectly well without either,
thank you.

The hierarchy doesn’t work at all in the real world of human control. As a model it works just fine. Humans are NOT models, and they are NOT computers and electronic circuits. The sooner you realize this, the sooner you might be able to contribute something more than just rhetoric.

If consciousness directs reorganization where it is needed, as I have
proposed, then the hierarchy would not work nearly as well without it, if
it could get itself organized at all. As to “heavy lifting,” are you
proposing that consciousness (rather than neural control systems) is
responsible for perception, comparison, action, memory, and emotion?

What is a ‘neural control system’? What do you think ‘consciousness’ is?

I’m sorry Bill, you’re way out there. You have no real concept of how our nervous systems and brain work.

The latter two, memory and emotion, would probably make a robot function
much better if we put them into it rather than leaving them out. Look what
including memory did for cruise controls, which wouldn’t work at all
without it. Emotions have to do with self- and species-preservation and
avoidance of injury, so I presume a robot with them would do better than a
robot without them.

‘Emotions’ have to do with a great deal more than that. Emotions and feelings initiate all controlled processes, not perceptions. Perceptions can and do initiate feelings and emotions.

A little something your model does not consider.

I’m sorry for the confusion that I caused by not realizing the
difference between the problems we were trying to solve. I will do my
best to see that that does not happen in the future.

I hadn’t noticed that you are trying to solve (as opposed to cause) a
problem.

As the great Will Rogers once said: “When you find yourself in a hole, the first thing you should consider doing is to stop digging”.

I think Bruce has tried to take this approach. I know I have on CSGnet. It has not worked. You have your ideas and that seems to be that.

I’m not certain when or why you closed off all real contact with the outside world, and by this I mean the day you stopped listening to others and started digging in and trying to sell a line of goods, but it hasn’t worked and it won’t work.

What is ironic about all this is that the reason it won’t work is something you should know very well. Control systems NEED to have large degrees of freedom in order to resolve error. Translated: when I try to solve a problem using your hierarchy and I can’t, I try other solutions that I think might work better. It’s only natural, and when and if I find something more comfortable and to my liking, I will accept or reject the others based on their capability to reduce the error I perceive. As they say, “Use whatever works best for you”.

At least you haven’t said what it is. Does Hebbian learning, which
has to do with the “strength” of active synapses in “cell assemblies” being
affected by other neural signals, have something to do with consciousness?

Who cares? Hebbian learning is NOT about ‘cell assemblies’. It’s not about how something is done. It is about why something is done.

What problem does that model solve that PCT doesn’t solve?

Hebbian learning adjusts the network’s weights such that its output reflects its familiarity with an input. The more probable an input, the larger the output will become (on average). Unfortunately, plain Hebbian learning continually strengthens its weights without bound (unless the input data is properly normalized). There are only a few applications for plain Hebbian learning; however, almost every unsupervised and competitive learning procedure can be considered Hebbian in nature.
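For concreteness, here is a small sketch of that unbounded-growth problem: a plain Hebbian update next to Oja’s rule, a well-known self-normalizing variant. The learning rate, data, and dimensions below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x_data = rng.normal(size=(500, 4))  # toy input patterns
eta = 0.01                          # learning rate (arbitrary choice)

w_hebb = np.full(4, 0.1)
w_oja = np.full(4, 0.1)
for x in x_data:
    y = w_hebb @ x
    w_hebb += eta * y * x               # plain Hebb: strengthens without bound
    y = w_oja @ x
    w_oja += eta * y * (x - y * w_oja)  # Oja's rule: decay term keeps |w| near 1

# |w_hebb| keeps growing with more data; |w_oja| settles near unit length
```

The decay term in Oja’s rule is one standard way of supplying the normalization that the paragraph above says plain Hebbian learning lacks.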

I believe that Bruce is addressing the ‘predictive’ nature of learning.

As I said in my previous post about learning, prediction is IMPERATIVE to our survival. Not pinpoint prediction, but ‘in-the-ballpark’ prediction.

What do you think the main thrust of Science is? One aspect is ‘descriptive’. The more alluring and more ‘prestigious’ work goes to those who can come up with PREDICTIVE models. Control tells us why this is so, IF you have the capacity to look at a level of control higher than that of a ‘neural circuit’.

And a final note: Bill, you unfortunately view people like me and Bruce as your ‘enemies’. I say unfortunately because we are some of your truest supporters. Your real enemies abandoned you a long time ago. Your enemies are the ones who refuse to tell you the truth as they see it and won’t bother wasting their time trying to help.

It is unfortunate because you really believe ‘friends’ don’t ‘hurt’ other friends. But how do you tell a friend he has bad breath without offending him?

It is also unfortunate because I think I have a great deal to contribute and I’m going to do it, with or without your support. I would much prefer to be working with you than not.

I have read everything you have published, and most things more than once. If I did not feel in my heart that I was doing what you envisioned and hoped for 40 years ago, then I would have, like so many others, abandoned any hope and moved on. You can say this is my final attempt to get your interest and ear.

Bill, I don’t know if any of my ideas are any better than yours. But your ideas are NOT any better than mine, and that is the point. My focus on control is different from yours.

I hope you will think about this.

Marc

Bruce,

I was going to send this to CSGnet but decided to pass it by you first. I don’t want to involve you and as I said originally, you and I would agree when to go ‘public’ with this.

Marc

[From Marc Abrams (2004.12.23.1858)]

[From Bill Powers (2004.12.23.0820 MST)]

Bruce Gregory (20041223.0756)–

PCT seems to me to be a theory of robotics pressed into service as a
theory of human experience.

Funny way to put it. “Pressed into service” sounds like a makeshift,
desperation move, whereas “pressing” control theory “into service” in the
early days of cybernetics was probably the greatest step forward in
understanding living systems ever made. I thought you agreed with that at
one point. What changed your mind?

Wasn’t the entire cybernetics movement about how control, or ‘negative feedback’, contributed to our understanding of living systems? As I recall from George Richardson’s book, Feedback Thought in Social Science, you took a divergent path from cybernetics by introducing a number of factors that only the engineering ‘servo-mechanism’ crowd adhered to, and you had a few objections to some of the cybernetic notions as well.

At one point I agreed with most of what you said and thought. I have since revised my opinion based on much research I have done. I know that you think it a virtue to remain in the same place for as long as you can stand the heat, but I prefer to look elsewhere when I find I can’t quite solve a problem a certain way. Each to his own.

This is not to take away from your contribution, but, like everything else in this world, things evolve, and so will your theory.

Not only is consciousness not explained in PCT, it is a real annoyance.

Speak for yourself; I’m rather glad it [consciousness] exists, though I try not to say
annoying things about it. There wouldn’t be much experience without it.
However, the functioning of brain and body which constitute the vehicle
employed by consciousness is explainable without it, just as we can
understand (but not design) a car without understanding the driver.

Sorry pal, not fully. Much of what YOU are interested in can be explained without consciousness, but NOT what I, Bruce, and most psychologists are interested in. Graham Brown observed in 1911 that an animal could still walk with a normal gait, that is, with organized movement, even after it was deafferented. Sensory input was required ONLY to modulate for the variable environment.

You have a mistaken understanding of how our nervous systems and brain work. I will not attempt to bore you with the details, but our consciousness evolved (or was created, take your pick) from a need to be able to navigate in a variable environment. Our brain can be divided into a ‘new’ brain and an ‘old’ brain. You should have listened a bit closer to Glasser on this point. The old brain, the one you happen to focus in on, is responsible for all of our internal needs. But it does not work alone; it works in conjunction with our ‘new’ brain. The cortex is, relatively speaking, only a few million years old.

So most, but not all of our control systems reside in the ‘old’ brain.

Consciousness provides a control system with a number of important and vital tools. Control systems do not ‘think’ or ‘care’ about anything, but if I were to design a control system and wanted to make it as efficient as I could, I would add a few things to ‘help’ the control system out.

I believe ‘consciousness’ is one of those tools. I believe emotions are another.

Marc

[From Bill Powers (2004.12.24.0739 MST)]

Bruce Gregory (2004.1223.2217)--

If it were really important to you, Bill, you would have worked on the
problem during the past 30 years. You didn't, so I conclude it isn't.
PCT works perfectly well without a model of memory, so why muddy the
waters?

I guess that means either you know and are not going to tell us how a
perception can evoke a memory of another perception, or you don't know and
don't want anyone to find out.

Best (well, second best)

Bill P.