Bickhard on HPCT

[from Gary Cziko 920625.1625]

Last weekend I was at Lehigh University in Bethlehem, PA to consult with
Donald T. Campbell about my book (which is very Campbellian).

While there, I was able to meet with Mark H. Bickhard, Luce Professor of
Psychology, Philosophy, Robotics, and Counseling. Bickhard has written
extensively on what he calls "interactivism" and has a view of cognition
and behavior which I find in many ways consistent with HPCT. But he finds
that HPCT is not adequate to deal with all important aspects of learning
and development.

I am sharing with CSGnet a transcript of that part of our conversation
dealing with HPCT. I think CSGnetters will find his comments of interest.
Bickhard seems to be one critic of HPCT who understands the basics of what
HPCT is all about.

While Bickhard will probably not join discussions on the net, I may send
him reactions that I feel he may be interested in. Perhaps this is one way
to coax him to share more of his ideas with us.--Gary


===============================================================
Mark Bickhard (MB) Interviewed by Gary Cziko (GC) 920621

MB: The knowing levels, I would want to claim, are levels of reflective
consciousness.

Now within any given knowing level, there could be other principles of
hierarchicalization--like, potentially, servomechanism hierarchies--but they
won't get you to a new knowing level. But the relationship between one
level of a servomechanism hierarchy and another level is not a relationship
of epistemic aboutness. It's not a reflectivity.

GC: How do you get that? Where do you get that reflectivity of that
knowledge?

MB: Well, the basic notion is that if knowing is interactive, then the sense
in which a system can know the world--by interacting with it, or at least
being competent to interact with it--would also be a sense in which some
second-layer system could know the first one by interacting with
it. And, in fact, there would be reasons why that would be adaptive
because there are new things to be known at that level and there will be
new things to be known at the second level but that will require a third
level, and so on. But in order for that to happen, you've got to have a
system that actually interacts with the next lower level as differentiated
from a system that simply calls upon it in a control-flow sense, like from
one servomechanism to another.
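
[Editor's note: a minimal sketch, in Python, of the "control-flow"
relationship MB contrasts with epistemic interaction: in a servomechanism
hierarchy the higher loop simply sets the lower loop's reference signal.
This is the editor's illustration, not Bickhard's or Powers's code, and all
names are illustrative.]

    # A generic proportional servo: output is driven by the error
    # between the reference signal and the perceptual signal.
    class Servo:
        def __init__(self, gain):
            self.gain = gain

        def step(self, reference, perception):
            return self.gain * (reference - perception)

    higher = Servo(gain=0.5)
    lower = Servo(gain=1.0)

    def hierarchy_step(top_reference, high_perception, low_perception):
        # The higher servo's output serves as the lower servo's
        # reference signal: the higher level calls upon the lower
        # one in a control-flow sense but never examines or
        # interacts with it as a system.
        low_reference = higher.step(top_reference, high_perception)
        return lower.step(low_reference, low_perception)

    output = hierarchy_step(10.0, 8.0, 0.5)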

GC: In Powers's model you have sitting on the side a reorganization system
and when there are errors this thing pushes a button that says "change in
some way." Is that not an interaction of the type that you're talking
about?

MB: Well, it's a very, very limited one. As a matter of fact, I have a
model of the macro-evolution of this second layer system and the first
limited version of it is in fact a learning system which is at the
meta-level with respect to the system it operates on. But it's very
computationally limited and also a very interactively limited system. It
can't do very much with the level below. I mean, what it does is terrific,
but it can't do a lot of it. The second one is an elaboration or
modification of a learning system plan and I argue that it constitutes an
emotional system and it's more powerful than just a learning system. And
the third one, I would argue, is a full reflective knowing system. And
each one of those is, so I argue anyway, an increase in adaptability over
the preceding step. Each one is a modification, not necessarily a trivial
modification, but a modification of the preceding step, and in that sense I
argue that they constitute a macro-evolutionary trajectory. And in fact
if you look at evolution, that is in fact the order in which they seem to
have evolved.

GC: So the first one is a limited type of meta-system?

MB: Well, the very first one is a knowing system that doesn't learn much or
can't learn much. (yeah) And then come systems with progressively more
powerful learning capacities, and at some point you get an emotion system
and at some point you start getting reflectivity.

GC: Since we've sort of drifted into the hierarchy and talking about
control to some extent, I'm really fascinated with Powers's model and I
guess there's a number of reasons for that. But it's this notion of
purpose which I find really intriguing and how these fairly simple
servo-mechanisms seem to have purposes and resist disturbances. James
talks about how organisms and people are able to consistently obtain
certain things by varying their behaviors, so his notion of controlling
perception I find intriguing as a basic model. You talked a little bit
last night about the problems you see with that and you mentioned the
problem of correspondences--there is no mapping that can be used in this
way that is going to be . . .

MB: I claimed that it faces ultimately a number of problems one of which is
the inverse of behaviorism. You can have very, very simple systems whose
competencies are an infinite class of possible behaviors. So there's no
way to characterize that system in terms of its behaviors as long as you're
restricting yourself to finite characterizations. The only finite
characterization that can be given of that system is its system
organization, not its behaviors. And exactly the same point holds for
input. You can have
very, very simple systems that can recognize, detect, differentiate (or
whatever kind of word you want) infinite classes of inputs, and so in terms
of input, there is no finite characterization of that system possible.
That's point one.
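
[Editor's note: a minimal sketch, in Python, of this point as the editor
reads it: a system with a finite organization (two states, one transition
rule) that differentiates an infinite class of inputs, so no finite list of
inputs or behaviors can characterize it, while its organization
characterizes it completely. The example is the editor's, not Bickhard's.]

    # A finitely specified recognizer: two states, one rule.
    def accepts(string):
        # Accepts exactly the even-length strings of 'a's -- an
        # infinite class of inputs, differentiated by a system
        # whose organization is finite.
        state = "even"
        for symbol in string:
            if symbol != "a":
                return False
            state = "odd" if state == "even" else "even"
        return state == "even"

    assert accepts("") and accepts("aaaa")
    assert not accepts("aaa") and not accepts("ab")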

GC: I have difficulty with that.

MB: It's the same point as behavior. I mean, you yourself said Skinner
wasn't good because he was selecting the wrong things. That's because he
was selecting behaviors. In fact, the things that are being selected for
have more properties than just behavior. They have infinite classes of
behavioral properties, and the same thing holds for inputs.

GC: If you get to the basic phenomenon of what would be called control,
which is being able to maintain your posture or your blood oxygen level or
whatever in spite of disturbances. A servo-mechanism does capture to some
extent what's going on there at the level of a simple variable. So you have
this phenomenon of being able to resist disturbances, and you do that by
having a reference level and a system where your response influences the
perception and of course the perception influences the behavior at the same
time. So there's a phenomenon, and this mechanism seems to capture some of
the basics of what is going on here. Okay, so there's no problem at that
level, yet . . .

MB: Even at that level I would argue that it's a better characterization to
say that the servo-mechanism is "attempting" to achieve an internal state.

GC: An internal state? All it's really controlling is matching the input
to the reference level and as you manipulate the reference level you can
make an arm go up and do all kinds of things . . .

MB: But the result of that match is going to be some functionally
efficacious internal state in the system and THAT'S the internal state it's
trying to achieve.

GC: Not the reference level?

MB: Not the reference level. The only sense in which the reference level
is the state it's trying to achieve--well, that's not even a state, it's a
level--the only sense in which it's trying to achieve it is that under most
normal working conditions, the matching of the reference level will achieve
the internal state that it's after.

GC: Again, if I want to bring it back to a simpler level. If I have a
cruise control on my car, I can manipulate the reference level and I can
cruise at 65 or 55 or 45. You say that the system is not controlling the
perceived speed coming back? You're saying it's doing more than that?

MB: Of course it is. If that's all it did and that match didn't yield a
further functional state, the servo-mechanism could not operate because
it's that further functional state that either turns the system on or turns
it off or switches to a different strategy or whatever it does. It's that
further functional state that has all of the functional efficacies. The
fact that there's a set point that under some conditions yields the further
state is an additional point. You can have such a system with such a
further state that doesn't involve a story of set points at all.

GC: That would resist disturbances?

MB: Sure. A TOTE (Test-Operate-Test-Exit) mechanism does not require a
comparator--that's just a simple fact. All you need is an internal
functional state that serves as a switch, and it either switches out of the
system or it switches back to Operate in a TOTE. And that switch does not
have to be an internal
comparison switch. It's a switch that could be based on anything
whatsoever.
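
[Editor's note: a minimal sketch, in Python, of MB's point as the editor
reads it: the Test in a TOTE (Test-Operate-Test-Exit) loop is just a
switch, and a set-point comparison is only one way to implement it. Both
tests below are the editor's illustrations.]

    # Generic TOTE loop: Test, Operate, Test, ..., Exit.
    def tote(test, operate, state):
        # 'test' is any predicate on the state; it need not
        # compare a perception against a set point.
        while not test(state):
            state = operate(state)
        return state

    # One possible Test: a set-point comparison (a servo).
    def servo_test(state):
        return abs(state["speed"] - state["reference"]) < 0.5

    # Another possible Test: a bare functional switch, with no
    # set point or comparator anywhere in the story.
    def switch_test(state):
        return state["satisfied"]

    def accelerate(state):
        return {**state, "speed": state["speed"] + 1.0}

    # With servo_test this behaves like cruise control; with
    # switch_test it would keep operating until whatever condition
    # flips the switch, set point or not.
    final = tote(servo_test, accelerate, {"speed": 40.0, "reference": 55.0})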

GC: But the fact that you do see systems which are controlling what appear
to be certain perceptions suggests that the switch is in fact operating.

MB: Some may be, right. And in some circumstances that will be adaptively
desirable for them to be that way. But that's a limited class of
circumstances.

GC: But in the phenomenon of control, you would recognize that organisms
are controlling . . .

MB: Well, organisms are attempting to control their own internal states.

GC: And how do they know what their internal states are?

MB: Well, they don't know what they are but the internal states have
functional efficacy, that's all.

GC: And what determines the functional efficacy?

MB: Just however it's organized. I mean, that has to do with the
functional organization of the system so it's not so much that something
determines it, it's rather that it's instantiated in some way or another in
the nervous system. Now something will determine it in the sense of
learning so then you need a model of how some part of a learning system can
modify the function of those other parts of the learning system or some
part of a learning system can modify its own function.

GC: So take the notion of a higher-level controlled variable, something
like success as a professor. From a control-theory perspective, I would
argue that you are going to perceive that, and it may be some function of
how you are dealing with your courses, how many students you have, how much
grant money is coming in, how many publications you're getting out. And if
you perceive yourself as not matching that, you will adjust your behavior
and vary it in some way in a hierarchical way to . . .

MB: At that sort of a level I would argue that thinking of that as a
variable is very, very seriously distorted. It has a much richer structure
than that. There are many different ways in which you can succeed as a
professor. There are many different ways you can fail as a professor.
Some are graded in the sense of being ordered or partially ordered; some
are not. Virtually none of them except salary has anything like a
number-line organization, and so on.

GC: But you can see that it's appealing if you could extend this notion of
hierarchy all the way up.

MB: Oh, I'm perfectly willing to do that. I would just argue that they
don't all involve set points on a real number line. I would also argue
that no version of that is ever going to get you to another knowing level.
I would still further argue that a servo-mechanism hierarchy is not
necessarily the most advantageous architecture for every possible task and
that the brain doesn't necessarily use it for all possible tasks. I think
there are reasons why it uses it for evolutionarily common tasks like
proprioceptive and kinesthetic control and so on, but I'm not at all
persuaded that the brain necessarily uses servo-mechanism hierarchy
architectures for higher level cognitive tasks.

GC: But are those tasks purposeful?

MB: Sure, but that doesn't mean that they're servo-mechanisms.

GC: So what is the other model then?

MB: Well, there's lots of those. For example, a lot of things in AI are
written in what is called a blackboard architecture and the idea there is
you have one big blackboard--sometimes you also have subordinate
blackboards which give you different principles of hierarchy--and then you
have a whole bunch of agents operating all at once and they're all showing
their results on the blackboard and checking the blackboard to see if there
are conditions on the blackboard that trigger their particular kind of
activity. There's no servo-mechanism hierarchy there. So that's another
sense in which, I would argue, the servo-mechanism architecture certainly
does exist for some things and certainly can be used for some things, but
it's not a general architecture. It's not a general principle.
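
[Editor's note: a minimal sketch, in Python, of the blackboard pattern MB
describes, as the editor understands it: agents watch a shared blackboard
for their trigger conditions and post results back onto it, with no
servo-mechanism hierarchy anywhere. The names and the trigger chain are
illustrative.]

    # Shared blackboard holding whatever has been posted so far.
    blackboard = {"raw_input"}

    def make_agent(trigger, result):
        def agent(bb):
            # Fire only if the trigger is on the blackboard and
            # the result has not been posted yet.
            if trigger in bb and result not in bb:
                bb.add(result)
                return True
            return False
        return agent

    agents = [
        make_agent("raw_input", "features"),
        make_agent("features", "hypothesis"),
        make_agent("hypothesis", "action"),
    ]

    # Let every agent look at the blackboard until nothing changes.
    # One agent's posting can serve, in effect, as a goal or trigger
    # for another -- a loop or heterarchy, not a hierarchy.
    changed = True
    while changed:
        changed = False
        for agent in agents:
            changed = agent(blackboard) or changed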

GC: But you would still consider that as being a purposeful system in some
way? I'm trying to relate these blackboards with some task or some
behavior or something like . . .

MB: But you can even construct a servo-mechanism out of the blackboard
architecture. So if a higher level servo-mechanism throws a goal down to
this blackboard and then all these little demons down here in parallel try
to achieve the goal and under some condition this one's behavior would be
more relevant than other conditions and these three over here will be
relevant and so on, that's easy enough. But you don't have to think of it
that way either. You can have the various demons in parallel throwing
things onto the blackboard that in effect serve as goals for other demons.

GC: But they would be, in that case, higher in the hierarchy.

MB: Not necessarily because that can be a pure loop. There doesn't have to
be any higher level. It can be a heterarchy, not a hierarchy.

  I think it's [a servomechanism hierarchy] an extremely powerful
perspective. It's just not powerful enough. It's not a sufficiently
general architecture. And when you try to apply it to things like "I want
to be successful as a professor," like I said, I just think it's highly
distorted and I think it does as much harm as good at those sorts of levels
because it's simply obscuring all of the structure there.

GC: It's trying to simplify it into a single variable somehow. And when I
think about what some of these PDP circuits can do . . .

MB: But if you've got hierarchies of defeasibility relationships or
hierarchies of critical principles, there's no way to construct a variable
out of that. You cannot collapse into a variable the relationships of
affirmation and infirmation and the defeasibility exceptions and all that
kind of stuff. You can't do that with a real number line.

[Written comment by Mark Bickhard added when reviewing transcript]

MB: . . . it [the transcript] does not include the point that an automaton
or Moore machine recognizer can serve the function of a functional test for
a TOTE organization without there being any set point: a final state
switches out to Exit, and any other terminal state switches to Operate.
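
[Editor's note: a minimal sketch, in Python, of this written point as the
editor reads it: the Test of a TOTE is implemented by running an automaton
recognizer over the outcome of an interaction. Reaching an accepting final
state switches out to Exit; any other terminal state switches back to
Operate. No set point or comparator appears anywhere. The transition table
is illustrative.]

    # Recognizer-based Test for a TOTE: no set point, no comparator.
    TRANSITIONS = {
        ("start", "grip"): "holding",
        ("holding", "lift"): "raised",
    }
    ACCEPTING = {"raised"}

    def recognizer_test(outcome):
        # Acceptance is a structural fact about the automaton, not
        # a comparison against any reference value.
        state = "start"
        for symbol in outcome:
            state = TRANSITIONS.get((state, symbol))
            if state is None:
                return False          # terminal, non-accepting: Operate again
        return state in ACCEPTING     # accepting final state: switch to Exit

    assert recognizer_test(["grip", "lift"])      # Exit
    assert not recognizer_test(["grip", "wave"])  # back to Operate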