Subject: Re: Scale Invariance and HPCT

[From Erling Jorgensen (2003.12.06.2100 EST) sent later]

Bill Powers (2003.12.04.1520 MST)

I find it extremely tempting to think of finding simple general principles
from which we could generate the entire hierarchy. Or maybe a principle of
reorganization like what the neural net people dream of, which would
spontaneously create the whole hierarchy, or _a_ whole hierarchy, of
control.

Under the proposal of HPCT, there are eleven (or so) distinct _types_ of
perception in a human hierarchy of control. Anecdotal evidence suggests
these may be held in common (i.e. universal?) among humans, although of
course the actual enactment of specific instances at each level will vary
by person and, I think, by cultural environment.

If the eleven types of human perception hold up to testing, I think that
means we would need _eleven general principles_, that are somehow hard-wired
into humans by a sufficiently common evolutionary history.

I think you already have the two general principles for the first two levels
of the hierarchy. An Intensity is a constructed perception corresponding
to "how much" of a given form of environmental energy, which has been
transduced or transformed (I'm not sure of the right term here) basically into
frequency of neural firing.

A Sensation is a constructed perception of combinations of intensities,
using the basic principle of "weighted sums."
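
To make those two principles concrete, here is a minimal sketch in Python,
with an assumed transduction gain and illustrative weights (nothing here
is taken from the actual physiology):

import numpy as np

def intensity(energy, gain=10.0):
    """Transduce 'how much' of an environmental energy into a firing
    frequency; the linear gain is an illustrative assumption."""
    return gain * max(energy, 0.0)

def sensation(intensities, weights):
    """Combine intensity signals by the 'weighted sums' principle."""
    return float(np.dot(weights, intensities))

# Example: three energy inputs combined into one sensation signal.
i = [intensity(e) for e in (0.2, 0.5, 0.1)]
s = sensation(i, weights=[0.6, 0.3, 0.1])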

The "wish list" I have for HPCT is to set up a rudimentary form of the
general principle for a Configuration, say, or for a Transition, embed that
principle somehow in a simplified & simulated neural architecture, and
then set a PCT reorganization program running & see what it would come up
with, in terms of modified perceptual input functions.

On the same wish list, obviously, is how to simulate an environment that
has variables capable of being constructed into perceptual configurations,
etc. If I follow the modeling efforts that I have seen, it is very difficult
to model realistic properties of an analog environment on a digital computer.
I think there may also be a danger of begging the question, by putting the
construction out in the simulated environment itself, and not among the
computational properties of the simulated neural architecture.

[Aside, re: modeling the environment. This is from memory, but I think
you offered a good example of the kind of thing that's needed, Bill, in your
working control model of an inverted pendulum. A computer program does not
have "gravity," and yet that is what was needed in order to test whether a
series of control systems linked hierarchically could balance a couple of
inverted pendulums hooked together (I think that is how it was set up).
If I remember correctly, you had to endow the upper point of each pendulum
with a formula that would accelerate it toward the baseline, if it deviated
from a perfectly vertical position. My physics background is sketchy, but
I presume Newton influenced the formula for your simulated gravity, so that
the dynamics of the model itself would work in a realistic manner. The key
point is that all of that had to be imported into the simulation, even if
it had to be put there in a somewhat artificial way. Gravity was an essential
property of the environment for the task at hand, which had to be represented
in the simulation in a comparable (though not necessarily identical) way.
In the context of this particular post & my wish list, I don't know what
"formulas" would allow configurations to be in (or constructed out of) the
simulated environment of the kinds of models dancing in my wee little head.
(And despite the season, Bill, I really don't expect you to be Santa here,
notwithstanding the other modeling gifts you and your elves have produced!)
End of aside.]

I do have a couple of guesses, however, about the "general principles"
embodied in those two low levels of the hierarchy, (i.e., Configurations
and Transitions). It seems to me that the essence of a Transition is to
construct and then map onto one another, so that they are perceived in
concert, "a variance together with an invariance". That combination would
be the new invariance, so to speak, perceived at the Transition level.

Change can only be perceived in its own right with reference to something
that is not changing, (or at least not changing in the same way). I think
of things like differences of optical flow in figure-ground distinctions.
(Maybe this is due to the recent posts about J.J. Gibson -- I do think he
came up with some good ideas about possible controlled variables, as we
would call them, despite his placing everything of importance within the
environment.)

It seems the cleanest form of this variance+invariance perception is
"something different in the same location". And I believe the 'unchanging
location' portion could be enacted either temporally or spatially. To have
something move _relative to its background_ is to perceive a spatial
transition. To have something appear _where it wasn't before_ is to
perceive a temporal transition.
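
A hedged sketch of that "variance together with an invariance" idea, as I
picture it: a temporal-transition signal fires where a local value has
changed while the surrounding field has stayed roughly the same. (The
array representation and thresholds are illustrative assumptions only.)

import numpy as np

def temporal_transition(prev, curr, change_thresh=0.1, stable_frac=0.9):
    """prev, curr: 2-D arrays of lower-level perceptual signals.
    Returns a map of cells that changed (the variance) embedded in a
    field that mostly did not (the invariance)."""
    diff = np.abs(curr - prev)
    changed = diff > change_thresh                        # the variance
    mostly_stable = (diff <= change_thresh).mean() >= stable_frac
    return changed & mostly_stable                        # both together

# "Something different in the same location":
a = np.zeros((5, 5)); b = a.copy(); b[2, 2] = 1.0
print(temporal_transition(a, b))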

In moving to Configurations, we're talking (at least in part) about the
"something" of the previous two sentences. The best suggestions I have
seen, as far as how to enact a general principle for constructing a
configuration, come from Warren McCulloch -- I think it
was in _Models of the Mind_. While he didn't call it this, I think the
general principle for Configurations is "feature co-occurrence".

McCulloch postulated that we extract (I would say, construct) features
from our environment, and those signals register somewhere in the cortex.
He then suggested that each feature might randomly project numerous copies
of itself into adjoining cortical tissue. Features that co-occur stably in
the environment would also co-occur in this adjoining tissue, _regardless_
of the locale of that tissue that might be sampled. He had a diagram
showing each of a dozen symbols recurring at random spots across the page.
But wherever he might draw a circle to show a cluster of them, enough
different symbols would be encompassed within the circle to show that they
were occurring together. It struck me that this was a way to construct a
new invariant, consisting of a cluster of features. The symbols did not
have to be in the same topographical relationship to each other in each
cluster -- in other words, they did not have to _look_ like a configuration.
It was enough if they co-occurred wherever the arbitrary cluster
was drawn.

In terms of enabling such aspects to emerge in the neural architecture
model of my wish list, I believe it would simply be necessary for each
weighted-sum neuron (representing the sensation-perceptions) to _have to_
project X number of copies of itself in a random fashion to some common
neural field where configurations might be constructed. Given the research
into neural plasticity of the brain, that seems like a minimal requirement
to impose on a simulated neural architecture.
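
For what it's worth, here is a minimal sketch of that projection scheme as
I have described it -- field size, copy count, and cluster width are all
illustrative assumptions, and the "features" are just labels standing in
for sensation signals:

import random

def project(features, field_size=1000, copies=50, seed=1):
    """Scatter `copies` tokens of each active feature at random sites
    in a common field (McCulloch's random projections, as I recall them)."""
    rng = random.Random(seed)
    field = [set() for _ in range(field_size)]
    for f in features:
        for _ in range(copies):
            field[rng.randrange(field_size)].add(f)
    return field

def sample_cluster(field, start, width):
    """Collect whatever features co-occur within one arbitrary cluster."""
    found = set()
    for site in field[start:start + width]:
        found |= site
    return found

field = project({"edge", "corner", "red"})
# Almost any sufficiently wide cluster recovers the same co-occurring set:
print(sample_cluster(field, 200, 100))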

I'm having a hard time wrapping my brain around the other things needed
for my wish-list model. To see whether it would construct such a thing as
a configuration, it seems there would need to be some kind of sustained
error that could not be resolved merely by trying to control the
sensation-perceptions. So there would have to be some kind of rudimentary
intrinsic variable(s), left in a state of error with mere sensation control.
That would be the engine for reorganization, which would have to be able to
change the parameters for what was controlled, including constructing new
perceptual input functions (specifically, in this untapped cortical field
receiving projections of the sensation-perceptions).
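
As a stand-in for that engine, here is one simple random-perturbation
sketch (accept a change when intrinsic error shrinks) -- a crude cousin of
the E. coli reorganization scheme, with a made-up error function and step
size, not the real model:

import random

def reorganize(weights, intrinsic_error, step=0.1, trials=1000, seed=0):
    """Randomly perturb input-function weights while intrinsic error
    persists; keep only perturbations that reduce the error."""
    rng = random.Random(seed)
    error = intrinsic_error(weights)
    for _ in range(trials):
        candidate = [w + rng.gauss(0.0, step) for w in weights]
        candidate_error = intrinsic_error(candidate)
        if candidate_error < error:
            weights, error = candidate, candidate_error
    return weights, error

# Illustration only: "intrinsic error" as distance from some viable state.
target = [0.6, 0.3, 0.1]
err = lambda w: sum((a - b) ** 2 for a, b in zip(w, target))
print(reorganize([0.0, 0.0, 0.0], err))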

I have had some sense for how difficult modeling can be, but my appreciation
is certainly growing as I try to inventory the various component parts. So
far, my wish list hasn't even completed the full circuit of the control loop!
Simulating the output side of the loop runs into the need for muscles or
actuators or something that could affect the Intensity, et al. perceptions
that are being controlled and/or constructed. And it raises again the problem
of simulating realistic properties within the environment, in terms of the
environmental feedback function, so that behavior from the control loops
has a chance of succeeding. I am not sure which parts of this ambitious
wish list could be left for future development, and which are essential just
to get a working set of loops going -- which could then self-organize, and
reorganize, etc., etc.

The key idea I started with at the top of this lengthy post was that likely
each level in the proposed HPCT hierarchy would have its own general principle
for how those types of perceptions would be constructed. (It goes without
saying that the developing hierarchy doesn't actually perceive and use a
"principle", per se. That is simply our reconstruction and distilled
description of the processes that may be involved.) But I do believe the
notion of general organizing principles is a fruitful one, if only because
there appears to be a great deal of commonality in the kinds of perceptions
that humans can create.

Having said that, however, it is an open question whether _other_ species
construct their perceptions in similar or qualitatively different ways from
humans. They obviously have developed to fill other evolutionary niches,
most likely leading to at least some distinct ways of perceiving their
relevant environments. For that reason, I do not believe there is a single
"principle of reorganization like what the neural net people dream of, which
would spontaneously create the whole hierarchy".

The only principle anywhere near that kind of status is "negative feedback
control" itself. And that, as I think Martin recently point out, is what
is constitutive of life itself.

All the best,

Erling

P.S. Not sure when I'll actually get to send this. My CSG subscription
is with my work e-mail address. We're socked in with quite a snowstorm
at present. They're anticipating 18 to 24 inches!

Post-P.S. The snowfall turned into 34 inches!

[From Rick Marken (2003.12.08.1450)]

Erling Jorgensen (2003.12.06.2100 EST) --

A very thoughtful and thought-provoking essay, Erling!

Nice work.

Best

Rick


--
Richard S. Marken, Ph.D.
Senior Behavioral Scientist
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bill Powers (2003.12.08.1838 MST)]

Erling Jorgensen (2003.12.06.2100 EST) --

If the eleven types of human perception hold up to testing, I think that
means we would need _eleven general principles_, that are somehow
hard-wired into humans by a sufficiently common evolutionary history.

True. The reasons for such hard-wiring (supposing it holds up) are not
self-evident, however. Is it because the real world is actually organized
in such ways? Or is it because this way of structuring reality is just
good enough to allow survival and communication within a
species?

The "wish list" I have for HPCT is to set up a rudimentary form of the
general principle for a Configuration, say, or for a Transition, embed
that principle somehow in a simplified & simulated neural architecture,
and then set a PCT reorganization program running & see what it would
come up with, in terms of modified perceptual input functions.

I think your wish list outruns science at Configurations. I have a
feeling that someone, some expert in analytical geometry or conformal
mapping or some such thing (if I knew I would have done it), has the
tools necessary to figure out how configuration perception works; I
don't. Some useful steps may have been taken by researchers in visual
pattern recognition and speech recognition, but the crudity of the
results from these fields indicates that we can’t call the problem solved
yet.

[Aside, re: modeling the environment. This is from memory, but I think
you offered a good example of the kind of thing that's needed, Bill, in
your working control model of an inverted pendulum. A computer program
does not have "gravity," and yet that is what was needed in order to test
whether a series of control systems linked hierarchically could balance a
couple of inverted pendulums hooked together (I think that is how it was
set up). If I remember correctly, you had to endow the upper point of each
pendulum with a formula that would accelerate it toward the baseline, if
it deviated from a perfectly vertical position. My physics background is
sketchy, but I presume Newton influenced the formula for your simulated
gravity, so that the dynamics of the model itself would work in a
realistic manner. The key point is that all of that had to be imported
into the simulation, even if it had to be put there in a somewhat
artificial way. Gravity was an essential property of the environment for
the task at hand, which had to be represented in the simulation in a
comparable (though not necessarily identical) way.

Yes, there is a lot that can be done in modeling the environment. Gravity
is just one feature; in the inverted pendulum, Newton’s laws of
acceleration are used, as well as dynamical relationships of other kinds.
That’s all physics, and physical models are very far advanced compared
with other kinds. There are even a couple of new twists on analyzing
dynamical systems in that model.
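
For readers who want the flavor of "simulated gravity," a minimal sketch:
a single inverted pendulum (point mass on a massless rod) with Euler
integration. The constants are illustrative, and this is far simpler than
the model Bill describes:

import math

g, L, dt = 9.8, 1.0, 0.001        # gravity, rod length, time step
theta, omega = 0.01, 0.0          # small deviation from vertical (radians)

for _ in range(2000):
    alpha = (g / L) * math.sin(theta)  # gravity accelerates it away from vertical
    omega += alpha * dt
    theta += omega * dt
# Without control the deviation grows: the upright position is unstable.
print(theta)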

I do have a couple of guesses, however, about the "general principles"
embodied in those two low levels of the hierarchy, (i.e., Configurations
and Transitions). It seems to me that the essence of a Transition is to
construct and then map onto one another, so that they are perceived in
concert, "a variance together with an invariance". That combination would
be the new invariance, so to speak, perceived at the Transition level.

I think that is probably a deeper insight than either of us can yet
realize. We identify the variables of experience in terms of invariances:
impressions that remain unchanged when certain transformations are
applied. The shape of a cube, for example, is invariant with respect to
rotation, translation, and change of scale. It is not invariant with
respect to shearing or twisting transformations, or other transformations
that do not leave distances between points the same.

But at the same time, the transformations that leave a cube still a cube
represent changes in variable attributes of the cube. For example, the
shape does not change with changes in position -- but obviously the
position of the cube, one of its attributes, can change. Same for
rotation: rotating the cube leaves it still a cube, but its angles
relative to the coordinate axes change; its property of orientation
changes.

Invariances are always defined with respect to some set of
transformations, and it's easy to forget that those transformations
alter attributes of the invariant thing -- the attributes that do
not define its identity. Because a cube has a shape that is invariant
with respect to orientation and position, we can control its
orientation and position any way we like without changing the cube into
something else. The invariance defines the identity of the variable; the
transformations that do not alter its identity define the state of the
variable -- that which can change about the invariant.

I almost understood this idea long ago. Here is a quotation from the
first paper I published with Clark and McFarland in 1960:

========================================================================
A variable is always a combination of two classes of percept. One class
contains percepts that do not vary; by these percepts we keep
track of the identity of the variable. The other class contains
percepts which do change; these percepts carry the information about the
"magnitude" of the variable. "Magnitude" is used here
[to indicate]... the general class of variable attributes. (Living
Control Systems, p. 4).

========================================================================

That attempt to derive a fundamental principle more or less fizzled out.
I don’t remember what “percept” was supposed to mean. But I’m
still convinced that somewhere in this collection of statements is a
principle that somebody ought to be able to apply to perceptual input
functions. Now you bring it up again, and it still sounds pregnant with
meaning, the more so because it arose independently in another brain. Now
all we need is to be told who thought of all this first, and what this
field of mathematics is called…
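
One plausible answer, offered here as a conjecture rather than a citation:
this sounds like the group-theoretic view of geometry in Felix Klein's
Erlangen program, which characterizes a geometry by the properties left
invariant under a group of transformations. In that spirit the cube
example might be sketched as follows (the notation is an assumption, not
Bill's or the 1960 paper's):

\[
  \mathrm{shape}(C) = \mathrm{shape}(T(C))
  \quad \text{for every } T(x) = sRx + t,\;\;
  s > 0,\; R \in SO(3),\; t \in \mathbb{R}^3,
\]

with the parameters (s, R, t) -- scale, orientation, position -- being
exactly the attributes free to vary without changing the cube's identity.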

The key idea I started with at the top of this lengthy post was that
likely each level in the proposed HPCT hierarchy would have its own
general principle for how those types of perceptions would be
constructed.

Yes, and that is exactly the idea of the hierarchy that I intended. Each
level consists of a particular way of computing perceptions out of the
perceptual signals below. We inherit the organizations (at each level)
capable of carrying out such computations (what you refer to as a
principle), but not particular examples of perceptions of the relevant
kind. The particular examples (most of them, that is) we have to learn
from experience with the world we happen to be born into: the world of a
Chinese fisherman or a Liverpool rock star.

Having said that, however, it is an open question whether _other_ species
construct their perceptions in similar or qualitatively different ways
from humans.

It certainly is. We desperately need to get in touch with an alien
intelligence that arose from other roots.

Great ideas, Erling. I wish I could say SHAZAM and put them together into
a huge breakthrough.

Best,

Bill P.

[From Erling Jorgensen (2003.12.09.2045 EST)]

Bill Powers (2003.12.08.1838 MST)

Erling Jorgensen (2003.12.06.2100 EST)
we would need _eleven general principles_, that are somehow hard-wired
into humans by a sufficiently common evolutionary history.

True. The reasons for such hard-wiring (supposing it holds up) are not
self-evident, however. Is it because the real world is actually organized
in such ways? Or is it because this way of structuring reality is just good
enough to allow survival and communication within a species?

I suspect it is a bit of the former and a good deal of the latter. I'm a big fan
of "good enough". If it works, it works; who cares if it's Real? (And just how
would we know anyway?) In fact, I often use with my clients a variation of
your koan, namely, "Perception _is_ Reality". BTW, this is meant as a
starting point for exploring with them ("So tell me about _your_ reality..."),
not an excuse for shutting it down as if nothing more need be said ("It's
all perception, man.")

On the other hand... There is that unsettling chunk of data that, yes, a
negative feedback control system does indeed seem to "work". If everything
were handled solely through imagination, perceptions could be anything
(even the numbers of our "toy models" ;-> ). Ask for them, and they're there.
No need to bring them about. No need to make them work. But in PCT,
that is not the standard form of controlling. We close the control loop
_through_ the environment. The properties of some kind of actual
environment (whatever they may be) are free to put their two cents in.
And that becomes something of an epistemological paradox. It shouldn't
work so well as it does, without there being some kind of regularities in
that environment. But all we will know of that self-same environment is
what we construct and deem it to be.

Any "hard-wiring" that passes muster would need to be explained. That
would include the disquieting fact of similar capabilities being hard-wired
into more than one human (let alone a species). 'Cause that's all it takes
analytically. Once you have two, you've got a "pattern that connects," to
use Bateson's pregnant phrase. That's all the crevice that's needed, for
this troublesome question to insert itself and start to erode our notions
of species-specific adaptations. Just how do we think "similar capabilities"
got there? Human evolution is longer than just the hominids. "Hard-
wired" means cellular mechanisms among neural machinery! And that
goes back a very long ways.

I think it's helpful to remember that inserting layers of control of neural
perceptions was an evolutionary innovation serving control of intrinsic
variables -- things like access to the right nutrients for the newly-encased
environment of the cytoplasm. I guess I see the intrinsic control
systems and the perceptual control hierarchy existing in a symbiotic
relationship, (similar to Lynn Margulis' notions of symbiosis). As
presently constituted among multi-cellular organisms, neither set of
systems (i.e., intrinsic & perceptual) could exist without the other.
What began as chemical control expanded into neural control, as
multi-cellular arrangements exploited the ionization properties of
their aqueous environment.

And here, I think we come back to the "good enough" proposal. Was
construction of neural discharges, and neural communication between
cells, essential and inherent to those early intercellular environments?
No, it was _possible_; a good enough innovation that worked sufficiently
well, for cells to still get access to the compounds that were essential
for their functioning. And any innovation that doesn't get in the way of
controlling intrinsic variables tends to stick around - (assuming there
was enough need for reorganization to start casting around for
innovations in the first place).

Such were the first steps, as I envision them, leading to a perceptual
control hierarchy of neural variables. Once a cell can create and control
(through control and replenishment of ions) a neural potential, then it
becomes possible to create and control repeat performances of that
potential - which is a new type of perception called a frequency. Control
and replenish the right kinds of neurotransmitter chemicals, and you
get a post-synaptic potential, which can propagate along a chain
of cells. Connect that with cells that can contract in opponent-process
arrangements, and you get some kind of muscle actuators, which can
have new kinds of effects on and through the environment. Develop
cells specialized to perceive various types of energy in that environment,
and you start to get multi-cellular perceptual control loops. I obviously
don't know all the details of cellular development and phylogenesis to
be able to itemize all the steps, but I think these are the types of
processes involved in the "wiring." To make it "hard"-wired, I think we
would then have to discuss the control processes involved in the DNA;
(and who is up to that task, the Human Genome Project notwithstanding?)

What I'm trying to suggest with this somewhat speculative reconstruction
is how each layer of control might be built out of the type of control
previously existing. Your perceptual control hierarchy, Bill, proposes
classes of control among neurally constructed variables. But those
were built upon various forms of chemical control - which is just the
sort of interface we need, coincidentally, for moving between the
intrinsic variables of the intrinsic control systems and the rest of the
perceptual control hierarchy.

We really do need a marriage between control physiologists and PCT.

then set a PCT reorganization program running & see what it would come up
with, in terms of modified perceptual input functions.

I think your wish list outruns science at Configurations. I have a feeling
that someone, some expert in analytical geometry or conformal mapping or
some such thing (if I knew I would have done it), has the tools necessary
to figure out how configuration perception works; I don't.

I agree. There is major neural computation going on to arrive at Configurations.
And this forms something of a bottleneck in trying to construct realistic
layers to further test the "hierarchical" part of HPCT. But my wish is closer
to what Bruce G. was alluding to, in talking about "toy models." (Once I got
past colloquial connotations of "toy", I could see it was not a pejorative term,
at least as I construed Bruce's intent.) I _want_ a toy model. An awful lot
can be learned from them.

I was looking at a previous post I sent to the net, and realized that my
attachment truncated part of the final paragraphs I had intended to send.
One of them referred to how the early tracking task demos were implemented.
They had a simplicity that was actually a source of rigor in displaying
the essential facts of control. For instance, in a demo of a cursor tracking
a target on the screen, only one variable was being controlled by the
computer program. It was a Relationship-perception, consisting of the
difference in position between the target and the cursor. All the
implementing perceptions of arm movements & pressure on the mouse
& speed & direction of mouse movements, etc., were left to the person
running the demo to provide. The key variable of the target-cursor
Relationship, however, was ideally suited to enact on the digital computer,
by means of the difference of the pixel positions.
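
A toy version of that demo fits in a dozen lines. In this sketch the one
controlled variable is the target-cursor difference, with an assumed
sinusoidal target path and an illustrative gain (the real demos fit such
parameters to each subject's data):

import math

def track(steps=500, dt=0.1, gain=5.0):
    cursor = 0.0
    for n in range(steps):
        target = 100.0 * math.sin(n * dt / 2.0)  # assumed target motion
        p = target - cursor        # the Relationship-perception
        e = 0.0 - p                # reference: zero difference
        cursor -= gain * e * dt    # output acts to shrink the error
    return cursor, target

print(track())   # cursor ends close to the target position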

The fact that such demos were highly simplified versions of the actual
dynamics involved did not rob them of their power to predict the behavior
of the subject to a very high degree. Those were certainly "toy models,"
as physicists would use the term. And yet they showed very clearly
what even a simple model could accomplish. KISS -- (my version of
Occam's razor). This PCT stuff must really be something, if you can
leave all sorts of things out of the working model, and still predict an
essential aspect of the subject's behavior dead on!

So with Configurations, we don't need as our first approximation the
kinds of things that might come from "analytical geometry or conformal
mapping or some such thing." All we need is a demonstration of
principle. If we can construct _anything_ that enacts "feature
co-occurrence," then we can start to layer that with weighted-sum
sensations and degrees of intensity, to begin to imagine how a hierarchy
might be built up while still maintaining stable control at each level.

I think my underlying hope with getting a semi-rich but still simplified
hierarchical simulation going, is so that I can see in action the range
of parameters operating. Yes, we need _toy_ model components of
each aspect of HPCT, working together in the same simulation. And
by this I don't mean working out the computations of eleven layers of
perceptual input functions. I mean, perhaps, a three-layer hierarchy,
embodying correct if highly simplified general principles at each layer,
but then with a protocol simulating reorganization (such as you described
in your response to my MOL post), and some rudimentary memory
protocol as a source for reference signals, and connections such that
parameters could be adjusted by reorganization, and a pared-down
environment that elementary muscle actuators could actually affect,
etc. I realize my "wish list" is rapidly expanding here, and yet we may
have the beginnings of toy components of a lot of these things. I'd
like to see them interacting. Realism can come later.
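
To make the shape of that wish concrete, here is one possible skeleton of
a two-level cascade -- a placeholder for the three-layer version, with
every constant an illustrative assumption. The higher system's output
sets the lower system's reference, and only the lowest level acts on the
environment:

def cascade(steps=400, dt=0.05, g1=8.0, g2=2.0, r_top=10.0):
    env, o2 = 0.0, 0.0
    for _ in range(steps):
        p1 = env               # level-1 perception: the raw variable
        p2 = p1                # level-2 perception: a stand-in (it should
                               # be a genuinely different input function)
        o2 += g2 * (r_top - p2) * dt   # level-2 output integrates its error
        r1 = o2                        # ...and becomes level 1's reference
        env += g1 * (r1 - p1) * dt     # level-1 output acts on the environment
    return env

print(cascade())   # env settles near the top reference, 10.0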

I think I also harbor a secret hope that a reorganization protocol,
with capabilities to actually connect with parameter inputs, might
do some of the work for us. Why try to design what we clearly
do not yet know how to design? We have a very powerful tool in
the reorganization proposal itself. It functions as a single-minded
control system, perceiving and reducing toward zero a single
variable, that of degree of error. If we can just give it the right
field to work in, why not harness that power to see what kind of
perceptual input functions it might design? We might learn a
lot from the attempt, especially if we are not premature in
insisting on something "realistic". What are toys for anyway,
if not to play? (Then again, maybe I'm just getting carried away
with the spirit of the Season...)

It seems to me that the essence of a Transition is to
construct and then map onto one another, so that they are perceived in
concert, "a variance together with an invariance". That combination would
be the new invariance, so to speak, perceived at the Transition level.

I think that is probably a deeper insight than either of us can yet
realize. We identify the variables of experience in terms of invariances:
impressions that remain unchanged when certain transformations are applied.
.....

I am not sure how to respond to the rest of your post. I am not sure if you
are applying this to Transitions, or whether you are saying something
about each perceptual layer. It almost seems as if you are saying we
identify every perception by an implicit comparison of what it is and what
it is not - when we note "impressions that remain unchanged when certain
transformations are applied."

I agree that every level is the construction of a new type of invariance,
which can remain stable (through control actions) regardless of other
things changing. But I think I am saying that change itself is only
perceived through the layer we are calling Transitions. There is probably
something hard-wired about this, (there's that word again), in that
many sensory receptors seem to be tuned to note the sudden
_appearance_ of a form of energy. But at that level, I think it is the
presence of an intensity that is controlled, not its changeability. (I'm
not sure if I'm painting myself into a corner here.)

I do have some other floating ideas about whether Transitions
should be lower in the hierarchy. Guess I need to re-read your B:CP
rationale for how you felt Transitions were dependent on Configurations.
I don't remember questioning it before, so I am guessing your arguments
were persuasive to me in the past.

I think you're right - this is "a deeper insight than [at least one!] of
us can yet realize." It is getting late. Let me make just one more
pass at this...

You're aware, I'm sure, that I like turning toward Gregory Bateson,
and I'm wondering if maybe he can help get some clarity here.
I think I view every perceptual level, once it is constructed, as
"a difference that makes a difference." The first "difference" is the
perception part of the loop, while the reference input to the loop
is what "makes a difference". But I think this is not the same as
describing a Transition type of perception. There, I think we are
specifically contrasting differences that make a difference and
those that do not, out of our lower level perceptions. And it is
precisely that contrast that "makes a difference" at the Transition
level.

Once again, thanks for all your provocative ideas, in whatever
stages of formation. Take care.

Erling

[From Bill Powers (2003.12.10.0940 MST)]

Erling Jorgensen (2003.12.09.2045 EST)--

The properties of some kind of actual
environment (whatever they may be) are free to put their two cents in.
And that becomes something of an epistemological paradox. It shouldn't
work so well as it does, without there being some kind of regularities in
that environment. But all we will know of that self-same environment is
what we construct and deem it to be.

This is probably a hint about how reorganization works with an initially
unorganized, but evolved, brain. We have to learn both what perceptions to
control, and what actions to take that will control them. I'm pretty much
up against that problem in the multi-control model. At present there's no
structure in the environment -- no mutual dependencies in the environmental
variables. That may be why I can't get the output reorganization method to
find the stable equilibrium instead of the unstable one. I have to do a lot
more experimenting because it seems that if reorganization is applied to
the input instead, stability results. That is, I think it does -- I've
forgotten some of the things I did, and have to do them again now that I've
tumbled to that unstable equilibrium, with small errors but positive feedback.

I don't think we'll get past that last epistemological hurdle, though.
Sure, we learn to control specific perceptions, and yea, we have to learn
which actions will affect those perceptions in the negative feedback sense.
But does that leave us with the True Picture of Reality, or just Good
Enough? I do think that if this multicontrol approach gets anywhere, we'll
be closer to an answer, though.

You're full of great ideas, but I'm in learning mode right now and have a
relationship with -- uh, oh yeah -- Mary to preserve, and don't quite have
the bandwidth to do you justice. Keep going, though, even if all you get
from me is passive observation mode.

Bjorn said something interesting: that Smalltalk was organized like HPCT. I
have my doubts, but there might be something to that idea. Suppose we say
Classes are what I called Categories. Could all the methods and properties
and whatever else there is inside classes amount to lower-order
perceptions? Or is the category level too low? The reason I bring it up
here is that it may offer a way to build those toy models where we just say
that the appropriate perceptions appear somehow and then build the control
systems on that assumption. Are you going to join in? I suspect that this
Virtual Machine idea is going to make Smalltalk excruciatingly slow, but
maybe not.
We can hope not. The best thing about it is that the same programs will run
on a whole lot of platforms, including Macs and Linux. Just download the
right Virtual Machine for your computer.

I've been meaning to ask you this. We put on quite an MOL demo one year,
and I've always wondered if there were any repercussion from it for you.

Best,

Bill P.