[From Bill Powers (920901.1900)]
Penni Sibun (920901.1200) --
... maybe we can say that interactionists think that you can't
describe or explain the individual w/o also the interaction.
Depends on what you want to describe about the individual. I come down
more on the side toward the individual, because people live in a lot
of different environments but they don't change their characteristics
for every one. Of course a person can change his/her actions according
to the circumstances, but it's the person who has the goals, like
eating, not the environment. At least the nonsentient part of the
environment.
a primary interactionist field, ethnomethodology, came
about explicitly in reaction to sociology. rather than asking ``how
can we describe the[s]e here institutions,'' it asks, ``are there
institutions, and if so what, and if so how constituted.''
Sounds pretty much the same to me. In either case, the institutions
are reified. Not that I don't believe in them. I just think they exist
in people's heads, not in the environment.
``playing a role'' is a well-understood way of talking about these
things. the advantage of talking about roles instead of perceptions,
is that you don't have to ask perceived by whom.
Well-used, I'd say. Understanding how an object can play a role is
something else. I think it makes a difference who is perceiving the
object. If you objectify objects, you can't explain how the same
object can have different roles depending on who's using it for what
purpose.
a machine is deterministic if there is only one way
to get from one state to the next; it's nondeterministic otherwise.
That's what I was asking about. If it's nondeterministic in that
sense, then what selects which of the possible paths will be taken? If
there's some systematic selection method, then it's deterministic
again. It's nondeterministic "all the way down" only if the choice is
random. So the question is, when there are alternate paths, what
determines the path actually taken?
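To pin down what I mean, here is a toy sketch of my own (the states and rules are invented, not anyone's formalism). A deterministic machine's transition function names exactly one next state; a nondeterministic one only names a set of possibilities, and the moment you add a systematic rule for choosing from that set it is deterministic again. Only a random draw keeps it nondeterministic.

import random

# Toy state machine over invented states "A", "B", "C".

def det_step(state):
    # Deterministic: exactly one successor for each state.
    return {"A": "B", "B": "C", "C": "A"}[state]

def nondet_successors(state):
    # Nondeterministic: only a set of possible successors.
    return {"A": {"B", "C"}, "B": {"C"}, "C": {"A", "B"}}[state]

def step_with_rule(state):
    # Add a systematic selection rule (alphabetical, say) and the
    # machine is effectively deterministic again.
    return min(nondet_successors(state))

def step_at_random(state):
    # Only a random choice keeps it nondeterministic "all the way down."
    return random.choice(sorted(nondet_successors(state)))

print(det_step("A"), step_with_rule("A"), step_at_random("A"))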
no, a history is what happens between one point in time and another
point in time.
Do you mean that there are continuous processes taking place BETWEEN
nodes? I had thought that an operation simply jumped you from one node
to another.
no, it says that the goal of breaking an egg is satisfied if an egg
(any old egg) is broken. it doesn't have to be egg-47 or egg-13.
[So the only egg that can
break is the one being operated on by the break operation and
there's no need to keep track of which egg is broken.]
that's true. why do you find this contradicts the above?
Because the way the proposition is stated, it doesn't matter whether
the agent breaks the egg into the pan or whether someone else blunders
through the room and breaks it on the floor, or if the agent finds it
already broken in the carton. If it's broken the goal is satisfied,
the way I read it.
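To put the complaint in concrete terms, here is a toy version of a goal stated as a bare predicate (the names and the little world model are invented for illustration, not Sibun's formalism): the test passes as soon as any egg is broken, and it cannot say who broke it or how.

# Invented example: a goal stated as a predicate over the world state.

world = {"egg-13": "intact", "egg-47": "intact"}

def goal_satisfied(world):
    # "An egg (any old egg) is broken."
    return any(state == "broken" for state in world.values())

# The agent breaks one into the pan...
world["egg-47"] = "broken"
print(goal_satisfied(world))    # True

# ...but the test would read exactly the same if someone had blundered
# through and dropped egg-13 on the floor, or if the agent found one
# already broken in the carton.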
This last summary is ever so much simpler than all the verbiage
that led up to it, because the attempt to present a general case has
been reduced to a specific case.
in what way is it specific?
Now we simply have goal-setting, perception of the current state,
comparison with the goal, and action that reduces the error. That I
understand. What I don't get are all these nodes and arcs and that
stuff, which sounds like the innards of a computer program, not
something happening with someone doing it. To each his own.
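The part I do understand fits in a few lines. A minimal sketch of a single control loop, with a toy environment and invented numbers (not anybody's actual program):

# One control loop: reference (goal), perception, comparison, and output
# that reduces the error.  The environment and the numbers are invented.

reference = 10.0          # the goal
disturbance = 3.0         # something the environment adds on its own
perception = 0.0
output = 0.0
gain = 0.5

for _ in range(30):
    error = reference - perception      # comparison with the goal
    output += gain * error              # action that reduces the error
    perception = output + disturbance   # toy environment feedback

print(round(perception, 2))             # ends up near the reference, 10.0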
---------------------------------------------------------------------
Avery Andrews (920902.0914)--
>By contrast, real planning, as carried out with some modicum of success
>by real people (we have 4 people, 1 car, 7 hard commitments & 3 wishes
>-- how are we going to get thru Saturday?) is carried out in domains
>where people have a lot of practical experience.
Let's see if I can tease out my idea on this subject, which I've tried
to say before without much clarity.
What's essential about planning isn't that people, cars, commitments,
and so on are involved. It's that you can't have two different
sequences involving the same variable going on at the same time. Or
that you can't have an object in two places at the same time. Or that
A can't be both below B and above it at the same time. What kind of
problem it is -- that's the "practical experience" side of it. What
practical
experience shows is that when you try to do things like this
simultaneously, the world won't cooperate and you end up in an
internal conflict. There are some things you can't set as reference
signals simultaneously.
So a true planning level has to be concerned with the conflict itself,
not with what it's about. The conflict has to be discovered. It can be
discovered in ongoing experience, or it can be discovered when you try
to run two parts of your imagined world at the same time. If your
internal model is pretty good, you can discover conflicts without
having them actually happen -- you can't even imagine a cup on a
saucer and at the same time under it, although you could imagine it on
the saucer and also upside down. You can imagine driving the car away,
but you can't imagine your wife coming out of the house ten minutes
later and also driving it -- the car you're driving -- away.
That's the practical experience part.
But the planning part has to be something else. It has to be about how
you resolve conflicts between variables of different types. One way is
resequencing (you wait to go on your trip until your wife gets back).
One way is changing spatial or temporal relationships (you drive your
wife to her appointment and go on to yours and pick her up on your way
back). At the planning level, all you work with are logical variables.
At that level they don't mean specific experiences any more. If you
can't do x and y at the same time, then you sequence them. If doing x
rules out doing y, then you make a choice between x and y. And so on.
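A crude sketch of the shape of that (the goals, the conflict table, and the resequencing rule are all invented for illustration): the planning level sees only which goals conflict, and it resolves a conflict by putting the goals into successive steps; choosing one and dropping the other would be the other move.

# Invented sketch: planning over logical variables only.  "conflicts"
# lists pairs of goals that can't be pursued at the same time.

conflicts = {("drive_to_trip", "wife_drives_car")}

def conflicted(x, y):
    return (x, y) in conflicts or (y, x) in conflicts

def resequence(goals):
    # Put conflicting goals into successive steps; goals that don't
    # conflict can share a step.
    steps = []
    for g in goals:
        for step in steps:
            if not any(conflicted(g, other) for other in step):
                step.append(g)
                break
        else:
            steps.append([g])
    return steps

print(resequence(["drive_to_trip", "wife_drives_car", "make_breakfast"]))
# [['drive_to_trip', 'make_breakfast'], ['wife_drives_car']]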
What I'm trying to do is get the details of examples sorted out from
the specific functions that have to be performed. Some functions are
of a lower level than planning. What the planning level does is organize
the way the lower-level functions are carried out.
Well, I can see I have only half an idea here. But it's worth throwing
on the table. With the eggs.
---------------------------------------------------------------------
RE: Chapman's book.
I now have a copy of Chapman's book on Sonja. Heavy going; he alludes
to more than he explains. The part on visual perception isn't anywhere
near as specific as I was led to believe, ahem. He does have some
sort of data on phenomena of visual perception, and that's good and
useful. But the methods proposed for achieving them look
suspiciously like computer-program methods that just happen to pop
into mind -- bit blitting (filling out an area until a boundary is
reached), search trees, and so on. I'd say that Chapman (or Uhlmann)
is no closer to understanding form recognition than I am.
But something useful has already come out of reading this book. I
looked at the way Chapman was handling the problems and thought, "No,
that can't be how it works." And then I looked at my way and I thought
"No, that can't be how it works." It may have been something Chapman
said, or just the general atmosphere of his approach, but I suddenly
realized that treating perceptual signals simply as "how much of a
given perception is present" is just no good for the higher levels.
Probably not for any level above sensations.
Maybe it was this. Chapman seems, although not clearly, to adopt the
principle of "coded" perception, in which a system that receives a
perceptual signal can tell what it's about from the information in the
signal. In my model, of course, all perceptual signals are alike, so
they can't report both the state of the perception and its identity.
Mine is the pandemonium model, where it's up to higher-level systems
to make sense out of the behavior of lower-level signals without
knowing what they mean.
It was suddenly obvious to me that I was missing a bet with the
pandemonium model. The meaning of a given perceptual signal's
magnitude doesn't have to be just "how much." It can be "where on the
scale of variation that is being sensed." I don't mean that this
information is coded into the signal; I'm just talking about the
significance of the signal magnitude that is created by the form of
the perceptual function -- what it means in terms of lower-level
signals.
For example, there can be a signal that represents an arm angle in the
horizontal plane. When the arm is swung all the way left, the signal
is maximum and when it is centered, the signal is zero. Now you could
say that this signal represents the amount of leftness, but we can
also say that it represents the state of something as a position
between the extremes of the range. In this case there's not much
difference.
But then I thought of one of the control examples in Demo 1 (or in
some demo or other), where you can alter the proportions of a diamond.
By working the control stick you can make the diamond go smoothly from
squat and wide to narrow and tall. You can hold it in any given shape
against disturbances, or match it to another instance of a diamond.
This is a change of shape, but it's taking place in some systematic
space. So a single signal's magnitude, suitably derived from lower
signals, could represent where on this continuum of shapes the diamond
is right now. There isn't any "how muchness" to it. It's simply that
the magnitude of the signal maps onto the continuum of shapes in a
unique way. If you set the reference signal to a specific magnitude,
you can work the stick until the perceptual signal has that magnitude,
and then the diamond will be in a specific shape.
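If I put that in toy form (the formula is mine, invented for illustration; it isn't what the demo computes): one scalar, derived from the width and height signals, says where the figure sits on the squat-wide-to-narrow-tall continuum, and a reference value for that scalar picks out one shape.

import math

# Invented perceptual function: map a diamond's width and height onto a
# single scalar locating the shape on the squat-to-tall continuum.

def shape_signal(width, height):
    # Near 0 when squat and wide, near 1 when narrow and tall.
    return math.atan2(height, width) / (math.pi / 2)

print(round(shape_signal(8.0, 1.0), 2))   # squat and wide  -> 0.08
print(round(shape_signal(1.0, 1.0), 2))   # square-ish      -> 0.5
print(round(shape_signal(1.0, 8.0), 2))   # narrow and tall -> 0.92

# Set a reference of, say, 0.5 for this signal and work the stick until
# the perceptual signal matches it: the diamond holds one specific shape
# against disturbances.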
When you think of just one such continuum you aren't much further
ahead. But now think of lots of different continua, different ways of
mapping continuous changes in the visual field onto a single
magnitude. For instance, one continuum might be the relative lengths
of pairs of the sides of the figure (I won't say diamond now). Another
might be the angular position of a line from the centroid to each
vertex. Another might be the total size of the figure (the vertices
moving radially in and out). If you have enough of these continua,
sufficiently different and sufficiently independent, the "diamond"
becomes a particular set of values of the perceptual signals
indicating positions along each of these continua. A higher-level
system could then come to recognize this particular combination as the
meaning of "diamond." Perhaps even as a weighted sum.
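In toy form again (the signals, the figure, and the weights are all invented for illustration): a few continuum signals computed from the vertices, and a higher-level recognizer that is nothing but a weighted sum of them.

import math

# Invented continuum signals over a four-vertex figure, plus a
# weighted-sum "diamond" recognizer at the higher level.

def side_ratio(vertices):
    # Relative lengths of the sides: 1.0 when all four sides are equal.
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    sides = [dist(vertices[i], vertices[(i + 1) % 4]) for i in range(4)]
    return min(sides) / max(sides)

def size_signal(vertices):
    # Mean distance of the vertices from the centroid (radially in or out).
    cx = sum(x for x, _ in vertices) / 4.0
    cy = sum(y for _, y in vertices) / 4.0
    return sum(math.hypot(x - cx, y - cy) for x, y in vertices) / 4.0

def diamondness(vertices):
    # Higher-level system: a weighted sum of the continuum signals.
    return 0.8 * side_ratio(vertices) + 0.2 * min(size_signal(vertices) / 10.0, 1.0)

figure = [(0, 2), (1, 0), (0, -2), (-1, 0)]   # a tall, narrow diamond
print(round(diamondness(figure), 2))          # 0.83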
Maybe everybody else was already thinking about it this way, but I
wasn't. Now I begin to get a little optimistic about actually modeling
configuration perception. This is going to have to simmer for a while.
----------------------------------------------------------------------
Best to all,
Bill P.