Sequences

[From Bruce Abbott (960113.1905 EST)]

Someone I'm not listening to in order to keep my perception of sarcastic
remarks near my reference level of zero (960113.1300) --

Bruce Abbott (960113.1450)

How can a sequential process in which X ----> X be open loop?

Because X(n) is not having an effect on X(n-1) _while_ X(n-1)
is having an effect on X(n). Your equation describes a
system with one direction of causality -- from X(n-1) to X(n);
that's _open loop_; there is no causal loop.

Here is the diagram:
                           X ------->[f(X)]
                           ^              |
                           |              |
                           +--------------+

This is a delay loop; not a closed causal loop. Nice try, but
no cigar.

Consider this system:

(1) X = k1*Y

(2) Y = k2*X

Now there's a closed loop if I ever saw one. Y affects X and
_simultaneously_, X affects Y.

The loop gain, g, is obtained by multiplying the coefficients around the loop:

(3) g = k1*k2

Simulate this iteratively on a computer and on each iteration you have the
following inside the loop:

(5) X(n) = k1*Y(n-1), and

(6) Y(n) = k2*X(n-1)

But the same result can be given by

(7) X(n) = k1*k2*X(n-1) = g*X(n-1)

Here, g is a linear operator in a sequential process. But as equation 3
shows, it is also the loop gain of the closed loop given by equations 1 and 2.
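
The same point can be put in a short simulation. The following is a minimal
Python sketch, not part of the original exchange; the values of k1, k2 and the
starting X are arbitrary, and it assumes the two assignments are executed one
after the other inside each pass of the loop, which is what makes each pass
multiply X by g:

# Hedged sketch: iterate equations (5) and (6) inside one loop body and
# compare with the single sequential operator of equation (7).  The numbers
# are arbitrary choices for illustration.

k1, k2 = 0.5, 1.6          # loop gain g = k1*k2 = 0.8
g = k1 * k2

x = 1.0                    # state of the two-equation loop
y = k2 * x                 # Y starts out consistent with equation (2)
z = x                      # state of the collapsed form, equation (7)

for n in range(1, 9):
    x = k1 * y             # (5) X(n) = k1*Y(n-1)
    y = k2 * x             # (6), applied to the X just computed this pass
    z = g * z              # (7) X(n) = g*X(n-1)
    print(n, round(x, 4), round(z, 4))   # x and z track each other exactly

With |g| < 1 the signal decays toward zero on each pass; with |g| > 1 it grows
without bound; with g = 1.0 it holds its value, which is the equilibrium
condition mentioned next.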

Bruce Abbott (960112.2015)

To be in equilibrium the loop gain has to be 1.0

Bill Leach 960113.09:55 U.S. Eastern Time Zone --

Equilibrium systems clearly have "circular causality" so the "closed loop
method of analysis" is appropriate.

. . .

Discussing "gain" in such (equilibrium) systems is always problematical.
If the system is in a "stable state" then the loop gain of the entire
system when including effects of all disturbances is exactly unity. Gain
is only meaningful when (transient) unstable conditions exist.

Thanks for the confirmation, Bill.

Regards,

Bruce

[From Rick Marken (2003.06.04.1645)]

Bruce Nevin (2003.06.02 17:14 EDT)

Dependence is more general and adaptable than sequence. Sequences can do
much of what we might think are at the program level; dependence can do not
only sequences, but also if-then, and, or, and possibly negation. Control
of dependence is the only cognitive ability that is needed to account for
syntax and the informational capacities of language (Harris 1991).

I don't understand. If I perceive (and control) only dependence -- not sequence
or program -- then how am I able to perceive A B C as part of a repeating sequence
(A then B then C) or as the output of a program (if A then B else C). When I
perceive it as part of a repeating sequence, I will keep the sequence happening by
producing A B C after A B C; when I perceive it as the output of a program I will
keep the program happening by producing C C C after A B C. How could control of
dependence explain what I am doing in those two cases? PCT would explain it as
control of two different types of perception: sequence and program.

So is there a Dependence level instead of a Sequence level?

I think dependence is a perceptual relationship. So it's already handled by the
relationship perception level.

How would a sequence-level control system differ from dependences stored in
memory? Is there a way to test this experimentally?

I don't even know what this means. Are you asking for an experiment that would
distinguish control of dependence from control of sequence? If so, I think there
is, as long as "dependence" refers to a perceptual type that is not a sequence. If
"dependence" means "A then B" then a dependence is a sequence.

Best

Rick


--
Richard S. Marken, Ph.D.
Senior Behavioral Scientist
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bruce Nevin (2003.06.05 14:29 EDT)]

Rick Marken (2003.06.04.1645)–

Bruce Nevin (2003.06.02 17:14 EDT)

Dependence is more general and adaptable than sequence. Sequences can do
much of what we might think are at the program level; dependence can do not
only sequences, but also if-then, and, or, and possibly negation. Control
of dependence is the only cognitive ability that is needed to account for
syntax and the informational capacities of language (Harris 1991).

I don’t understand. If I perceive (and control) only dependence –
not sequence or program – then how am I able to perceive A B C as part
of a repeating sequence (A then B then C)

I control C. C cannot occur unless B has occurred. B cannot occur unless
A has occurred. (I cannot control A and B and C concurrently.)

When I perceive it as part of a repeating
sequence, I will keep the sequence happening by producing A B C
after A B C

A must occur when C has occurred. Plus the above.

So is there a Dependence level instead of a Sequence level?

I think dependence is a perceptual relationship. So it’s already handled
by the relationship perception level.

OK, I’ll buy that. Sequence then is an interconnection of
dependence-relation controllers. There’s nothing there but the
dependence-relation controllers. Control of sequence is not carried out
by another system on a sequence level, but rather by dependence-relation
controllers hooked up in a certain way, such that successful control by
one is the condition on which the next one is dependent in order to start
controlling.

In B:CP pp. 142-145, Bill proposed a sequence of specialized flip-flop
latches on the sequence level. Think of these as dependence controllers
on the relationship level. The signal from the [s] recognizer goes many
places. One place it goes is to a relationship controller that sends
along a signal that corresponds to perceiving [jus] juice. But in
order for it to fire, a signal from the [u] detector must have been
received by another relationship controller; that signal is the
independent term of this relationship. That second controller sends a
signal to the first only if it has received a signal from the [j]
detector.

Input        Output
[j], [u]     [ju]    (controls relationship of [j], [u])
[ju], [s]    [jus]   (controls relationship of [ju], [s])
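
A minimal Python sketch of that cascade (my illustration, not the B:CP
circuit; the class and names are assumptions): each recognizer latches its
independent term and fires only when its dependent term arrives afterward.

# Hedged sketch of the cascade tabulated above: [ju] fires only if [u]
# arrives after [j] has been latched, and [jus] fires only if [s] arrives
# after [ju] has been latched.  Names and structure are illustrative only.

class DependenceRecognizer:
    """Fires when `dependent` arrives while `independent` has been latched."""
    def __init__(self, independent, dependent):
        self.independent = independent
        self.dependent = dependent
        self.latched = False   # the independent term has been received
        self.on = False        # this recognizer's output signal

    def receive(self, signal):
        if signal == self.independent:
            self.latched = True
        elif signal == self.dependent and self.latched:
            self.on = True
        return self.on

ju = DependenceRecognizer('j', 'u')     # controls relationship of [j], [u]
jus = DependenceRecognizer('ju', 's')   # controls relationship of [ju], [s]

for segment in ['j', 'u', 's']:
    if ju.receive(segment):
        jus.receive('ju')               # pass the [ju] signal up to the next controller
    jus.receive(segment)

print(jus.on)    # True for j-u-s; stays False for, say, u-j-s or s-u-j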

when I perceive [A B C] as the output of a
program I will keep the program happening by producing C C C after A B C.

or [how am I able to perceive A B C] as the
output [i.e. one of the possible outputs] of a program (if A then B else
C).

This is simpler to state in terms of relationships than in terms of
dependences. A program is an arrangement of relationships such as
consequence (if-then), or equivalence of alternatives (exclusive or), or
optionality (there’s a better way to state inclusive or as a relationship
but I’m lazy). It isn’t that there are program controllers in a separate
part of the brain, but rather that relationship-controllers right there
on the relationship level are interconnected in a certain way. Just as
there aren’t sequence controllers in a separate part of the brain, but
rather that relationship-controllers right there on the relationship
level are interconnected in a certain way.

Are you asking for an experiment that
would distinguish control of dependence from control of sequence? If so,
I think there is, as long as “dependence” refers to a
perceptual type that is not a sequence. If “dependence” means
“A then B” then a dependence is a sequence.

You have said it. A dependence relation of a temporal sort is a
sequence.

    /Bruce Nevin


[From Rick Marken (2003.06.05.1525)]

Bruce Nevin (2003.06.05 14:29 EDT)–
Rick Marken (2003.06.04.1645)–

If I perceive (and control) only dependence
– not sequence or program – then how am I able to perceive A B C as part
of a repeating sequence (A then B then C)

I control C. C cannot occur unless B has occurred. B cannot occur
unless A has occurred. (I cannot control A and B and C concurrently.)

Yes. That explains how I could control the perception of the sequence
ABC. But it doesn’t explain how I can perceive and control the program
if A then B else C, which could also be represented as ABC.

I think dependence is a perceptual relationship. So it’s already handled
by the relationship perception level.

OK, I’ll buy that. Sequence then is an interconnection of
dependence-relation controllers. There’s nothing there but the
dependence-relation controllers.

No. There’s also the sequence. I can perceive that A depends (in some way
that you never specified) on B but I can also perceive the sequence A B.
The dependence relationship between A and B is not the same as the
sequence (unless that’s what you are referring to by the word
“dependence”). Say that the dependence is that B doesn’t happen unless A
occurs first. This dependence can still exist even if C happens to occur
right after A. Then we get the sequence ACB, which is not the same as the
sequence AB but B is, nevertheless, actually dependent on A in both
cases, in the sense that it wouldn’t have occurred unless A had occurred.
Control of sequence is not carried out by another system on a sequence
level, but rather by dependence-relation controllers hooked up in a
certain way, such that successful control by one is the condition on
which the next one is dependent in order to start controlling.

I don’t see how you can design a system that controls sequences unless
it perceives sequences. Unless a dependence relation is a sequence, such
a relationship is not going to work as the input to a sequence controller.
It isn’t that there are program controllers in a separate part of the
brain, but rather that relationship-controllers right there on the
relationship level are interconnected in a certain way.

They have to be interconnected in such a way that they become a perceiver
of a particular program. Individual relationship controllers perceive
relationships but they can’t perceive that they are part of the program
“find the square root of 100” rather than the program “divide 100 by 60”.

Program perceiving input functions are needed to perceive and control
programs; sequence perceiving input functions are needed to perceive
sequences; relationship perceiving input functions are needed to perceive
relationships, etc. Because perception is hierarchical (in theory) the
sequence perceiving input functions may receive inputs that are the
outputs of the relationship perceiving input functions, for example. But
just because there are a bunch of relationship perceiving input functions
that perceive the elements of a sequence doesn’t mean that the system can
perceive and control sequences. There must be an input function that
produces a signal indicating the state of the sequence (the degree to
which it is occurring, perhaps) in order for a system to perceive and
control a sequence.

Best regards

Rick


[From Bill Powers (2003.06.05.1645 MDT)]

Bruce Nevin (2003.06.05 14:29 EDT)--

I control C. C cannot occur unless B has occurred. B cannot occur unless A
has occurred. (I cannot control A and B and C concurrently.)

Let's take these statements apart. A useful principle is that an
"occurrence" can't affect anything in the future unless it leaves a
physical effect that persists. All actions and interactions must take place
in present time.

In logic design, this means that a transition of A from off to on must set
a flip-flop (a simple memory bit), the state of which indicates whether A
has occurred at some time in the past. Call the state of this flip-flop A'
(A-prime). If A' is true or on, then A has occurred. Once the flip-flop is
set, A can return to the off state; A'-on continues to indicate that the
"occurrence" has occurred. Note that the state of A' indicates _one or more
occurrences_ of A. Note also that A' must be off before the first
occurrence of A that is to be considered.

Now consider the statement, "B cannot occur unless A has occurred." This
statement as it stands does not say that if A does occur, B will occur. I
will assume that the occurrence (change of state) of B depends not only on
A' being on, but on some other event taking place: D (a control action, I
suppose). If D occurs while A' is on, B _will_ occur. If D occurs with A'
in the off state, B cannot occur.

We now have to ask what is meant by an "occurrence" of B. Does this mean a
transition from B being absent (off) to being present (on) and then another
transition taking place immediately from on to off, or does it mean the
off-to-on transition occurring and then B staying on
indefinitely? Whichever is meant, or whichever happens, we can let B turn
another flip-flop on, the state being symbolized by B'. If B' is on, B has
occurred (even if B then returns to zero).

Now that we have B', we can ask whether C _will_ occur when B' is on, or if
another variable E must occur with B' on before C can occur. And then we
have to answer the question of what we mean by an "occurrence" of C. The
answers are left as an exercise to complete the logic design.
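
Since the last steps are left as an exercise, here is one way they might be
filled in, as a hedged Python sketch rather than a claim about the intended
design; A', B', C' are the latches and D, E the extra events:

# Hedged sketch completing the exercise above (one possible reading, not
# the author's design).  A_, B_, C_ stand for the latches A', B', C'; D and
# E are the extra events (control actions) required for B and C to occur.

class SequenceLogic:
    def __init__(self):
        self.A_ = self.B_ = self.C_ = False   # all latches start off

    def occur_A(self):
        self.A_ = True        # the off-to-on transition of A sets A'

    def occur_D(self):        # a control action aimed at producing B
        if self.A_:           # B can occur only while A' is on
            self.B_ = True    # the occurrence of B sets B'

    def occur_E(self):        # a control action aimed at producing C
        if self.B_:           # C can occur only while B' is on
            self.C_ = True    # the occurrence of C sets C'

s = SequenceLogic()
s.occur_D()                   # too early: A' is still off, so B does not occur
s.occur_A()
s.occur_D()
s.occur_E()
print(s.A_, s.B_, s.C_)       # True True True -- the sequence has occurred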

An unanswered question is whether these logical relationships are being
imposed on perceptions, or whether they express properties of the
environment. Either way, it's pretty clear that a logic-level control
system is needed to provide the required variables D and E in the proper
order to allow C to occur, and to perceive the states of A', B', maybe C',
D, and E.

I've gone through this to show what's required to turn an informal-language
description into a set of quantitative statements that we could build into
a working model. Hope it was at least interesting.

Best,

Bill P.

[From Bruce Nevin (2003.06.08 14:08 EDT)]

Rick Marken (2003.06.05.1525)--

Bruce Nevin (2003.06.05 14:29 EDT)--

Rick Marken (2003.06.04.1645)--

If I perceive (and control) only dependence -- not sequence or program -- then how am I able to perceive A B C as part of a repeating sequence (A then B then C)

I control C. C cannot occur unless B has occurred. B cannot occur unless A has occurred. (I cannot control A and B and C concurrently.)

Yes. That explains how I could control the perception of the _sequence_ ABC. But it doesn't explain how I can perceive and control the _program_ if A then B else C, which could also be represented as ABC.

If A then B else C generates (AB or AC), it does not generate ABC. It also generates AX, where X is a category comprising (B or C). So it could be stored in memory as a relation of equivalence (either will do) between two dependency relations, AB and AC. Or it could be stored in memory as a dependency relation between A and the equivalence set (B or C).
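
For concreteness, a hedged sketch in Python of the two storage options just
described (the data structures are only my illustration, not a claim about how
memory works):

# Hedged sketch of the two ways of storing "if A then B else C" described
# above.  The tuples and sets are illustrative data structures only.

# Option 1: an equivalence ("either will do") between two dependency
# relations, AB and AC.
as_equivalence_of_dependencies = {('A', 'B'), ('A', 'C')}

# Option 2: a single dependency between A and the equivalence set (B or C).
as_dependency_on_a_set = ('A', frozenset({'B', 'C'}))

def generated_by_option_1(pair):
    return pair in as_equivalence_of_dependencies

def generated_by_option_2(pair):
    head, alternatives = as_dependency_on_a_set
    return pair[0] == head and pair[1] in alternatives

for pair in [('A', 'B'), ('A', 'C'), ('B', 'C')]:
    print(pair, generated_by_option_1(pair), generated_by_option_2(pair))
# ('A', 'B') and ('A', 'C') are generated under both storages; ('B', 'C') under neither.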

I think dependence is a perceptual relationship. So it's already handled by the relationship perception level.

OK, I'll buy that. Sequence then is an interconnection of dependence-relation controllers. There's nothing there but the dependence-relation controllers.

No. There's also the sequence.

This could be no more than the interconnection of dependence-relation controllers.

I can perceive that A depends (in some way that you never specified) on B

With a different dependence-controller.

but I can also perceive the sequence A B.

You can perceive the sequence A B with a controller of a dependence relation between A and B. It's only when you get 3 or more terms that you get more than one dependence relation interconnected.

The dependence relationship between A and B is not the same as the sequence (unless that's what you are referring to by the word "dependence").

Temporal relation is one kind of dependence relation.

Say that the dependence is that B doesn't happen unless A occurs first.

I cannot control B without first controlling A.

This dependence can still exist even if C happens to occur right after A. Then we get the sequence ACB, which is not the same as the sequence AB but B is, nevertheless, actually dependent on A in both cases, in the sense that it wouldn't have occurred unless A had occurred.

Lots of irrelevant things happen in the course of any sequence. Birdsong out the window. Clock ticking. Spider moving a leg. Breathing. That does not make them part of the sequence. The dependence is a relation between B and A, regardless of whatever else might happen to occur.

Control of sequence is not carried out by another system on a sequence level, but rather by dependence-relation controllers hooked up in a certain way, such that successful control by one is the condition on which the next one is dependent in order to start controlling.

I don't see how you can design a system that controls sequences unless it perceives sequences.

I don't deny that a mechanism is required to control a sequence. I'm talking about some alternative models of that mechanism.

Unless a dependence relation is a sequence, such a relationship is not going to work as the input to a sequence controller.

Temporal relation is one kind of dependence relation.

It isn't that there are program controllers in a separate part of the brain, but rather that relationship-controllers right there on the relationship level are interconnected in a certain way.

They have to be interconnected in such a way that they become a perceiver of a particular program.

Yes. All that is required is the dependence controllers and their interconnections.

But care is called for here. Metaphors are treacherous. There isn't a whole lot of evidence that the encoding by which we program a computer directly corresponds to what humans do.

Individual relationship controllers perceive relationships but they can't perceive that they are part of the program "find the square root of 100" rather than the program "divide 100 by 60".

Mathematical problem solving is not a good source of examples, having its basis in language. Think of something else that you diagnose as being controlled at the Program level.

Program perceiving input functions are needed to perceive and control programs; sequence perceiving input functions are needed to perceive sequences; relationship perceiving input functions are needed to perceive relationships, etc. Because perception is hierarchical (in theory) the sequence perceiving input functions may receive inputs that are the outputs of the relationship perceiving input functions, for example. But just because there are a bunch of relationship perceiving input functions that perceive the elements of a sequence doesn't mean that the system can perceive and control sequences. There must be an input function that produces a signal indicating the state of the sequence (the degree to which it is occurring, perhaps) in order for a system to perceive and control a sequence.

Yes, that is the standard theory.

If some cognitive process is based in associative memory some interesting possibilities open up.

         /Bruce Nevin


[From Bruce Nevin (2003.06.08 14:09 EDT)]

Bill Powers (2003.06.05.1645 MDT)--

It's infuriating, I'm sure, to have some twerp suggest changes who hasn't built any models. (Well, tweaked some spreadsheet simulations, but that hardly counts.) And it's true that any verbal formulation can be made more precise. But this is not just because words are imprecise or ambiguous. The thing to be rendered more precisely in the design of a model is not the words, after all, but the phenomena that the words were meant to indicate.

Perhaps it's hard to imagine how anything in addition to logic circuit design can enter into a model.

As you imply by the word "simple", memory is of course not limited to a memory bit flipping on and off. Two perceptions can be associated in memory, and if the association is that when either is present in the environment the other always is also, one consequence is that if one is perceived the other {is assumed present, is looked for, is startling by its absence, etc.} Is this to be modeled by a control system whose control of a relationship between two perceptions has the appearance of associative memory? (In which case we must ask why and how does its control of that relationship matter to arbitrary other control systems yielding such different responses on different occasions.) Or could it be modeled by these various control systems getting signals from associative memory rather than from lower-level perceptual input or higher-level error output?

I control C. C cannot occur unless B has occurred. B cannot occur unless A
has occurred. (I cannot control A and B and C concurrently.)

Let's take these statements apart. A useful principle is that an
"occurrence" can't affect anything in the future unless it leaves a
physical effect that persists. All actions and interactions must take place
in present time.

Therefore memory is required for control of temporal relationships, including dependences where the related perceptions are not present at the same time.

In logic design, this means that a transition of A from off to on must set
a flip-flop (a simple memory bit), the state of which indicates whether A
has occurred at some time in the past. Call the state of this flip-flop A'
(A-prime). If A' is true or on, then A has occurred. Once the flip-flop is
set, A can return to the off state; A'-on continues to indicate that the
"occurrence" has occurred.

In the latching sequence in B:CP for the word "juice" we want the reinforcing signal in the flip-flop for [j] to decay if the next two segments are not [u] and [s]. Suppose the phrase that is heard is "jam on my shoe". We don't want the flip-flop for that initial [j] to still be hanging around when the [u] of "shoe" comes in. In fact, if the second latch is turned on by a signal from the [j] latch but does not immediately receive a signal from the [u] recognizer it has to send an inhibitory signal back to turn off the recirculating loop in the [j] recognizer. The person could be saying "jam - juice I mean". (I don't mean to pick on that 1973 proposal, I realize it was presented as a demonstration of feasibility.) The temporal delay or even interruption tolerated between terms of a sequence varies arbitrarily.
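
A hedged Python sketch of that decay requirement (my illustration, not the
B:CP circuit; the window of two segments is an arbitrary choice):

# Hedged sketch of the decay described above: the latch for the previous
# term expires unless the next expected segment arrives within a short
# window.  The window length (2 segments) is an arbitrary assumption.

WINDOW = 2

def recognize_jus(segments):
    """True if [j], [u], [s] arrive in order, each within WINDOW segments."""
    latched, age = None, 0    # latched is None, 'j', or 'ju'
    for seg in segments:
        age += 1
        if latched and age > WINDOW:
            latched = None            # the reinforcing signal has decayed
        if seg == 'j':
            latched, age = 'j', 0
        elif seg == 'u' and latched == 'j':
            latched, age = 'ju', 0
        elif seg == 's' and latched == 'ju':
            return True
    return False

print(recognize_jus(['j', 'u', 's']))            # True: "juice"
print(recognize_jus(['j', 'a', 'm', 'u', 's']))  # False: [j] decayed before [u]

Without the decay, the second call would wrongly report a [jus], which is the
"jam - juice I mean" problem in miniature.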

Note that the state of A' indicates _one or more occurrences_ of A.

In general, I should think a sequence ABC isn't expansible as AA*BB*CC* (that is, with Kleene star notation, AABC, AAABC, ABBC, etc.), so this design is departing from the phenomenon. A provision for temporal delay and possible interruption between terms of a sequence probably would as a byproduct limit each term to one occurrence.

Note also that A' must be off before the first occurrence of A that is to be considered.

This potentially conflicts with the above ("one or more").

Now consider the statement, "B cannot occur unless A has occurred." This
statement as it stands does not say that if A does occur, B will occur.

Yes. It only says if you're controlling B you can't do so unless you are already controlling A (or the result or product of controlling A). I can't drive to the store without first getting into the car. I can't replace the swing-latch on the fence without first cutting a piece of wood of appropriate size.

I will assume that the occurrence (change of state) of B depends not only on
A' being on, but on some other event taking place: D (a control action, I
suppose). If D occurs while A' is on, B _will_ occur. If D occurs with A'
in the off state, B cannot occur.

You don't need D etc. if when I say "A occurs" I am referring to control of A, as above. My use of the word "occur" has confused things.

We now have to ask what is meant by an "occurrence" of B. Does this mean a
transition from B being absent (off) to being present (on) and then another
transition taking place immediately from on to off, or does it mean the
off-to-on transition occurring and then B staying on
indefinitely? Whichever is meant, or whichever happens, we can let B turn
another flip-flop on, the state being symbolized by B'. If B' is on, B has
occurred (even if B then returns to zero).

Now that we have B', we can ask whether C _will_ occur when B' is on, or if
another variable E must occur with B' on before C can occur. And then we
have to answer the question of what we mean by an "occurrence" of C. The
answers are left as an exercise to complete the logic design.

An unanswered question is whether these logical relationships are being
imposed on perceptions, or whether they express properties of the
environment.

Both, so long as the control loop is closed through the environment.
I see a flash and then I hear thunder. This temporal sequence is evidently a property of the environment (although science, even physics, never proves anything), but the perception of them as a sequence is imposed on the perceptions of the flash and the crash. Indeed (according to hypothesis) those event perceptions are imposed on (transition? configuration? and) sensation perceptions, which are imposed on intensity perceptions. I control those perceptions.

I assumed you used the word "logical" because this is an exercise in logic design, and not because a logic level of the hierarchy is involved. But now you take that as an obvious step.

Either way, it's pretty clear that a logic-level control
system is needed to provide the required variables D and E in the proper
order to allow C to occur, and to perceive the states of A', B', maybe C',
D, and E.

That's quite a leap, and an unnecessary one. If "A occurs" etc. refers to control of A etc. then there is no D etc. If A' is the persistence of A by a recirculation loop as in the B:CP latch mechanism then it does not need to be perceived on a higher level. If the dependency is an association in memory no recirculation loop or other additional mechanism is required for persistence. Perception of A evokes the memory of the association A then B, which in some way activates control of B. (Once we could say that the memory provides a reference for controlling B, but it seems we can't say that any more. See below.)

I've gone through this to show what's required to turn an informal-language
description into a set of quantitative statements that we could build into
a working model. Hope it was at least interesting.

There's quite a bit unspecified in the memory-based idea. How is memory implemented? How do signals get from memory into control loops and vice versa, and in general how are logic circuits and memory connected together?

We had the surmise that memory provides reference signals. This ran afoul of the origination of reference signals in higher-level error. No way has been specified for error output to address memory yielding reference signals that vary appropriately. There remains only the special case of innate ("intrinsic") references.

How do signals get from memory into control loops? Memory can provide perceptual input signals, or at least to me it seems obvious that this is what is happening when we 'remember' something.

When perceptual inputs that originate in memory are controlled, how is the control loop closed? Not through the environment. Through the imagination switch? That would have to be imagination switches, plural, many, down to whatever level of detail at whatever levels of perception are evoked in the memory and affected by control.

For example, a couple of years ago I bought a sailboat for $350. This entails quite a bit of learning.

I'm standing in front of the house holding a 20 foot piece of half-inch line. I've spliced an eye in one end that just fits over a cleat on my boat, and an eye in the other end through which I've fitted a carabiner. My eyes turn toward a distant tree, but it seems that they don't focus on it; in any case, I don't see it. There's a kind of tunnel vision. What I'm looking at (albeit faintly) is my mooring float on the surface of the water. It's a styrofoam lobstering float with a piece of 1-inch PVC driven through the hole. The line up through it ends in an eye loop spliced around a metal thimble. I see the carabiner being hooked through that eye and the line paying out to the cleat on the boat. But when I cast off I want to leave the line in the water. What's the point of the carabiner? It encourages someone to unhook the line from the float and carry it on the boat. But the near end of the line passes through a 4-foot length of plastic tubing, which I glance at now and heft in my hands - chafing gear to prevent the line wearing through in a storm and parting like it did last year (brief glimpse of the mooring float in the water and no boat, revisit associated emotions). Makes it even more awkward to stow the mooring line forward of the mast. Image of line and chafing gear underfoot as we work the sails, being kicked overboard, etc. Anyway, you're supposed to drop it in the water and pick it up on your return. Aha! I see one of those square chain link things that opens on one side with a kind of nut and threaded shaft. I used one to connect the line on the mooring float to the anchor rode itself. Get another and save the carabiner for someplace you need it. I focus my eyes down on the line in my hand and remove the carabiner from the loop in the end. I now have a reference for buying that other kind of connector, and later that afternoon I drive down to the store to get it, and then I put it on the line. I'm one increment closer to being ready to put the boat in the water.

Here, I've just remembered and written about a sequence that involved memory, imagination, and real-time control of perceptions. There were remembered perceptions that changed as I controlled them. This control of perceptions interiorly rather than through the environment is how I experience imagination.

For example, when I remembered the environment on the boat out on the water, I imagined clipping the line to the mooring float with the carabiner in my hand. I imagined someone unclipping it from the mooring. I included my memory (and perhaps my real-time perceptual input) of the line in my hand, and I combined the two, leaving one end of the mooring line on the cleat forward of the mast and stowing the remainder nearby somehow. And that seemed very awkward to me. Taking it aft and stowing it was very awkward as well, there's more than enough to do as you drop sails on approach to the mooring. I remembered the threaded connector holding two lines together when I prepared the anchor and mooring float earlier this Spring. I saw such a connector replacing the carabiner.

Each of these images includes remembered perceptions on several levels, at least intensities, sensations, configurations, transitions, and relationships. They are not as vivid as the trees in the distance (if I let my eyes see them, so to speak), but they are present. That is, when I see a cleat on top of the boat, I see the color of the boat, the color of the cleat, the edges that delineate the configurations, the configurations, my hands stretching the loop as I slip it over the cleat, the line running taut out through the guide to the float, and so on.

Perhaps I am not controlling these perceptions inwardly, but am only combining memories -- e.g. transitions such as the loop being stretched over the cleat, the line lying loose and being pulled taut. My only memory of pulling that loop over the cleat was when I did it experimentally while standing on the frame of the trailer and reaching up to do it where I could barely see the cleat, to confirm that I hadn't made it too small. Yet I saw myself doing it from above, as though standing over the cleat. It appears that I combined my memory of bending above the cleat together with my memory of what that loop looks like in my hands. (It did seem to be that very loop in that very line.) Just as I combined my memory of the carabiner on the end of the line with my memory of the loop at the mooring float, and then substituted that other type of connector in place of the carabiner. This combining and altering of images, which is what I call imagination, seems like control, but if so the control loop seems to be closed through memory rather than through the environment. In imagination, I can change the color of the boat, I can make the line out of pink playdoh (Oops! now the boat is plastic). While I do this, my eyes are looking "into the distance" and I am not seeing my environment.

Signals seem to come into control loops from memory as perceptual inputs. Memories seem to include signals from low in the hierarchy, and it seems to be possible to change those perceptions. These changes seem to be controlled. When I actually stand on my boat I can't change the metallic color of the cleat to black or red or burgundy just looking at it, but I can remember what it looks like and then I can change that inward image of it, its color, its shape, I can even make it move. (Disney, what have you done to us with your dancing dinner plates?) So this seems like control closed through something like associative memory rather than closed through the environment.

A model of these phenomena could also simplify the model of dependence and sequence. It could account for our ability to quickly invent and control ad hoc relations and sequences, without invoking reorganization.


[From Bill Powers (2003.06.08.1258 MDT)]

Bruce Nevin (2003.06.08 14:09 EDT)--

It's infuriating, I'm sure, to have some twerp suggest changes who hasn't
built any models. (Well, tweaked some spreadsheet simulations, but that
hardly counts.) And it's true that any verbal formulation can be made more
precise. But this is not just because words are imprecise or ambiguous. The
thing to be rendered more precisely in the design of a model is not the
words, after all, but the phenomena that the words were meant to indicate.

Not infuriating at all. What you propose is a class of controlled variable
to be added to the model, or if not added, at least identified within
available features of the model. Before that can be done, we have to
specify exactly what phenomenon is being described, and that requires
analysis into more specific processes. Example: I wanted to add memory to
the model, and the first cut at this was to decide that the memory
phenomenon had to be equivalent to a time-delay in a perceptual channel.
This had nothing to do with associative addressing, control, or anything
else: just the basic phenomenon. Then the basic idea could be elaborated.

Perhaps it's hard to imagine how anything in addition to logic circuit
design can enter into a model.

Modeling requires one to decide what is important in a new model and what
can be left as a detail to be figured out later. We know we have to start
with simple things and then add details; just diving into the deep end of
the pool before paddling around in shallow water for a while is dangerous.
Concepts like "dependency" are not primitive givens, but complex notions
with shifting meanings, like most terms in ordinary language. We have to
take them apart before we can see how or if they add something new to the
phenomena of interest.

As you imply by the word "simple", memory is of course not limited to a
memory bit flipping on and off. Two perceptions can be associated in
memory, and if the association is that when either is present in the
environment the other always is also, one consequence is that if one is
perceived the other {is assumed present, is looked for, is startling by its
absence, etc.} Is this to be modeled by a control system whose control of a
relationship between two perceptions has the appearance of associative
memory? (In which case we must ask why and how does its control of that
relationship matter to arbitrary other control systems yielding such
different responses on different occasions.) Or could it be modeled by
these various control systems getting signals from associative memory
rather than from lower-level perceptual input or higher-level error output?

This is diving into the deep end of the pool. A memory bit flipping on and
off is _sufficient_ to indicate that something has happened, and serves as
a memory of an occurrence when all you need to know is whether it has
happened. This leads to a deeper understanding of what we mean by an
occurrence: until the need for a lasting trace is noticed, we take it for
granted that we know that an event has occurred even though it is not
occurring right now. That's why the flip-flop came up: I noticed this fact
about occurrences, realized that there has to be some indicator of a past
occurrence, and the flip-flop was the simplest model of such an indicator
that I could think of. Someone else might have thought of a mark on a chalk
board, or a change in RNA chemistry. No need to invoke all the human memory
apparatus with all its complexities at this point.

Therefore memory is required for control of temporal relationships,
including dependences where the related perceptions are not present at the
same time.

Any lasting trace can be called "memory", but not all memory is the
equivalent of a full-blown tape recorder. The retina, for example, retains
the effects of a flash of light for something like a tenth of a second, so
we continue to see the light after the photons are no longer arriving.
Should we call that a "memory?" I don't think so; asserting membership in
that abstract class tells us nothing we don't already know. Time-delays are
involved in perceiving rates of change, but I doubt that they share any
important properties with other processes we call memory.

Then there is the fact that some perceptions, like the development of
strength in the center of a chess board, take time to form, and an equal
time to change state. The occasion for such a change of state of the
perception might be a move of a pawn that took place a minute or two ago.
So should we call this persistent effect of the move a "memory" of it?
Again, I don't think so. It's just a slow perception.

So perhaps the bit flip shouldn't even be thought of as memory; it could be
just the way in which a perceptual signal is generated that says "A has
occurred." As you say, this signal should fade with time so we don't get
cluttered up with every past event that has ever occurred. Keep it simple.

In the latching sequence in B:CP for the word "juice" we want the
reinforcing signal in the flip-flop for [j] to decay if the next two
segments are not [u] and [s]. Suppose the phrase that is heard is "jam on
my shoe". We don't want the flip-flop for that initial [j] to still be
hanging around when the [u] of "shoe" comes in.
In fact, if the second
latch is turned on by a signal from the [j] latch but does not immediately
receive a signal from the [u] recognizer it has to send an inhibitory
signal back to turn off the recirculating loop in the [j] recognizer. The
person could be saying "jam - juice I mean". (I don't mean to pick on that
1973 proposal, I realize it was presented as a demonstration of
feasibility.) The temporal delay or even interruption tolerated between
terms of a sequence varies arbitrarily.

See? You're designing with neural currents already. I agree that some
spontaneous timed reset mechanism needs to be there -- some experimentation
would be needed to see what the time delay should be (what is
"immediately?"). The actual circuit design wouldn't be a problem. Defining
what is wanted is always the hardest part.

Note that the state of A' indicates _one or more occurrences_ of A.

In general, I should think a sequence ABC isn't expansible as AA*BB*CC*
(that is, with Kleene star notation, AABC, AAABC, ABBC, etc.), so this
design is departing from the phenomenon.

OK, if you want to say that ringing the doorbell to get someone to open the
door won't work if you ring more than once. I don't think you want to say
that. In cases where multiple occurrences of A are not equivalent to the
sense that "A has occurred," you need to state in what way the second
occurrence changes the situation, and build that change into the model.
Obviously, the effect of the "second" A can't occur until a previous A has
occurred, so we're still talking about a related problem. We need an
indication that A has occurred at least once.

A provision for temporal delay and
possible interruption between terms of a sequence probably would as a
byproduct limit each term to one occurrence.

If that special rule, for some reason, is really what is required of the model.

Note also that A' must be off before the first occurrence of A that is to
be considered.

This potentially conflicts with the above ("one or more").

Not at all. An "occurrence" is a state transition. What happens after the
transition is not part of the definition of an occurrence. If A exists
continually, it does not continue to "occur". If you were designing a
detector for an occurrence, you would have to arrange for it to be
triggered by the transition of A from absent to present, off to on. If you
wanted this triggering to be repetitive, you'd have to get A back into the
off state before each new transition. This could be done by having it decay
as you suggested above, or by explicitly resetting it, as is the case for a
flip-flop. Or A could be an inherently transient event, like a bang on a drum.

If anyone is unclear about flip flops: A flip-flop is a device with two
kinds of input and one output. One of the inputs flips the output on, after
which it will stay on until an input of the other kind flops it off. A wall
rocker switch for a light is a flip-flop: push one end of the rocker down
and the light will turn on and stay on until you push the other end down.
So if the light is on (or the on-end of the switch is down), that is an
indication that the on-end of the switch has been pushed, even after the
push is over with. If another push of the same end occurs, that end stays
down and the light stays on, so all that the light (or the switch) can
indicate is that at least one push of the on-end has occurred. In order to
indicate each of a series of pushes, whatever subsystem notes that the
light is on should turn it off by pushing the other end of the switch. That
makes it ready to indicate another push. Or the switch could reset itself,
like the pushbutton on a hot-air hand dryer.
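
For the same readers, a minimal Python sketch of such a flip-flop (the names
are illustrative only):

# Hedged sketch of the flip-flop described above: one input turns the
# output on, the other turns it off, and repeated pushes of the same end
# change nothing.

class FlipFlop:
    def __init__(self):
        self.on = False      # the "light"

    def set(self):           # push the on-end of the rocker
        self.on = True

    def reset(self):         # push the off-end (or a timed self-reset)
        self.on = False

ff = FlipFlop()
ff.set()
ff.set()                     # a second push of the same end changes nothing
print(ff.on)                 # True: at least one push of the on-end occurred
ff.reset()                   # whoever noted the light turns it off again,
print(ff.on)                 # False: ready to indicate the next push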

Now consider the statement, "B cannot occur unless A has occurred." This
statement as it stands does not say that if A does occur, B will occur.

Yes. It only says if you're controlling B you can't do so unless you are
already controlling A (or the result or product of controlling A). I can't
drive to the store without first getting into the car. I can't replace the
swing-latch on the fence without first cutting a piece of wood of
appropriate size.

If you're controlling B you can't do so? Do you mean that if you _try_ to
control B, you will fail unless you've already controlled A? I think
there's some confusion here between states (which are persistent) and state
transitions (which are transient occurrences).

In each case, what is the "flip-flop" that indicates that the previous
event has occurred? In the first case, the event of getting into the car
is indicated as having occurred by being seated in the car, isn't it? You
can't drive until you're seated in the car. In the second case, suppose
you're interrupted with an emergency that takes you away for a couple of
hours, so when you get back to the workshop, how do you know whether you
finished the event of cutting the piece of wood before the phone rang?

I will assume that the occurrence (change of state) of B depends not only on
A' being on, but on some other event taking place: D (a control action, I
suppose). If D occurs while A' is on, B _will_ occur. If D occurs with A'
in the off state, B cannot occur.

You don't need D etc. if when I say "A occurs" I am referring to control of
A, as above. My use of the word "occur" has confused things.

Has it? What happens if you try to control B before A has occurred? You can
apply an output to B, I suppose, but under the rules, B will not occur
because A has not occurred. You can try to start the car, but you can't
because you haven't got into it yet. "Trying to control B" is the sort
of thing I meant by "D".

An unanswered question is whether these logical relationships are being
imposed on perceptions, or whether they express properties of the
environment.

Both, so long as the control loop is closed through the environment.
I see a flash and then I hear thunder. This temporal sequence is evidently
a property of the environment (although science, even physics, never proves
anything), but the perception of them as a sequence is imposed on the
perceptions of the flash and the crash. Indeed (according to hypothesis)
those event perceptions are imposed on (transition? configuration? and)
sensation perceptions, which are imposed on intensity perceptions. I
control those perceptions.

"Imposed on" was not a good expression. I was asking whether the condition
that you can't do B until A has occurred is a property of the environment,
or an arbitrary rule you have adopted (you can't have dessert until you've
eaten your vegetables).

I assumed you used the word "logical" because this is an exercise in logic
design, and not because a logic level of the hierarchy is involved. But now
you take that as an obvious step.

Either way, it's pretty clear that a logic-level control
system is needed to provide the required variables D and E in the proper
order to allow C to occur, and to perceive the states of A', B', maybe C',
D, and E.

That's quite a leap, and an unnecessary one. If "A occurs" etc. refers to
control of A etc. then there is no D etc.

OK, you're saying that if you control A, you WILL immediately be
controlling B. I don't think so. I think that you can try to control B, but
it won't be possible unless A has occurred. If you try to control B by
applying the appropriate output, you will fail unless A has occurred.
"Applying the appropriate output" is D.

There's quite a bit unspecified in the memory-based idea. How is memory
implemented? How do signals get from memory into control loops and vice
versa, and in general how are logic circuits and memory connected together?

How memory is implemented is no more essential for us to know than how
perception of configurations is implemented. We have to agree on the
phenomena; the details will just have to wait until we know more.

We had the surmise that memory provides reference signals. This ran afoul
of the origination of reference signals in higher-level error. No way has
been specified for error output to address memory yielding reference
signals that vary appropriately.

Why get hung up on that? There are many ways to convert a signal of
variable magnitude into an address that picks one discrete item from a list
of them. This is especially easy if we suppose that this mode of
control-through-memory is confined to the higher levels where all signals
are symbolic rather than analog. For that kind of control system it would
be simple to write a computer program that would convert each error signal
into a choice of a remembered lower reference condition that would, if
achieved, tend to reduce that error because it did so in the past.

But let's get the trotter back in front of the sulky. I don't insist that
reference signals come from memory. What I do insist on is that the model
be able to reproduce a perception that occurred in the past, the process
called "doing the same thing again." I insist that the model be able to
create signals standing for perceptions that have actually occurred in the
past, and to make sure those signals are sent into the perceptual pathways
where that kind of signal will be correctly understood. I insist that it be
able to create perceptions in imagination as well as by acting on the
environment. I insist that we be able to examine potential reference
signals to see whether making perceptions match them would produce the
result we want, and to do this without putting those reference signals into
active service until we want to. I insist because these seem to be
phenomena that require explanation. My memory proposal was a fairly simple
start on an explanation. Got something better?

When perceptual inputs that originate in memory are controlled, how is the
control loop closed? Not through the environment. Through the imagination
switch? That would have to be imagination switches, plural, many, down to
whatever level of detail at whatever levels of perception are evoked in the
memory and affected by control.

As many as are required to explain what we experience, no more and no fewer.

I liked your example of working with mooring your boat, but am not sure
what point you were making amid all the details.

Each of these images includes remembered perceptions on several levels, at
least intensities, sensations, configurations, transitions, and
relationships. They are not as vivid as the trees in the distance (if I let
my eyes see them, so to speak), but they are present. That is, when I see a
cleat on top of the boat, I see the color of the boat, the color of the
cleat, the edges that delineate the configurations, the configurations, my
hands stretching the loop as I slip it over the cleat, the line running
taut out through the guide to the float, and so on.

This tallies with my experience of imagination, though yours seems to
include more lower-level perceptions than mine do. Check back when you're 76.

Perhaps I am not controlling these perceptions inwardly, but am only
combining memories -- e.g. transitions such as the loop being stretched
over the cleat, the line lying loose and being pulled taut.

I think we can manipulate imagined things much as we do in reality. The
main difference is that we short-circuit the reference signal into the
perceptual channel instead of using it to tell a lower-order system to
provide the corresponding perception. Since successful control would
produce the same perception, it's as though we had instantly accomplished
the goal-state. You can pull an anchor up by hand with no effort at all,
unless you wish to imagine the effort, too, as well as its normal result.
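
A minimal sketch of that short-circuit in Python (a caricature of the
imagination connection with all the loop details left out; the names are
assumptions):

# Hedged sketch of the short-circuit described above: in "imagination"
# mode the reference signal is routed straight into the perceptual
# channel, so the perception matches the reference with no action at all.

def perceived(reference, imagining, value_from_environment):
    if imagining:
        return reference             # reference short-circuited into perception
    return value_from_environment    # perception must be produced by acting

print(perceived(reference=5.0, imagining=True, value_from_environment=0.0))
# 5.0 -- the goal-state is "instantly accomplished", effort optional
print(perceived(reference=5.0, imagining=False, value_from_environment=4.8))
# 4.8 -- real control still has to bring the environment to the reference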

Signals seem to come into control loops from memory as perceptual inputs.
Memories seem to include signals from low in the hierarchy, and it seems to
be possible to change those perceptions. These changes seem to be
controlled. When I actually stand on my boat I can't change the metallic
color of the cleat to black or red or burgundy just looking at it, but I
can remember what it looks like and then I can change that inward image of
it, its color, its shape, I can even make it move. (Disney, what have you
done to us with your dancing dinner plates?) So this seems like control
closed through something like associative memory rather than closed through
the environment.

I agree with these observations. It's not necessarily done through
associative memory; in fact I'm uncomfortable with settling on any
particular mechanisms here. We have a long way to go before guessing in
that much detail will get us anywhere.

A model of these phenomena could also simplify the model of dependence and
sequence. It could account for our ability to quickly invent and control ad
hoc relations and sequences, without invoking reorganization.

Well, that model pretty much exists already, at least in the form of a
specification for what it has to accomplish. I agree that we have the
ability to experiment in imagination, to plan and "reason" about things by
seeing what works and doesn't work in imagination. It's pretty clear that
the control processes are essentially the same as they are when we're
really acting, except that below a certain level the reference signals are
perceived directly instead of being brought into existence as real-time
perceptions by lower systems. But I don't think we're in a position to make
a working model of that yet, not a general model guaranteed to work for all
possible cases. More likely, if we had one specific simple instance to work
with, we might be able to construct one model that would work for that case
only. But a lot of assumptions would have to be built into it, and we
wouldn't be able to supply the details. Such a model might have a
higher-order output signal going into a memory module and a lower-order
reference signal coming out of it, but it wouldn't go so far as to explain
what goes on inside that module to make the right thing come out.

So where does this leave us? Sort of inconclusive.

Best,

Bill P.

[From Bruce Nevin (2003.06.13 20:54 EDT)]

Bill Powers (2003.06.08.1258 MDT)–

What you propose is a class of controlled variable to be added to the
model, or if not added, at least identified within available features of
the model.

I conceded to Rick that dependences are relationships, so there’s no new
class of variables.

What I propose is a more extensive and more active role for associative
memory in the model.

We know we have to start with simple things and then add details; just
diving into the deep end of the pool before paddling around in shallow
water for a while is dangerous.

A memory bit flipping on and off is sufficient to indicate that something
has happened, and serves as a memory of an occurrence when all you need to
know is whether it has happened.

When that is all that we need to know, then this suffices.

When we need to know more, a more elaborate model of memory is necessary.
If that model can also account for this simple requirement, then would
the above still suffice? (That is, redundantly.)

Any lasting trace can be called “memory”, but not all memory is the
equivalent of a full-blown tape recorder.

The following is not an example of this.

The retina, for example, retains the effects of a flash of light for
something like a tenth of a second, so we continue to see the light after
the photons are no longer arriving. Should we call that a “memory?” I
don’t think so; asserting membership in that abstract class tells us
nothing we don’t already know.

The point of the example is rather that not all retention of effects
should be called memory. (Although such metaphors are common in studies
of materials.)

Time-delays are involved in perceiving rates of change, but I doubt that
they share any important properties with other processes we call memory.

The basis for this doubt is not clear. In a canonical control loop, there
is only the present moment, the delta of r-p and its transforms around
the loop. Loop delay has no bearing on perception of rates of change. I
suppose it is possible that rate perception is a class of controlled
variables to be added to the model, but that hasn’t been proposed so far
as I know. I had assumed that change, and rate of change, were derived
from relationship between memory and present input, but my basis for that
assumption is weak indeed. (Does it require a perception of the age of a
memory, begging the question?) Rate of visually perceived motion could be
a derivative of effort required to maintain apparent motionlessness in
the visual field, some kind of cousin of Rick’s ball catcher.

Then there is the fact that some perceptions
… take time to form, and an equal time to change state.

Yes, in the model the time it takes for continuous control, however
time-consuming, is not a function of memory storage and retrieval.

See? You’re designing with neural currents
already.

I anticipated that someone might think that my point is to reject logic
design and so it seemed best to demonstrate understanding at least.

Note that the state of A’ indicates one or more occurrences of A.

In general, I should think a sequence ABC isn’t expansible as AA*BB*CC*
(that is, with Kleene star notation, AABC, AAABC, ABBC, etc.), so this
design is departing from the phenomenon.

OK, if you want to say that ringing the doorbell to get someone to open
the door won’t work if you ring more than once.

So when can a step be repeated? Sounds like a do-until loop in a program,
not a sequence.

But this is an inapt example. Suppose getting someone to open the door is
a step in a sequence. It is controlled by various means, starting with
ringing the doorbell. If someone sees you coming, and you see through the
window that they’re getting up and coming to the door, you don’t ring the
doorbell. If you ring the doorbell and no one comes, maybe you ring it
again, maybe you knock, maybe you peer through the window, maybe you go
around to the back door. These are not in any sequence, and the openness
of choice about them indicates that no one of them is “the”
current step in that sequence; rather, they are alternative means of
getting someone to come and talk to you. The ringing of the doorbell
might be repeated, but is iteration a sequence? “Hammer until nail
is driven and set” might be an iterative loop, interruptible by a
bent nail or a bruised thumb. But ringing the doorbell is not iterated in
a mindless loop “ring bell until door opens” except perhaps by
some small children (not all!) doing what they have been told is supposed
to work. (Setting aside bell-ringing as provocation, where the iteration
itself is controlled as such.)

Here’s the confusion of ends and means again:

I don’t think you want to say that. In cases where multiple occurrences
of A are not equivalent to the sense that “A has occurred,”

If I am typing the sequence ABCDE, multiple occurrences of C as in
ABCCCCCCCCCCCCCDE definitely constitute an error, even ABCCDE. The only
reason for iteration is that pressing the C key has some effect needed to
accomplish the current step, and the first try didn’t complete the step
by having that effect. So if the typewriter key is bent and the one for the
letter C bounces off the guides and doesn’t reach the paper through the
ribbon (remember those?), or if the keyboard is defective and a C doesn’t
appear on the screen, maybe I press the C again and again until it does.
Or maybe I interrupt the sequence to fix the mechanism. Or press the
space bar and write it in with a pen later. Different means to complete
that step and move on to the next.

you need to state in what way the second occurrence changes the
situation, and build that change into the model.

Obviously, the effect of the “second” A can’t occur until a previous A
has occurred, so we’re still talking about a related problem. We need an
indication that A has occurred at least once.

A provision for temporal delay and possible interruption between terms
of a sequence probably would as a byproduct limit each term to one
occurrence.

If that special rule, for some reason, is really what is required of the
model.

What sequence cannot be interrupted?

Now consider the statement, “B cannot occur unless A has occurred.”
This statement as it stands does not say that if A does occur, B will
occur.

Yes. It only says if you’re controlling B you can’t do so unless you are
already controlling A (or the result or product of controlling A). I
can’t drive to the store without first getting into the car. I can’t
replace the swing-latch on the fence without first cutting a piece of
wood of appropriate size.

If you’re controlling B you can’t do so? Do you mean that if you try to
control B, you will fail unless you’ve already controlled A? I think
there’s some confusion here between states (which are persistent) and
state transitions (which are transient occurrences).

I’m controlling getting enough sleep, being rather short of that lately,
but I can’t actually go to sleep right now as I have a meeting at 3:30.
So I plan and expect not to get up at 6:00 tomorrow because I don’t have
to get my daughter off to school on Saturday. If someone were to disturb
that CV with a need for me to do something early tomorrow I would
immediately resist the disturbance. That demonstrates that I am
controlling that perception, as means of getting enough sleep, even
though I can’t get enough sleep right now at this moment. There is a
dependency between my staying in bed past 6:00 and my getting enough
sleep.

In each case, what is the “flip-flop” that indicates that the previous
event has occurred? In the first case, the event of getting into the car
is indicated as having occurred by being seated in the car, isn’t it?
You can’t drive until you’re seated in the car. In the second case,
suppose you’re interrupted with an emergency that takes you away for a
couple of hours, so when you get back to the workshop, how do you know
whether you finished the event of cutting the piece of wood before the
phone rang?

I perceive the state of the piece of wood. I might not go look at it
until I had perceived the state of the gate and ‘remembered’ (leaving the
question of what that means unanswered) my control of it being fixed, the
sequence as means of control, the steps of that sequence that have been
done, and the step that was not yet done when I was interrupted. But even
in that degree of forgetfulness and reminder, I know that I hadn’t
finished cutting the piece of wood by perceiving the state of the wood,
either in memory or directly in the environment when I go look at
it.

I
will assume that the occurrence (change of state) of B depends not only
on A’ being on, but on some other event taking place: D (a control
action, I suppose). If D occurs while A’ is on, B will occur. If D
occurs with A’ in the off state, B cannot occur.

You don’t need D etc. if when I say “A occurs” I am referring to control
of A, as above. My use of the word “occur” has confused things.

Has it? What happens if you try to control B before A has occurred?

I can control waking up later than 6:00 tomorrow before I go to sleep
tonight. I can control the length of the piece of wood being 7
inches (say) before selecting the wood and putting it on the saw table.

This is of course a confusion of ends and means. The B (waking up after
6:00, the wood being 7 inches long) comes temporally after the A (going
to sleep tonight, selecting and cutting the wood) but the control
relationship between them is not sequential, it is contingent: the
subordination of means to ends. Here, different means may be substituted;
in a sequence ABCDE nothing else, such as the letter F, may be
substituted for C. ABFDE is not the sequence ABCDE. But if I find an
otherwise appropriate piece of wood that happens to be 7 inches long I
don’t have to run it past the saw blade anyway so that I can move on to
the next step.

But my objection to your introducing A’, D, and so on was that they are
not necessary. I don’t need some program-level CV in order to perceive
that the gate is not fixed (which originally occasioned my perceiving in
imagination the steps to fix it and starting to carry them out, and which
now evokes those perceptions from memory) or that the wood hasn’t yet
been cut (which reminds me “where I was” when I was
interrupted).

You can apply an output to B, I suppose, but under the rules,

Under the rules? In these examples, it’s under the environmental
conditions.

B will not occur because A has not occurred. You can try to start the
car, but you can’t because you haven’t got into it yet. “Trying to
control B” is the sort of thing I meant by “D”.

An unanswered question is whether these logical relationships are being
imposed on perceptions, or whether they express properties of the
environment.

Both, so long as the control loop is closed
through the environment.

“Imposed on” was not a good expression. I was asking whether the
condition that you can’t do B until A has occurred is a property of the
environment, or an arbitrary rule you have adopted (you can’t have
dessert until you’ve eaten your vegetables).

Socially mandated rules are arbitrary but they are part of the
environment. They are controlled as means to an end. Social ends in the
environment constituted of social relationships.

I assumed you used the word “logical” because this is an exercise in
logic design, and not because a logic level of the hierarchy is
involved. But now you take that as an obvious step.

Either way, it’s pretty clear that a logic-level control system is
needed to provide the required variables D and E in the proper order to
allow C to occur, and to perceive the states of A', B', maybe C', D, and
E.

That’s quite a leap, and an unnecessary one. If “A occurs” etc. refers
to control of A etc. then there is no D etc.

OK, you’re saying that if you control A, you WILL immediately be
controlling B. I don’t think so. I think that you can try to control B,
but it won’t be possible unless A has occurred. If you try to control B
by applying the appropriate output, you will fail unless A has occurred.

“Applying the appropriate output” is D.

I don’t follow. If I try to cut the wood before picking it up I will
fail, not because of a problem of logic, but because the wood is not
brought against the saw blade. This is an environmental contingency. Is
the example inappropriate?

We had the surmise that memory provides reference signals. This ran
afoul of the origination of reference signals in higher-level error. No
way has been specified for error output to address memory yielding
reference signals that vary appropriately.

Why get hung up on that? There are many ways to convert a signal of
variable magnitude into an address that picks one discrete item from a
list of them.

And at lower levels then converts that discrete item back into a smoothly
varying reference “45 degrees up and to the left of the mark and 1
inch away”?

This is especially easy if we suppose that this mode of
control-through-memory is confined to the higher levels where all
signals are symbolic rather than analog. For that kind of control system
it would be simple to write a computer program that would convert each
error signal into a choice of a remembered lower reference condition
that would, if achieved, tend to reduce that error because it did so in
the past.

So the proposal only works at higher levels? If there is a different
mechanism at lower levels, why not use it throughout? So the fact that
this is easier at postulated higher levels accomplishes nothing for your
argument.

But let’s get the trotter back in front of the sulky. I don’t insist
that reference signals come from memory. What I do insist on is that the
model be able to reproduce a perception that occurred in the past, the
process called “doing the same thing again.” I insist that the model be
able to create signals standing for perceptions that have actually
occurred in the past, and to make sure those signals are sent into the
perceptual pathways where that kind of signal will be correctly
understood. I insist that it be able to create perceptions in
imagination as well as by acting on the environment. I insist that we be
able to examine potential reference signals to see whether making
perceptions match them would produce the result we want, and to do this
without putting those reference signals into active service until we
want to. I insist because these seem to be phenomena that require
explanation. My memory proposal was a fairly simple start on an
explanation.

We are of course in violent agreement.

Got something better?

Is this belligerent phrase supposed to mean that I should say nothing
until I do? That the search for something possibly better should be
silent and private? Please be more explicit.

When perceptual inputs that originate in memory are controlled, how is
the control loop closed? Not through the environment. Through the
imagination switch? That would have to be imagination switches, plural,
many, down to whatever level of detail at whatever levels of perception
are evoked in the memory and affected by control.

As many as are required to explain what we experience, no more and no
fewer.

I liked your example of working with mooring your boat, but am not sure
what point you were making amid all the details.

It’s too easy to diddle facile abstractions without highly specific
examples on which to test them. So this was a fairly specific example.
And I couldn’t see how the imagination switch could account for it.

One question was where do remembered perceptions get into the loop. I was
starting from the notion that they can’t get in as reference perceptions
by interposing memory between error output and reference input because
the reference input is continuously variable. (As in: “perceptions
are carried by neural signals that indicate only how much of a given
type of perception is present: how much intensity, how good a fit to a
given form, how much like a given relationship, and so on.” (Bill
Powers 2003.06.13.0750 MDT).) If we say that the variability itself - the
rate and direction of change, and the rate and direction of acceleration,
etc. - is stored in memory then we have stored plans for behavioral
outputs.

They seem to get into the loop as though perceived in the environment. Of
course this could be an effect of references being looped back to input.
But as I say I don’t see how they can come in as reference input. So
maybe indeed they do come in as perceptual input. Then we control them
with our usual references. (Indeed, we must, in order to try things out
in imagination.)

In the standard view, input signals can be references looped back to
input by the imagination switch. If the imagination switch is closed
“in as many places as necessary” (a cute evasion), then
references could come in as a consequence of controlling a few perceptual
inputs that come from memory. It’s the same commodious vicus of
recirculation (sorry - obscure literary reference).

From memory come perceptions of the eye in the end of the line and the
cleat on the bow deck of the boat. Suppose these enter the hierarchy from
memory as perceptual input signals going up the hierarchy. They go up to
a “loop over cleat” relationship controller, which in turn
sends signals to a system for which that relationship perception is means
of controlling the boat being fast to the mooring. That system starts
controlling the “loop-over-cleat” relationship perception. But
now the input from memory has the loop separate from the cleat. (I
witness this happening. I don’t know what occasions the change.) To the
“loop-over-cleat” system, and consequently to the “make
the boat fast” system, these inputs are disturbed relative to the
reference. Perhaps the signal is an “amount” of that
loop-over-cleat relationship, and the amount is lower than the reference,
so the relationship controller outputs an error signal. This is mapped to
a reference value for the location of the loop and a reference value for
the location of the cleat.

Why do those perceptions arise from memory? I think maybe the higher
systems start controlling, and in the absence of sensory input they evoke
input from memory.

If this is so, then remembering and imagining are really
indistinguishable. In either case, perceptions arise from memory because
a control system is controlling in the absence of sensory input. If the
process of controlling seems to us to change the perceptions, we call it
imagining, and if it doesn’t seem to we call it memory.

I agree with these observations. It’s not necessarily done through
associative memory; in fact I’m uncomfortable with settling on any
particular mechanisms here. We have a long way to go before guessing in
that much detail will get us anywhere.

Saying that these signals come from memory really says nothing at all
about mechanism. It only says that they don’t come through sensors from
the environment.

So where does this leave us? Sort of
inconclusive.

If memory is local, as your RNA guess suggests, then the evoked input
signal starts out vague, completely unspecified at lower levels. That
fits my subjective impression. By persisting with control - and that
requires a degree of discipline in continuing to control the same
signal, not skipping around - we enable reference signals to propagate
down to lower systems. As we control the inputs of more systems at more
levels, the subjective impression is of a clearer, more specific sensory
experience.

Here of course we touch on the basketball players improving their foul
shots with no ball in their hands, Maxwell Maltz & Co. and the
varieties of ‘creative visualization’ discipline generally.

Here, too, we may find that we have a plausible account of hypnotic
suggestion and kindred phenomena.

For now, this reply has been delayed long enough, or perhaps too long,
and the window for making it is closing again.

    /Bruce Nevin


[From Bill Powers (2003.06.13.2037 MDT)]

Bruce Nevin (2003.06.13 20:54 EDT)

Time-delays are
involved in perceiving rates of change, but I doubt that they share any
important properties with other processes we call memory.

The basis for this doubt is not clear. In a canonical control loop, there
is only the present moment, the delta of r-p and its transforms around the
loop. Loop delay has no bearing on perception of rates of change.

No, but a lag in the perceptual function can provide a simple means of
detecting rate of change. Let's see if I can explain.

Suppose we have a signal entering a subtractor via two paths: the additive
one is direct and the subtractive one enters after passing through a lag
function:

                                   |\
           -----LAG--------------->|- \
           |                       |    \
     input_|                       |     \______________ rate of change
           |                       |     /
           |                       |    /
           ----------------------->|+ /
                                   |/

Say that the lag imposes a pure delay of 0.1 second relative to the other
input. This means that if the input signal increases at the rate of 1 unit
per second, the lagged input will be 0.1 unit behind the unlagged input, so
the difference between them will be constant at 0.1 unit. If the amplifier
(the triangle) has a gain of 10, the output will be constant at 1 unit as
long as the input is increasing at 1 unit per second.

If the input now increases at 5 units per second, the output will change to
a constant 5 units; if the input changes at -5 units per second, the output
will be constant at -5 units. In general, the output indicates the rate of
change of the input with a resolution of 0.1 second.
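
To make that concrete, here is a rough simulation sketch of the same
lag-and-subtract arrangement. The step size, the way the delay line is
started up, and the ramp test input are my own illustrative choices; the
only substance is that the output settles at gain * delay * (rate of
change), which with the values above is 1.0 times the rate.

# Rough sketch of the lag-and-subtract rate detector described above.
# Assumes a pure transport delay (0.1 s) and an amplifier gain of 10;
# step size and start-up handling are illustrative choices, not part of
# any published model.

DT = 0.01          # simulation step, seconds
DELAY = 0.1        # lag imposed on the subtractive path, seconds
GAIN = 10.0        # amplifier gain

def rate_detector(inputs, dt=DT, delay=DELAY, gain=GAIN):
    """Return gain * (x(t) - x(t - delay)) for each sample.

    For a ramp rising at r units/s, x(t) - x(t - delay) settles at
    r * delay, so the output settles at gain * delay * r -- equal to
    the rate of change itself with the values above.
    """
    n_delay = int(round(delay / dt))
    buffer = [inputs[0]] * n_delay   # crude start-up: assume a constant history
    out = []
    for x in inputs:
        lagged = buffer.pop(0)       # value from one delay-time ago
        buffer.append(x)
        out.append(gain * (x - lagged))
    return out

if __name__ == "__main__":
    # Input ramps up at 1 unit/s for 1 s, then at 5 units/s.
    ramp = [t if t < 1.0 else 1.0 + (t - 1.0) * 5.0
            for t in (i * DT for i in range(300))]
    rates = rate_detector(ramp)
    print(rates[50], rates[250])     # ~1.0 during the first ramp, ~5.0 later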

I suppose it is possible that rate perception is a class of controlled
variables to be added to the model, but that hasn't been proposed so far
as I know.

Yes it has. It is the fourth level of perception, called "transitions." I
can see where that might become confused with events; perhaps this level
should be called "time derivatives". I do mention that in B:CP.
Neurologically, we should find elements like those in the diagram above at
this level.

I had assumed that change, and rate of change, were derived from
relationship between memory and present input, but my basis for that
assumption is weak indeed.

See above; you are right. But it's not really memory as we use the term, is it?

So when can a step be repeated? Sounds like a do-until loop in a
program, not a sequence.

I think we are talking about events, not sequences. Ringing the doorbell is
an event. An event perception is a signal that a specific event has
occurred. As I reason, this requires that the event perception arise from a
storage device like a flip-flop, so when the occurrence is over
(occurrences are subjectively instantaneous), the perception does not
instantly disappear. Repeating the event right away does not cause a new
event signal to appear; the old one is still there. So at the sequence
level, only a single event signal is received. The question of how quickly
an event signal must fade to be seen as two events when the event is
repeated is an empirical one, and it may depend on the type of event (a
bang on a drum as opposed to moonrise). I think that repetition rate may be
a dimension of event perception, but that's not clear to me.

This is uncomfortably like a digital process; perhaps it doesn't belong
below the relationship level.
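
Purely to illustrate what such a flip-flop-like event signal might look
like, here is a sketch in which the signal latches when the event occurs
and then fades with an assumed time constant. The fade rate is exactly
the empirical question left open above, so the number used here is
arbitrary, and the whole arrangement is an assumption of mine rather
than anything specified in the model.

import math

# Illustrative sketch only: an event signal that latches when the event
# occurs and then fades with time constant TAU. While the old signal is
# still high, an immediately repeated event does not produce a separate
# new signal, which is the behavior suggested above. TAU is assumed.

DT = 0.01    # seconds per step
TAU = 0.5    # decay time constant, seconds (assumed)

class EventLatch:
    def __init__(self, tau=TAU, dt=DT):
        self.level = 0.0
        self.decay = math.exp(-dt / tau)

    def step(self, event_occurring):
        if event_occurring:
            self.level = 1.0          # latch on the occurrence
        else:
            self.level *= self.decay  # fade toward zero afterward
        return self.level

latch = EventLatch()
trace = []
for i in range(200):                  # 2 seconds
    # Two doorbell presses 0.3 s apart, each lasting 0.05 s.
    pressing = (0 <= i < 5) or (30 <= i < 35)
    trace.append(latch.step(pressing))

# The second press arrives while the signal is still at about 0.6, so
# no separate new event signal appears at the sequence level.
print(round(trace[29], 2), round(trace[34], 2))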

But this is an inapt example. Suppose getting someone to open the door is
a step in a sequence. It is controlled by various means, starting with
ringing the doorbell. If someone sees you coming, and you see through the
window that they're getting up and coming to the door, you don't ring the
doorbell. If you ring the doorbell and no one comes, maybe you ring it
again, maybe you knock, maybe you peer through the window, maybe you go
around to the back door. These are not in any sequence, and the openness
of choice about them indicates that no one of them is "the" current step
in that sequence; rather, they are alternative means of getting someone to
come and talk to you.

Maybe they are alternate sequences, and it is the logic level that is
trying first one, then another. If this sequence has no effect, abort it
and try that one ("abort" = "reset").

I'm controlling getting enough sleep, being rather short of that lately,
but I can't actually go to sleep right now as I have a meeting at 3:30.

So you're not controlling getting enough sleep. You're trying to, perhaps,
but it's not working. That is the circumstance in which I would say
"controlling for getting enough sleep," meaning you still want to get
enough sleep but you're not succeeding just now. The reference signal is
there and the control system is operating, but its action is not bringing,
or has not yet brought, the perception to the reference level. "Controlling
for" is like "aiming for" or "trying for." It takes a description of a
reference level as an object.

...when you get back to the workshop, how do you know whether you
finished the event of cutting the piece of wood before the phone rang?

I perceive the state of the piece of wood.

That's what I had in mind. There has to be something to indicate that the
event has occurred, even when it's not occurring when you look.

I can control waking up later than 6:00 tomorrow before I go to sleep
tonight. I can control the length of the piece of wood being 7 inches
(say) before selecting the wood and putting it on the saw table.

No, you can't. Controlling means bringing a perception to a reference
level. What you're describing is setting a reference condition, not
bringing the perception to the required level. When you set the alarm for
7:00 AM, you're controlling for getting more sleep, but you haven't got it
yet. You'll probably wake up at 6:00 anyway, so the control process will
not have worked.
When you decide that you need a piece of wood 7 inches long, you've
prepared to set a reference length when you get around to measuring and
cutting -- particularly cutting -- the piece of wood. But you haven't got
the 7-inch piece of wood until you cut it. You may not get it if there's no
wood of the right kind.

This is of course a confusion of ends and means. The B (waking up after
6:00, the wood being 7 inches long) comes temporally after the A (going to
sleep tonight, selecting and cutting the wood) but the control
relationship between them is not sequential, it is contingent: the
subordination of means to ends.

Sophistry, otherwise known as controlling by fair means or foul for being
right. Control means actually making the perception match the reference
signal. You don't have control until that happens.

But my objection to your introducing A', D, and so on was that they are
not necessary. I don't need some program-level CV in order to perceive
that the gate is not fixed (which originally occasioned my perceiving in
imagination the steps to fix it and starting to carry them out, and which
now evokes those perceptions from memory) or that the wood hasn't yet been
cut (which reminds me "where I was" when I was interrupted).

That's why I brought up the possibility that some state of the environment
might serve as the sign that an event has occurred. When there is no such
sign, one can only try to remember whether the event has happened yet; the
sign has to be internal.

You can
apply an output to B, I suppose, but under the rules,

Under the rules? In these examples, it's under the environmental conditions.

Yeah, but you're the one who said the environment works that way. It's your
rules I was trying to understand.

"Imposed on" was not a good expression. I was asking whether the condition
that you can't do B until A has occurred is a property of the environment,
or an arbitrary rule you have adopted (you can't have dessert until you've
eaten your vegetables).

Socially mandated rules are arbitrary but they are part of the
environment. They are controlled as means to an end. Social ends in the
environment constituted of social relationships.

I wasn't talking about social rules; just rules you decided to operate by,
wherever they came from. If you can abide by them or not, as you please,
they don't have the force of natural (or social) laws: they can be changed
arbitrarily, while the rules governing natural phenomena can't.

I don't follow. If I try to cut the wood before picking it up I will fail,
not because of a problem of logic, but because the wood is not brought
against the saw blade. This is an environmental contingency. Is the
example inappropriate?

I don't know, I'm just trying to find out what the example _is_. If you're
saying that it is physically impossible for B to occur until A has
occurred, that puts the cause in the environment. But if you're saying that
you refuse to do B until A has occurred, that's quite a different matter;
you could do B even if A didn't occur, if you wanted to.

Why get hung up on that? There are many ways to convert a signal of
variable magnitude into an address that picks one discrete item from a
list of them.

And at lower levels then converts that discrete item back into a smoothly
varying reference "45 degrees up and to the left of the mark and 1 inch away"?

No, into smoothly varying values of X and Y position (or R and theta, it
doesn't matter). The phrase you quote is at the levels that employ symbols,
which the tracking systems don't use. Before that phrase can become a
reference signal for tracking, it must be converted into two signals with
the correct values. Remember, I am saying that the involvement of memory in
setting reference signals probably does not apply at the lower levels.

So the proposal only works at higher levels? If there is a different
mechanism at lower levels, why not use it throughout?

Because at the higher levels the perceptions are discrete, not continuous,
and far too slow to do something like tracking a moving target.

But let's get the trotter back in front of the sulky. I don't insist that
reference signals come from memory. What I do insist on is that the model
be able to reproduce a perception that occurred in the past, the process
called "doing the same thing again." I insist that the model be able to
create signals standing for perceptions that have actually occurred in the
past, and to make sure those signals are sent into the perceptual pathways
where that kind of signal will be correctly understood. I insist that it be
able to create perceptions in imagination as well as by acting on the
environment. I insist that we be able to examine potential reference
signals to see whether making perceptions match them would produce the
result we want, and to do this without putting those reference signals into
active service until we want to. I insist because these seem to be
phenomena that require explanation. My memory proposal was a fairly simple
start on an explanation.

We are of course in violent agreement.

Just so.

Got something better?

Is this belligerent phrase supposed to mean that I should say nothing until
I do? That the search for something possibly better should be silent and
private? Please be more explicit.

I mean something with as many or more details worked out. It's easy to make
proposals that handle one problem at a time.

I liked your example of working with mooring your boat, but am not sure
what point you were making amid all the details.

It's too easy to diddle facile abstractions without highly specific
examples on which to test them. So this was a fairly specific example. And
I couldn't see how the imagination switch could account for it.

What the imagination switch(es) account(s) for are the following phenomena:

1. We seem to have to choose between imagining a given perception and
seeing the same perception in real time. We can't do both at once, so this
requires an arrangement that makes these two modes of perceiving mutually
exclusive. That is what the switch does on the perceptual side.

2. When we imagine a perception that is a reference level for a lower
system, there seems to be a choice between perceiving the error/reference
signal, and using it as the input to lower-level comparators. We can't, or
don't, do both at once. Occasionally, while we're imagining the reference
signal, some of it seems to "leak through" to the lower level comparators,
making us twitch as we imagine the actions. So the switches have a few
nanoamps of leakage current. This is the switch we throw when we "put a
reference signal into effect."

If you found these phenomena in the experiences you were describing, I
didn't notice your mention of them.

Notice that these considerations have nothing to do with memory. Even a
continuously variable reference signal can be looped back into a perceptual
channel, where it will appear as if lower systems had brought a perceptual
signal to that same state. If memory were involved, it would have to be
able to produce a series of closely-spaced values of the reference signal,
to approximate continuous variation. But imagining can work even without memory.

One question was where do remembered perceptions get into the loop. I was
starting from the notion that they can't get in as reference perceptions
by interposing memory between error output and reference input because the
reference input is continuously variable.

That's no problem if you'll allow some form of digital to analog converter.
If the selected memories are digital and the reference signal that results
is analog, then some such converter is required. The circuitry isn't the
problem.
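
A toy sketch of the sort of thing I mean, with everything in it invented
for illustration: a variable error magnitude picks one remembered
reference condition out of a short list, and a first-order smoothing
stage stands in for the digital-to-analog step that turns the discrete
choice into a continuously varying reference signal. The stored values,
the scaling, and the smoothing constant are all assumptions.

# Toy sketch of the conversion discussed above. The stored items, the
# scaling of error onto an index, and the smoothing constant are all
# invented for illustration, not taken from any worked-out model.

REMEMBERED_REFERENCES = [0.0, 2.0, 5.0, 9.0]   # assumed stored items

def select_reference(error_magnitude, max_error=10.0):
    """Map an error magnitude onto an index into the remembered list."""
    frac = min(max(error_magnitude / max_error, 0.0), 1.0)
    index = int(frac * (len(REMEMBERED_REFERENCES) - 1) + 0.5)
    return REMEMBERED_REFERENCES[index]

class DAC:
    """First-order smoothing: discrete selections in, analog-like reference out."""
    def __init__(self, smoothing=0.1):
        self.value = 0.0
        self.smoothing = smoothing

    def step(self, target):
        self.value += self.smoothing * (target - self.value)
        return self.value

dac = DAC()
for error in [8.0, 8.0, 3.0, 3.0, 3.0, 0.5]:
    chosen = select_reference(error)   # discrete, symbol-like choice
    analog_ref = dac.step(chosen)      # smoothed, lower-level reference value
    print(error, chosen, round(analog_ref, 2))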

(As in: "perceptions are carried by neural signals that indicate only _how
much_ of a given type of perception is present: how much intensity, how
good a fit to a given form, how much like a given relationship, and so
on." (Bill Powers 2003.06.13.0750 MDT).) If we say that the variability
itself - the rate and direction of change, and the rate and direction of
acceleration, etc. - is stored in memory then we have stored plans for
behavioral outputs.

Yes, but that's pretty unlikely. To plan outputs you not only have to have
rates and directions of change, but the _right_ rates and directions of
change to make the current environment, including disturbances, behave as
you wish. That depends on being able to analyse the properties of your own
lower systems and muscles, as well as the physics of the part of the
environment being affected, complete with dynamics and kinematics.

They seem to get into the loop as though perceived in the environment.

I don't see the problem. They get into the loop as perceptions. To add that
they are perceived "in the environment" brings in location, relationship,
and so on. Higher systems may not know the difference between an imagined
perception and a real one, and thus behave as if the perception came from
the environment, but of course control of the environment will not work
that way. Controlling an imagined signal has no effect on systems lower
than the place where the loop is closed.

Of course this could be an effect of references being looped back to
input. But as I say I don't see how they can come in as reference input.
So maybe indeed they do come in as perceptual input. Then we control them
with our usual references. (Indeed, we must, in order to try things out
in imagination.)

Yes. But understand that when the reference signal is looped back into the
perceptual channel, for that control system it's as though the lower
systems had acted instantly and perfectly to make their perceptions (copies
of which reach the higher systems) match the given reference signal. The
lower systems never receive the reference signal, but the higher system
doesn't know that.
I realize that there are some details about this that haven't been worked
out yet. Does the output signal substitute for the perceptual input before
or after the perceptual input function? I've been assuming _after_, so only
a single signal is required. But the output signal gets fanned out to be
the reference signals for many lower systems. If those signals were looped
back to the input of the perceptual input function, we'd need as many
switches as there are lower systems in the loop. So it looks much simpler
to suppose that the single output signal, before being fanned out, is
switched to substitute for the single signal that normally comes from the
perceptual input function.
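
Here is a minimal sketch of that single-switch arrangement, with the
control law, the gain, and all the names chosen only for illustration.
In imagination mode the unit's output, before any fan-out, is looped
back in place of the signal that would normally come from the perceptual
input function, so the higher loop closes without anything being sent to
lower systems.

# Minimal sketch, under assumptions of my own, of the single-switch
# arrangement described above. Names, the integrating output function,
# and the constants are illustrative, not a specification of the model.

class ControlUnit:
    def __init__(self, gain=2.0, slowing=0.1):
        self.gain = gain
        self.slowing = slowing
        self.output = 0.0          # would be fanned out as lower-level references
        self.imagining = False     # state of the single imagination switch

    def step(self, reference, sensed_perception):
        # With the switch thrown, the output (before fan-out) substitutes
        # for the signal from the perceptual input function.
        perception = self.output if self.imagining else sensed_perception
        error = reference - perception
        self.output += self.slowing * self.gain * error   # slowed integrator
        return perception

unit = ControlUnit()
unit.imagining = True                  # throw the imagination switch
p = 0.0
for _ in range(50):
    p = unit.step(reference=3.0, sensed_perception=0.0)
print(round(p, 2))                     # ~3.0: the imagined perception comes to
                                       # match the reference; nothing goes below

# With the switch open, the same output would instead fan out as
# reference signals for lower systems, and the perception would have to
# be brought to the reference by acting on the environment.
unit.imagining = False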

In the standard view, input signals can be references looped back to input
by the imagination switch. If the imagination switch is closed "in as many
places as necessary" (a cute evasion), then references could come in as a
consequence of controlling a few perceptual inputs that come from memory.
It's the same commodious vicus of recirculation (sorry - obscure literary
reference).

See above paragraph. "As many places as necessary" may be clearer now.
There would have to be a switch for each input to the higher perceptual
input function. I am not fond of that version.

From memory come perceptions of the eye in the end of the line and the
cleat on the bow deck of the boat. Suppose these enter the hierarchy from
memory as perceptual input signals going up the hierarchy. They go up to
a "loop over cleat" relationship controller, which in turn sends signals
to a system for which that relationship perception is means of
controlling the boat being fast to the mooring. That system starts
controlling the "loop-over-cleat" relationship perception. But now the
input from memory has the loop separate from the cleat. (I witness this
happening. I don't know what occasions the change.)

While you're imagining the line and the cleat in a certain desired
relationship, are you also perceiving the actual relationship between the
same line and the same cleat? Or at a lower level, while you're imagining
the line, are you simultaneously perceiving the real line? Or do the two
modes of perception seem mutually exclusive? This is what the switch on the
perceptual side was about. The other switch is meant to account for the
apparent fact that while you're imagining how the line is to be put around
the cleat, you aren't simultaneously trying to put the real line around the
real cleat that way. If it is a fact -- is it? You didn't say.

Why do those perceptions arise from memory? I think maybe the higher
systems start controlling, and in the absence of sensory input they evoke
input from memory.

Yes, I think that happens. But it also happens if the lower systems get
their reference signals indirectly, through memory, rather than through a
direct connection to the higher order output signals. That's a separate
question.

If this is so, then remembering and imagining are really indistinguishable.

Yes, they can be. I think imagining is distinguished from remembering
mainly on the basis of how familiar the result seems, how it ties in with
all our other perceptions. Of course when we deliberately try to invent
something new, we assume we are imagining, and most often we are, except
when someone points out that the result is definitely something we
experienced before. I can imagine a horse with a pig's tail, and I don't
recall having seen that combination anywhere, but who knows? Obviously the
horse and the pig's tail come from previous experience. But together?
Probably not...

In either case, perceptions arise from memory because a control system
is controlling in the absence of sensory input. If the process of
controlling seems to us to change the perceptions, we call it imagining,
and if it doesn't seem to we call it memory.

Good observation. If we know we're synthesizing or manipulating the
imagined material, it's clearly not a memory, though we may recognize the
parts as memories. But that's a logical conclusion; if we don't happen to
have developed that particular train of thought, we might think we're
remembering even as we actively create the memory. I don't think this has
the force of a basic principle.

If memory is local, as your RNA guess suggests, then the evoked input
signal starts out vague, completely unspecified at lower levels. That fits
my subjective impression. By persisting with control - and that requires a
degree of discipline in continuing to control the same signal, not
skipping around - we enable reference signals to propagate down to lower
systems. As we control the inputs of more systems at more levels, the
subjective impression is of a clearer, more specific sensory experience.

Something like that. This gets into language and concepts that are pretty
hard to pin down. What is an "unspecified" perceptual signal? I agree about
the sense of things becoming gradually clearer and more specific, but this
seems more like what I think of as reorganization. Not that that's very
clear, either.

Best,

Bill P.