Re: assumptions about the situation required to use a closed, single-valued transformation model

[Bruce Nevin (2003.04.10 23:05 EDT)]

… There is solid research only for the first type, some strongly suggestive results for the second type, and only the beginning of discussion of the third type.

Ahh - ok - I understand … this means the systems you are thinking about are mainly internally controlled.

I don’t see that this follows from the words you quoted, but no matter.
Rather than being “internally controlled”, they control their
input signals according to internal values for those signals and by means
of their output actions through their environment. Those output actions,
in combination with unpredicted environmental influences, determine the
inputs to the sensors of the system, which generate the input signals. It
is not the case that they (that is, their behavioral outputs) are
controlled, either internally or externally. Nor is it the case that
their reference signals (set points) are controlled, either internally or
externally - that is, there is no agent apart from the working of the
system itself that sets the values of the reference signals. Importantly:
perceptual control theory is concerned primarily with modeling living
organisms - living control systems.
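
If it helps to make that loop concrete, here is a minimal numerical sketch (in Python) of a single negative feedback control loop of the sort PCT models. The particular numbers, the leaky-integrator output function, and the step disturbance are my illustrative assumptions, not anything canonical:

# Minimal sketch of one perceptual control loop (illustrative parameters only).
# The system controls its perception toward its reference by varying its output;
# the output itself is not controlled, and the disturbance is never sensed directly.

def simulate(reference=10.0, gain=50.0, slowing=0.01, steps=300):
    output = 0.0
    perception = 0.0
    for t in range(steps):
        disturbance = 3.0 if t > 150 else 0.0      # unpredicted environmental influence
        environment = output + disturbance          # environment combines action and disturbance
        perception = environment                    # input function: a direct reading here
        error = reference - perception
        output += slowing * (gain * error - output) # leaky-integrator output function
    return perception, output

p, o = simulate()
print(f"perception = {p:.2f} (reference 10.0), output = {o:.2f}")

Note that it is the perception, not the action, that ends up near the reference: the output settles at whatever value approximately cancels the effect of the disturbance.
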

My question for this list is the following: I am looking for a publication that makes clear which assumptions about the situation must be made so that it is worthwhile, or even correct, to use a transformation model for this situation that is single-valued and closed.

I don’t know what publication will answer your question about the Ashby-type model that concerns you. (Not quite “centuries” ago, but I agree it is ancient history in the universe of discourse about cybernetics.)

If my impression of your model is right, the relation between Ashby’s transformation model and the negative feedback control system is that Ashby’s model is interesting only for the part of your model where the system tries to correct the error. The operand would be the starting value, which has an error. This error should be corrected through the transformation from the starting value (with error) to the end value (without error).

In most cases, a reference value at level n of the hierarchy is changed by the outputs of systems at level n+1 of the hierarchy. I guess you are talking about the exceptional cases in which reorganization is involved. See Chapter 14 of Behavior: The Control of Perception. This and other publications are listed at http://www.ed.uiuc.edu/csg/.
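
As a rough illustration of that ordinary case (my own sketch, not something from the book), the output of a level n+1 system can simply be wired in as the reference signal of a level n system:

# Hypothetical two-level sketch: the higher loop's output becomes the lower loop's
# reference signal; only the lowest level acts on the environment.

class Loop:
    def __init__(self, gain, slowing):
        self.gain, self.slowing, self.output = gain, slowing, 0.0

    def step(self, reference, perception):
        error = reference - perception
        self.output += self.slowing * (self.gain * error - self.output)
        return self.output

high = Loop(gain=100.0, slowing=0.02)   # level n+1
low = Loop(gain=50.0, slowing=0.01)     # level n
env = 0.0
top_reference = 25.0                     # set by the system itself, not by an outside agent

for _ in range(500):
    high_perception = 0.5 * env          # higher-level perception: some function of lower input
    low_reference = high.step(top_reference, high_perception)  # n+1 output -> reference for n
    env = low.step(low_reference, env)                         # lowest level acts on environment

print(f"higher-level perception = {0.5 * env:.2f} (reference {top_reference})")

The higher system never acts on the environment directly; it only varies the reference that the lower system controls its own perception to match.
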

For me there seem to be two ways to perform this transformation from starting value to end value. Either it is clear how to perform a specific transformation, in which case the system uses this “trivial” input-output intervention to reach the intended end value.

If it is “clear”, then it is clear to some higher-order system(s), which thereupon change the lower-level reference value; this is the ordinary case described above.

Or it is not clear, in which case the system has to work in small steps toward the intended end value.

This is the exceptional case involving reorganization, which indeed does
proceed by small increments.

My question about this transformation is the following: how can a system know whether it should use a “trivial” way or not when it assesses the situation? What criteria can a system use to know whether a “trivial” transformation will work or not?

The question is, how does the system know anything, and what sort of thing does it know? If it is “trivial”, there is an already existing system (B) for controlling a relationship, sequence, or program; perceptual inputs, including the input that is producing error in some system (A), match the input conditions for (B) to start controlling; and at the conclusion of (B)’s controlling there is no longer any error in (A). Subjectively, this is called problem solving. If problem solving doesn’t solve it, and the error persists, and it matters to you (a function of loop gain), then in time reorganization starts changing values, weights, connections, etc., until something new does reduce the error. This is what Bateson called reaching into the random (in connection with double bind theory, I think, or maybe the deutero-learning stuff). It can also be thought of as problem solving, and often is, but of a more arduous and genuinely mysterious sort, the kind in which people speak of incubation and inspiration and such.
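
For what it’s worth, the “reaching into the random” part can be sketched in a few lines. The scheme below (all names and numbers are mine, purely for illustration) keeps making small changes to one parameter as long as error is shrinking, and picks a new random direction whenever it is not, which is roughly the E. coli style of reorganization discussed in the PCT literature:

import random

# Illustrative reorganization by small steps: keep nudging a parameter in the current
# direction while error decreases; when error grows, "tumble" to a new random direction.

def reorganize(intrinsic_error, param=0.0, step=0.05, trials=5000):
    direction = random.choice([-1.0, 1.0])
    best = intrinsic_error(param)
    for _ in range(trials):
        candidate = param + direction * step
        error = intrinsic_error(candidate)
        if error < best:                      # error shrinking: keep changing in this direction
            param, best = candidate, error
        else:                                 # error grew: pick a new random direction
            direction = random.uniform(-1.0, 1.0)
        if best < 1e-3:
            break
    return param, best

# Toy criterion: error is smallest when the parameter is near 2.7 (an arbitrary target).
param, err = reorganize(lambda p: abs(p - 2.7))
print(f"parameter = {param:.2f}, remaining error = {err:.4f}")

Nothing in this loop knows in advance where the good value is; it only senses whether error is getting smaller, which is why the process has to proceed by small increments.
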

Do get acquainted with perceptual control theory if this interests you,
and get back to us when you have.

    /Bruce Nevin

···

At 05:19 AM 4/10/2003, Mag. Roland Ernst wrote: