[From Bill Powers (930807.1715)]

Rick Marken (930807.1100) --
Sounds to me as if the counter-roll to which the accountants are
supposed to match their figures was a precursor of modern
methods. I suppose that then they baked their rolls. Now they
cook the books.
-------------------------------
RE: D(1) AND D(2)
Allan can chime in on this if I'm wrong. I think that the
approach we take is usually centered in the controlling system:
that is, when we talk about what the control system can "know,"
we mean pretty much what is contained in the perceptual signal.
Because the control system has no way of perceiving its own error
signal or output signal, and certainly no way of perceiving the
forms of its own functions, we see the totality of the world of
which the control system knows as being contained in the
perceptual signal. So from our point of view, the only
information that exists for the control system IS the perceptual
signal. And of course we use "information" in an informal sense,
meaning an analog representation and not a formal mathematical
quantity.
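In loop terms, here is a minimal sketch (Python, with names and numbers
that are arbitrary and purely illustrative) of what I mean: p is the one
signal in the system that stands for the world; e and o are there, but
nothing in the system perceives them.

    def control_step(qi, r, o, gain=20.0, slowing=0.05):
        """One pass through a bare-bones control loop (illustrative only)."""
        p = qi                            # perceptual signal: all the system "knows"
        e = r - p                         # error signal -- computed, never perceived
        o = o + slowing * (gain * e - o)  # output function (leaky integrator)
        return p, o                       # p is the only world-representing signal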
I think Allan takes a more exteriorized view, just as we do when
working out a model of a control system. The question Allan seems
to be asking isn't whether the control system knows of the other
signals and functions, but whether the other signals and
functions can be used by an observer/analyst to balance the
informational books. The observer/analyst is presumably
omniscient and precognitive; if there are disturbing causes in
the environment now, or if they will occur in the future, the
observer/analyst knows all about them -- what they are, what they
will be, and what functions connect them to the CEV. The o/a also
knows the forms of all functions in the control systems. So this
really leaves just the question of analyzing a system and
environment in which all components and signals are fully known.
I've asked this question before, but it sort of got lost in the
details of other arguments. Is there supposed to be some sort of
conservation-of-Information law? If, for example, the output of a
control system turns out to contain a great deal of Information
about the effective causes of fluctuations in the CEV, is there a
rule that says this Information must have come from some place
earlier in the system, and ultimately from the CEV? This kind of
law seems to be strongly implied in some comments, but nobody has
said explicitly that it exists or has shown why it must exist.
Does Information have to "come from" somewhere? Is there a rule
against its spontaneous appearance or its loss? If there is a
power supply for a control system that is drawn upon as a source
of negative entropy, can it restore Information in the output
function that was lost earlier in the process?
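For what it's worth, here is the sort of working model I have in mind
(Python with numpy; every number in it is arbitrary): one loop opposing a
smoothed random disturbance. The output comes out very nearly a mirror
image of the disturbance while the perceptual signal stays pinned near the
reference, which is exactly what raises the question of where that
"Information about" the disturbance is supposed to have come from.

    import numpy as np

    rng = np.random.default_rng(0)

    steps = 5000
    gain = 20.0       # output gain (arbitrary)
    slowing = 0.05    # output slowing factor (arbitrary)
    r = 0.0           # reference signal

    # Slowly varying disturbance acting on the CEV: smoothed random noise.
    d = 10.0 * np.convolve(rng.normal(size=steps), np.ones(200) / 200, mode="same")

    p = np.zeros(steps)
    o = np.zeros(steps)
    qi = 0.0          # controlled environmental variable (CEV)

    for t in range(1, steps):
        p[t] = qi                                          # input function
        e = r - p[t]                                       # comparator
        o[t] = o[t - 1] + slowing * (gain * e - o[t - 1])  # output function
        qi = o[t] + d[t]                                   # environment link

    print("corr(o, d) =", np.corrcoef(o, d)[0, 1])   # close to -1: o mirrors d
    print("rms of d   =", d.std())
    print("rms of p-r =", (p - r).std())             # much smaller: CEV stabilized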
Also, I'm still not sure how the concept of "information about"
something fits into information theory.
I don't really want to reopen this whole argument; better that
the parties involved pursue their own areas of expertise until
they have something to show. Our best point of contact is through
working models.
--------------------------------------------------------------
Best,
Bill P.