[Paul George 940720 17:00]
Having read through the first 8 chapters, I begin to see the source of the
misunderstanding of my posts, as most discussion seems to focus on the first
three levels (orders) of the CNS. PCT does agree with my mental model; there
were just some scope and terminology issues. All in all, a great piece of work.
I guess its lack of acceptance is due to perceptual filtering, or more likely
that 'other schools' never actually _read_ (with understanding) the book. I
found Bill Powers' post on the e. coli paper review committee revealing as to
the nature of 'peer review'. However, you should be careful that you don't
follow the same pattern in judging others' work. It is important to evaluate a
piece of work using the _author's_ frame of reference, not your own.
Translation is your job when you are not using terminology in a 'standard' way
or when you are using 'non-standard' concepts.
May I suggest that the 'FAQ'/intro add a summary of the nature of neural
currents, the types of neural circuits, and the distinction between first order
and higher order systems. Since B:CP is relatively hard to find (lacking the
money to order it from the Powers), this would avoid a lot of confusion. {I had
to get my copy via inter-library loan from a Cincinnati public library (I live
in Cleveland). It apparently wasn't available from the universities in the
area, of which there are many.}
The essence of PCT is the functioning of organisms (a normal focus of
psychology as opposed to sociology). It can be summarized thusly:
All action and sensation is produced via the interaction of neurons and
muscles. Further, the central nervous system is composed of neurons. It
therefore follows that any perception or behavior, regardless of complexity,
must be producible via the interaction of neurons through neural currents (in
the absence of some other mechanism - 'a ghost in the machine'). We can
demonstrate that negative feedback control is the mechanism used for first
order interaction with the environment, at the level of the spinal reflexes. We
can use models to show that it can work for higher order behavior. Thus, by
default, we presume it is the mechanism used at all levels.
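For anyone without B:CP at hand, here is a minimal sketch of the kind of single
negative-feedback loop meant here. The gains, slowing factor, and disturbance
are invented for illustration; only the structure (input, comparator, output,
environment) is the point.

# A minimal sketch of a single negative-feedback control unit.
# Names and constants are illustrative, not taken from B:CP.

def run_loop(reference=10.0, gain=50.0, slowing=0.01, steps=200):
    """One control loop: the output opposes the disturbance so that the
    perceived value is kept near the reference."""
    output = 0.0
    disturbance = 3.0                                # constant external disturbance
    for _ in range(steps):
        controlled_quantity = output + disturbance   # environment combines the two
        perception = controlled_quantity             # idealized input function
        error = reference - perception               # comparator
        output += slowing * (gain * error - output)  # leaky-integrator output function
    return perception

print("final perception:", run_loop())               # settles near the reference of 10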
When I talk about a system or control system I am usually looking at more than
one entity. A biological analogy to a process control system would be a
supervisor monitoring two operators, who are each operating a set of tools
involved in a manufacturing process. This is similar to HPCT, except that
neural currents are not the _only_ mechanism for interaction of the 'nodes'. I
don't think that this really changes anything, except by adding 1st level input
and output function errors and adding the possibility of non-pulse-coded
signals.
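As a rough sketch of that supervisor/operator picture (all names, gains, and
disturbance values are mine, purely for illustration): the higher-level node
acts only by adjusting the reference signals of two lower-level nodes, which in
turn act on the environment.

class Node:
    """Generic control node: leaky-integrator output driven by error."""
    def __init__(self, gain=30.0, slowing=0.05):
        self.gain, self.slowing, self.output = gain, slowing, 0.0

    def step(self, reference, perception):
        error = reference - perception
        self.output += self.slowing * (self.gain * error - self.output)
        return self.output

supervisor = Node(slowing=0.01)        # higher level runs slower than the level below
operators = [Node(), Node()]
disturbances = [2.0, -1.0]             # arbitrary disturbances on the two 'tools'
outputs = [0.0, 0.0]

for _ in range(600):
    # each operator senses its own controlled quantity
    quantities = [o + d for o, d in zip(outputs, disturbances)]
    # the supervisor perceives some function of the lower perceptions (here, their sum)
    top_output = supervisor.step(reference=12.0, perception=sum(quantities))
    # the supervisor's output is distributed as reference signals to the operators
    references = [top_output / 2.0, top_output / 2.0]
    outputs = [op.step(r, q) for op, r, q in zip(operators, references, quantities)]

# the sum of the lower quantities is driven close to the reference of 12
# (finite loop gain leaves a small residual error)
print("sum of lower quantities:", round(sum(quantities), 2))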
Please bear with me on the following, as I am working from a 20-year-old book.
There may have been elaborations in the meantime, but I presume there would be
a '2nd edition' if there were major changes.
An observation: from an information standpoint, error signals don't
theoretically have to carry only positive values. You can cheat by 'biasing'
the signal. If the signal may vary from 0-30 ppm, I can set the comparator and
output function so that 15 = no error (a logical 0). I could then set an 'upper
threshold' at 25 and a lower threshold at 5. The output function could use this
information to 'select' the proper action or magnitude of action. This could
allow a stepped sawtooth output function rather than a continuous one. I'm not
saying that this _ever_ happens in biology, or that it can't be implemented
with a network of simple nodes, just that it would work. This might be useful
for designing automata using PCT.
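A sketch of the arithmetic, using the numbers above (0-30 range, 15 = no error,
thresholds at 25 and 5); the action labels are placeholders, not a claim about
what a real output function does.

# Biased error signal with threshold-selected, stepped output actions.
BIAS, UPPER, LOWER = 15, 25, 5

def biased_comparator(reference, perception):
    """Return a biased error signal confined to the 0-30 range (15 = no error)."""
    return max(0, min(30, BIAS + (reference - perception)))

def stepped_output(biased_error):
    """Output function that selects an action by thresholding the biased error."""
    if biased_error >= UPPER:
        return "large corrective action (+)"
    if biased_error <= LOWER:
        return "large corrective action (-)"
    if biased_error != BIAS:
        return "small corrective action"
    return "no action"

for perception in (15, 12, 2, 28):
    e = biased_comparator(reference=15, perception=perception)
    print(perception, "->", e, "->", stepped_output(e))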
Question: why is it illegal to pass an error signal (directly or through a
'repeater' output function) to another node as a 'sensation'? While the analogy
may not bear close examination, pain might be something of this type. The
output is just passed up to tell a higher level controller that something is
wrong, and how badly. It can be used to indicate that 'control' is not working
and another strategy must be applied. It is up to the higher level to set
reference levels elsewhere in the network so that the error signal is
mitigated. It's not very useful at the lowest couple of orders, but it might be
useful at the 'cognitive' levels of control. This also might correspond to
'alerting'. It is just another input signal, but one generated by a comparator
rather than by a lower order input function. Again, I don't assert that it
actually exists in nervous systems, or that it is necessary. OTOH, in
engineering we sometimes find that a more complex structure is more efficient
than the equivalent constructed from simple structures. If nothing else, you
have propagation and processing lags. Biology has a limitation in that it
developed by 'growing like Topsy'. New levels and nodes were added to existing
ones that worked.
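A toy sketch of the error-as-sensation idea above (the class, thresholds, and
disturbance values are all invented): the lower node's error magnitude is
relayed upward as just another input, and the higher level treats a large
relayed error as an alert that control below is failing.

class LowerNode:
    """Ordinary control node that also relays a copy of its error upward."""
    def __init__(self, gain=10.0, slowing=0.05):
        self.gain, self.slowing, self.output = gain, slowing, 0.0

    def step(self, reference, perception):
        error = reference - perception                 # comparator
        self.output += self.slowing * (self.gain * error - self.output)
        return self.output, abs(error)                 # output plus the relayed error

def higher_level(relayed_error, threshold=2.0):
    """Treats the relayed error magnitude as a perception of 'control failing'."""
    return "change strategy / set other references" if relayed_error > threshold else "carry on"

node, output = LowerNode(), 0.0
for disturbance in (0.0, 0.0, 8.0, 8.0, 8.0):          # a large disturbance arrives mid-run
    quantity = output + disturbance
    output, relayed = node.step(reference=1.0, perception=quantity)
    print(f"disturbance={disturbance:4.1f}  relayed error={relayed:5.2f}  "
          f"higher level: {higher_level(relayed)}")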
As your thinking has evolved over 20 years, do you have any problem with having
'neural logic' as a part of input, output, or comparator functions? Is there
any theoretical problem with a single node (or subsystem) being at different
levels of the hierarchy with respect to other nodes or subsystems {higher
apparently means 'sets another's reference level'} at the same time? My mental
model can envision a network rather than a true hierarchy.
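To illustrate the question (node names and links are made up): if 'higher' just
means 'sets another's reference level', the relation can form a network in
which no single level number need fit a given node.

# Each node maps to the nodes whose reference levels it sets.
sets_reference_of = {
    "A": ["B", "D"],   # A sets references for B and also directly for D
    "B": ["D"],        # B sets a reference for D as well
    "D": [],
}

def depths(node, graph=sets_reference_of, top="A"):
    """All path lengths from the top node down to `node`."""
    if node == top:
        return {0}
    return {d + 1 for parent, children in graph.items()
            if node in children for d in depths(parent, graph, top)}

print(depths("D"))   # {1, 2}: D is one level and two levels below A at the same time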
On another minor matter, I am not sure that nodes interact only through neural
currents in biological systems. Some output functions result in the release of
hormones, neurochemicals, or substances like adrenaline (blanking on the name).
These are certainly sensed by other nodes, and not necessarily hierarchically
(i.e. a 4th level node affecting a 4th or other level node). I guess this could
be viewed as an 'environmental' interaction of physical laws, but it seems to
me more like a 2nd or higher order interaction. I'm not sure it really matters
whether a signal is transmitted in terms of neural current frequency or in
terms of a chemical concentration which must be translated via an input or
reference signal transducer function. Again, it may be just a question of where
you draw the system boundary.
Re Tom Bourbon [940719.1202]
I don't think creating a taxonomy of types of signals or types of control nodes
is silly in and of itself, any more than distinguishing between orders of
control systems or sections of the brain. Yes, it is all 'just control' or
'just neural currents', but the distinctions are sometimes useful. We wouldn't
get very far in medicine or physiology if we fixated on the idea that 'cells
are just cells' and ignored their differentiations and groupings (e.g. organs).
We may be able to define (and have defined) standard structures or patterns
used for various 'types' of perception and control. The question is which
groupings or distinctions make sense.
Enough digital diarrhea for today
Paul