Martin's D of F paper

From Bill Cunningham 920701.2020

(Martin Taylor 920701.0240)

The net sure has been silent today. Everyone must be digesting the two
long and very thoughtful posts from Bill P and Martin.

I find Martin's degrees of freedom argument a very powerful answer to my
doubts and fears. The paper is of fundamental importance for my application.

The similarity detection/difference discussion is right on target. My first
reaction was that the Kanerva algorithm was a similarity detector, but John Gabriel
reminded me that it can work either way. Does that help with the notion of
a switch hitting ECS?

Previous net discussion on "subjective probability" had bothered
me by its extension into type 1 and 2 decision errors, particularly
the consequence of false alarms wherein a control system would lock onto the
wrong percept and not let go. A switched control system with an alert
mechanism would remove the false alarm problem as a deterrent to survival in
nature, allowing concurrent offense and defense. I guess track-while-scan
radars sorta fit this description.

Martin doesn't state so explicitly, but he has presented a case for a
control system for the control system (regardless of how implemented).
I have argued for a long time that investment in the control of information
flow will pay larger dividends compared to generating, passing or processing
more data. The degrees of freedom argument puts some meat on those bones.

A final thought on hyperlexia. That discussion immediately conjured up
my exposure to high speed Morse code operation. As a teenager, I participated
in a high speed plain text practice net. Somewhere between 30 and 40 words
a minute, plain text ceases to be dits and dahs or even letters. Plain text
starts sounding like spoken language, to the point that typing or writing
becomes wasteful relative to notetaking. I got so I could carry on a normal
conversation at 40 wpm, limited more by the keyers of the day. However,
I couldn't do squat with 5-letter code groups.
No redundancy for error correction, just the damned letters. However,
real Morse operators hear only the letters. I've seen them continue to
type for 4-5 minutes after cessation of 60 wpm code groups--with zero
errors. One of the folks to whom I try to sell HPCT is trying to find out
what makes good and bad operators. Maybe there's a clue here.

Martin's paper does leave me uncomfortable with the issue of how one ECS
wrests control from another. Not the idea, but the mechanism, stability
and sharing issues. Any ideas?

Dunno about the rest of you, but I'm still digesting.

Bill C.

[Martin Taylor 920701 10:20]
(Bill Cunningham 920701.2020)

Kind words. Thank you.

Martin's paper does leave me uncomfortable with the issue of how one ECS
wrests control from another. Not the idea, but the mechanism, stability
and sharing issues. Any ideas?

I was basing my words on the ideas in what I actually presented at the Madrid
AGARD meeting, and described in a posting around July 7. The idea is that
if two ECSs conflict, the higher-gain one will have a smaller error. If an
ECS has been controlling reasonably well, but suddenly finds that it can
no longer do so because its corrective actions are resisted, it has sufficient
information to determine that some other controller is trying to control the
same CEV (Complex Environmental Variable). It may be a characteristic of
some (all?) ECSs that they relinquish control by reducing their gain under
such circumstances. An alternate response would be to increase gain, and
try to beat the interloper.
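The two strategies can be sketched in a toy simulation. This is a minimal illustration of my own, not Martin's implementation: two proportional controllers (standing in for ECSs) each push a single shared variable (the CEV) toward their own reference, and at equilibrium the higher-gain one ends up with the smaller error. All names (`simulate`, `g1`, `r1`, etc.) are assumptions for the sketch.

```python
# Two conflicting "ECSs" acting on one shared variable (the CEV).
# Each is a simple proportional controller: output = gain * error.
# Their outputs sum on the CEV, so the tug-of-war settles at
#   v* = (g1*r1 + g2*r2) / (g1 + g2),
# which lies closer to the reference of the higher-gain system.

def simulate(g1, g2, r1, r2, steps=2000, dt=0.01):
    """Return the final errors (e1, e2) of two conflicting controllers."""
    v = 0.0
    for _ in range(steps):
        e1 = r1 - v                      # error seen by ECS 1
        e2 = r2 - v                      # error seen by ECS 2
        v += dt * (g1 * e1 + g2 * e2)    # both outputs act on the CEV
    return r1 - v, r2 - v

# Higher-gain ECS 1 keeps the smaller error:
e1, e2 = simulate(g1=10.0, g2=1.0, r1=1.0, r2=-1.0)
print(abs(e1) < abs(e2))   # True

# "Relinquishing" = ECS 2 cutting its gain further, which shrinks
# ECS 1's error; "fighting back" = raising g2, which drags the
# equilibrium toward ECS 2's reference instead.
```

The detection side of Martin's point also falls out of this picture: a controller that had been keeping its error near zero, and now sees a large persistent error despite full corrective output, has the information it needs to infer a competitor on the same CEV.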

Both strategies can be observed in conversation. If one person interrupts
another, the first may relinquish control or may try to override the
interruption. Usually, one or the other backs down. So it may be with
ECSs suddenly subject to competition, and in the case of the aircraft, I
suggested that such a mechanism (relinquishing, that is, not fighting back)
be deliberately built into automated subsystems.

Martin