[Hans Blom, 961007]
(Shannon Williams (961006.1130 CST))
Shannon, this line of yours triggered this response:
    BTW- this would mean that learning only occurs when you make a
    mistake.
In the PCT approach, this seems to be true. Learning in PCT is
recognized as having occurred when a change in one or more parameter
values (of an input or output function) is seen. As long as no
mistakes are made, there is obviously no need to modify any parameter
values; only if an error occurs will a change be required.
This is different from the learning paradigm where not only the
parameter value itself, but also its accuracy (standard deviation) is
computed. In this case, learning is usually identified with a
decrease in the uncertainty of the parameter(s). And, if there is no
forgetting or resetting, this uncertainty decreases whether the
observation is a "mistake" (a negative example) or not (a positive
example). In a positive example the uncertainty decreases even though
the parameter value may remain exactly the same. An example of a
converging sequence where the parameter value remains the same:
100+/-50, 100+/-30, 100+/-10, 100+/-5, 100+/-1, ... , 100+/-0.00001
In a negative example (an error occurs), the parameter value will
change, yet the uncertainty will also decrease. An example of a
converging sequence where the parameter value changes:
100+/-50, 80+/-30, 60+/-10, 70+/-5, 75+/-1, ... , 76.5+/-0.00001
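A minimal sketch of such an update, in Python (the observation noise
of +/-30 and the observation values are chosen only for
illustration):

# Recursive Bayesian estimation of a single parameter with Gaussian
# uncertainty.  The noise level and the observations are invented for
# the example.

def bayes_update(mean, var, obs, obs_var):
    """Fuse a Gaussian belief (mean, var) with one noisy observation."""
    k = var / (var + obs_var)           # weight given to the observation
    new_mean = mean + k * (obs - mean)  # value shifts only if obs disagrees
    new_var = (1.0 - k) * var           # uncertainty shrinks in either case
    return new_mean, new_var

mean, var = 100.0, 50.0 ** 2            # initial belief: 100 +/- 50
obs_var = 30.0 ** 2                     # assumed measurement noise

# Positive examples: observations agree, the value stays put, +/- shrinks.
for obs in (100.0, 100.0, 100.0):
    mean, var = bayes_update(mean, var, obs, obs_var)
    print("positive: %6.1f +/- %5.1f" % (mean, var ** 0.5))

# Negative examples: observations disagree, the value moves, +/- still shrinks.
for obs in (80.0, 60.0, 70.0):
    mean, var = bayes_update(mean, var, obs, obs_var)
    print("negative: %6.1f +/- %5.1f" % (mean, var ** 0.5))

In a pure PCT-style scheme only "mean" would be adjusted, and only
when there is an error; tracking "var" as well is what makes the
positive examples informative too.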
Convergence to a final parameter value has been reached when the
uncertainty has become so small that the parameter's value is locked
at a certain fixed value and hence cannot change anymore. If one
computes only the parameter's value and not its accuracy, as in the
PCT scheme of things, one cannot say anything about convergence.
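With the uncertainty available, a convergence test becomes a
one-liner (the threshold of 0.01 is an arbitrary choice for this
sketch):

# The parameter counts as "locked" once its standard deviation has
# fallen below some small threshold.
converged = var ** 0.5 < 0.01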
So what learning "is" seems to depend on the way in which one
represents "knowledge", including or excluding uncertainty. To me it
makes a lot of sense to be able to tell, of a belief (a perceptual
variable or "construction"), how preliminary it is or how fixed and
immutable it has become.
Greetings,
Hans