Using the neural network

[From Shannon Williams (96011.02:30)]

[Bill Powers (960113.0530 MST)]

If you subtract a delayed version of an input signal from its current
value, the resulting output signal represents the rate of change of the
input signal.
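The delayed-difference idea can be sketched in a few lines of Python. The ramp signal and the delay length below are illustrative choices, not from the original post:

```python
# Subtracting a delayed copy of a signal from its current value, divided by
# the elapsed time, approximates the signal's rate of change.

def rate_of_change(signal, delay, dt=1.0):
    """Approximate d(signal)/dt at each step by (current - delayed) / elapsed time."""
    out = []
    for t in range(delay, len(signal)):
        out.append((signal[t] - signal[t - delay]) / (delay * dt))
    return out

ramp = [2 * t for t in range(10)]        # a signal rising 2 units per step
print(rate_of_change(ramp, delay=3))     # every entry is 2.0
```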

You are thinking: output = (input2-input1)/time

I am thinking: input1 = reference
               input2 = input

     where: [input] ---> [neural network] ---> [output]
                               /|\                 |
                                |                 \|/
                          [adjustment] <---------- C
                                                  /|\
                                                   |
                                                   R

               The output is LEARNED by making it equal the reference
               for whatever is input.
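One way to read the diagram: the comparator C subtracts the network's output from the reference R, and that error drives the adjustment of the network until output equals reference for the given input. A minimal delta-rule sketch of that loop, assuming a single linear unit and a fixed learning rate (both illustrative assumptions):

```python
# The comparator C computes error = reference - output, and that error
# adjusts the weights (the [adjustment] box) until output == reference
# for this input pattern.

inputs = [1.0, 0.0, 1.0]        # one input pattern
reference = 0.8                 # R: what the output should become
weights = [0.0, 0.0, 0.0]
rate = 0.1

for _ in range(200):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = reference - output                  # C's job
    for i, x in enumerate(inputs):
        weights[i] += rate * error * x          # the [adjustment] box

output = sum(w * x for w, x in zip(weights, inputs))
print(round(output, 3))  # converges toward 0.8
```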

Your alphabet-learning system is pretty sketchy. Have you realized
that your scheme won't work if the alphabet isn't presented in
the normal sequence?

The neural network will learn to predict only when prediction is viable.
If you teach it that A follows B, then that is what it learns. If you do
not give it any consistent input, then it does not learn anything. It only
recognizes the order/predictability of its universe.

You must have some idea of what you mean by all this, but so far you
haven't got it across.

Don't blame it all on me.

How can an output be generated to avoid situations that are unpleasant?

This question should not be causing problems for you. This is a question
you must ask yourself even in your current version of PCT. The question
for you is: How do we learn which outputs we need to generate to control
a perception?

In answer to your question: We try an output. If it helps, we try it
again. If it doesn't help, we try something else.
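That "try an output; keep it if it helps, change it if it doesn't" loop can be sketched directly. The controlled quantity, reference value, and step sizes below are illustrative assumptions:

```python
# Trial-and-error output selection: repeat a change that reduces error,
# and pick a new random change when it stops helping.
import random

random.seed(1)
target = 10.0          # reference for the controlled perception
output = 0.0           # current output magnitude
step = 1.0             # the change currently being tried

def error(out):
    return abs(target - out)

for _ in range(200):
    candidate = output + step
    if error(candidate) < error(output):    # it helped: try it again
        output = candidate
    else:                                   # it didn't: try something else
        step = random.uniform(-1.0, 1.0)

print(output)  # reaches 10.0
```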

Is there any one output that can do that?

No.

What defines a situation, to the behaving system, as unpleasant?

Intrinsic errors:

1) We don't want to feel hungry.
2) We want to be able to predict.
3) We don't want to be cold.
4) etc.

···

-------------------------------------------------------------------------

One thing to remember is that the input to the neural network is a pattern.
This means that you can map a behavior to a pattern. If that pattern
later fails to remove an error when the behavior is output, then you can
alter your network to generate another output. BUT- the effect of what you
are doing is this:

at first: neural input = 0X11XX01 ==> neural output = 11111111

after adjustment: neural input = 00110X01 ==> neural output = 11111111
                               = 01110X01 ==> neural output = 11101110
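The before/after mapping above can be sketched as wildcard-pattern rules, where 'X' marks a don't-care bit. The rule table and matcher below are an illustrative reading of the example, not Shannon's actual mechanism:

```python
# A coarse pattern maps to one output; when that output later fails to
# remove error, the coarse rule is replaced by more specific patterns
# that map to different outputs.

def matches(pattern, bits):
    """True if every pattern position is 'X' or equals the input bit."""
    return all(p in ('X', b) for p, b in zip(pattern, bits))

# at first: one coarse rule
rules = [("0X11XX01", "11111111")]

# after adjustment: two more specific rules replace it
rules = [("00110X01", "11111111"),
         ("01110X01", "11101110")]

def respond(bits):
    for pattern, output in rules:
        if matches(pattern, bits):
            return output
    return None

print(respond("00110101"))  # -> 11111111
print(respond("01110001"))  # -> 11101110
```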

-Shannon

[Bill Leach 960115.22:06 U.S. Eastern Time Zone]

[Shannon Williams (96011.02:30)]

[Bill Powers (960113.0530 MST)]

... for you is: How do we learn which outputs we need to generate to
control a perception?

In answer to your question: We try an output. If it helps, we try it
again. If it doesn't help, we try something else.

You are describing the reorganization system that Bill P. talked about in
1975 (B:CP).

Bill P.

What defines a situation, to the behaving system, as unpleasant?

Intrinsic errors:

1) We don't want to feel hungry.
2) We want to be able to predict.
3) We don't want to be cold.
4) etc.

In PCT we don't currently attempt to define the intrinsic variables in
specific terms. First, it is unlikely that the intrinsic variables can
remain either conflicted or even in a state of high error.

Second, it is quite likely that no one has yet thought of any variable
that actually might be "intrinsic". Virtually all postulated intrinsics
have been disgustingly easy to "shoot down" as candidates.

If, for example, "We don't want to feel hungry." is actually an intrinsic
variable, then if a person felt hungry they would kill if necessary to
eat. The fact is that there are numerous examples where a person has
chosen to starve to death rather than kill someone else (you don't really
need many examples, probably only one is sufficient).

-bill