Reorganization

[From Bruce Gregory (2010.01.28.2220 UT)]

http://www.newscientist.com/article/dn18403-panic-walking-gets-robot-out-of-sticky-situations.html

Hi all,
This makes me think of
http://www.archive.org/stream/designforbrainor00ashb#page/80/mode/2up
page 80 and further.

Arthur

-----Original message-----

···

From: Control Systems Group Network (CSGnet)
[mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On behalf of Bruce Gregory
Sent: Thursday, 28 January 2010 23:21
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Reorganization

[From Bruce Gregory (2010.01.28.2220 UT)]

http://www.newscientist.com/article/dn18403-panic-walking-gets-robot-out-of-sticky-situations.html

[From Bill Powers (2010.01.29.0839 MST)]

Hi all,
This makes me think of
http://www.archive.org/stream/designforbrainor00ashb#page/80/mode/2up
page 80 and further.

That's where my reorganization theory came from: Ashby's homeostat.

In 1960, when this second edition of Design for a Brain was published, I was deep into a new career, in graduate school at Northwestern and working part-time at the Dearborn Observatory to help support my family. I completely missed this edition! It looks totally different from what I can see in the reference above. Ashby seems to have focused entirely on what he called "adaptation," which is a mixture of what we call reorganization and control. I have to get hold of it to see what else changed.

Thanks!

Best,

Bill P.

···

At 10:09 AM 1/29/2010 +0100, Arthur Dijkstra wrote:

[From Rick Marken (970131.0920)]

Hans Blom (970131b) --

Rick, you would have solved my problem if you could describe
explicitly, in an algorithm or in formulas, just _how_ you fiddle
those knobs, depending on _what_ exactly. The recipe "just do it
until it works" is far too fuzzy to be a basis for an implementation.

Actually, I did describe it; and it is implemented in Bill's
artificial cerebellum.

I fiddle with the "knobs" _randomly_; the fiddling depends on (and
influences) my perception of the "quality of control" exhibited by
the control system. My perception of "quality of control" is, thus, a
controlled variable (as it is for the PCT reorganizing system). Exactly
what variable is controlled when I control the "quality of control"
exhibited by a control system I don't know. I probably control different
perceptions of quality of control all the time -- average size of the
error signal over time, absolute size of the error signal, etc.

In order to control my perceptions of "quality of control" I _randomly_
change the settings of whatever "knobs" I have access to. I keep
changing these settings until my perception of "quality of control"
starts moving toward my reference for that perception; I stop changing
settings as long as quality of control is improving or remains near my
reference; I start changing settings again when quality of control
deteriorates. So I vary my _rate of random fiddling_ based on the
difference between actual and desired "quality of control" exhibited
by the control system. This process gets the "quality of control" I
observe as close to my reference as I can get it, though not necessarily
as close to my reference as it could possibly get. (By the way, I
typically fiddle with the "knobs" one at a time so my process is not
completely random).
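The procedure described above can be sketched in a few lines (a minimal interpretation of the post; the function names, step sizes, and the undo-on-deterioration rule are assumptions, not code from Bill's artificial cerebellum):

```python
import random

def fiddle(knobs, quality, reference, steps=20000, rng=None):
    """Random 'knob fiddling' to control a perceived quality of control.

    knobs:     list of current parameter settings (e.g. gain, slowing).
    quality:   function mapping settings to a perceived quality of control.
    reference: desired quality of control.
    """
    rng = rng or random.Random()
    prev_q = quality(knobs)
    for _ in range(steps):
        error = reference - prev_q
        if error <= 0:                 # quality at or above reference:
            break                      # stop changing settings
        # Fiddle one knob at a time; the size of the random change
        # scales with the remaining error (the rate of fiddling depends
        # on the difference between actual and desired quality).
        i = rng.randrange(len(knobs))
        old = knobs[i]
        knobs[i] += rng.gauss(0.0, 0.2 * error)
        q = quality(knobs)
        if q < prev_q:                 # quality deteriorated:
            knobs[i] = old             # undo and fiddle again
        else:
            prev_q = q                 # improving: keep the settings
    return knobs
```

With a quality measure such as the negative mean-squared error of a tracking run, this converges on settings about as good as the random process can find, though not necessarily the best possible.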

The "knobs" to which I have access typically include only output and
input amplification, output slowing and rate of leakage (for an integral
controller). I also have access to the "knobs" that change the form of
the output function (proportional or integral), the form of the input
function and the connections between systems in a hierarchy. It is
unlikely that you would ever want to change the input function in your
engineering applications since you are designing systems to control
specified states of the world that are defined by those input functions
(that restriction doesn't apply to living systems, of course). But you
could probably use this "reorganization" algorithm to get your control
systems working about as well as they possibly can given the constraints
of their design and the environments in which they will have to
function. Of course, systematic "knob fiddling" based on control
theory couldn't hurt either.

Best

Rick

[Martin Taylor 960125 16:45]

Bruce Abbott (960125.1600 EST) to Bill Powers (960124.1000 MST)

You say that reorganization involves error in _intrinsic_ variables; this is
the error found at the highest level of the hierarchy, correct?

Not according to Bill P. That's my own particular proposal, which I think
he still doesn't like. But then, Bill is always very careful to distinguish
what is speculative from what is probably true from what is demonstrable,
and any specific reorganization structure is clearly speculative.

In my proposal, the only variable that actually induces reorganization
(i.e. what the _separate_ reorganization system is controlling for) is
error within the perceptual hierarchy. The variables we call "intrinsic"
such as blood sugar or CO2 level (if they are ones) are, in my proposal,
only incidentally associated with the reorganizing system. Like any other
controlled perception, if their control system experiences large OR persistent
error, they "shout to the reorganizing system" "Hey, reorganize me." More
properly, the reorganizing system has as its perceptual signal a function
a*e^2 + b*(de/dt)^2, where e is the error in some part of the perceptual
control hierarchy including but not limited to the immediate control of
intrinsic variables, and a and b are positive constants.

In this proposal, large perceptual error or sudden growth of error anywhere
in the hierarchy leads to increased speed of reorganization in that part of
the hierarchy, and when good quality control is achieved, reorganization is
quickly shut down.
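The proposed perceptual signal, and the way it shuts reorganization down as control improves, can be written out directly (the constants a and b and the discrete stand-in for de/dt are illustrative choices):

```python
def reorg_signal(e, de_dt, a=1.0, b=1.0):
    # a*e^2 + b*(de/dt)^2: a large error OR a rapidly growing error
    # both raise the reorganizing system's perceptual signal.
    return a * e * e + b * de_dt * de_dt

# As control is achieved and error decays, the signal (and with it
# the reorganization rate) drops off quickly:
errors = [0.5 ** t for t in range(6)]        # decaying error trace
signal = [reorg_signal(e1, e1 - e0)          # crude de/dt with dt = 1
          for e0, e1 in zip(errors, errors[1:])]
```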

Bill's system has _both_ the "intrinsic variables" and error in perceptual
control as controlled perceptions in a reorganization control system separate
from the main hierarchy. In his system, if an intrinsic variable deviates
from its reference, reorganization tends to happen, whereas in mine, if an
intrinsic variable deviates from its reference, an appropriate output happens
that affects the reference signal values of lower level ECUs. But in both
systems, a rapid drop of error in some part of the perceptual control
hierarchy will lead to an equally rapid drop in the reorganization rate
(at least in that area of the hierarchy).

There really isn't a time-scale problem in either version of the reorganizing
system, so far as I can see, that would justify your comment:

If the rats are first given the
opportunity to associate the sound of the feeder with the appearance of a
food pellet in the cup, acquisition of lever-pressing can be quite rapid --
a matter of two or three pairings of lever-press and food delivery. This
time-scale seems too short for the reorganizing system, in that intrinsic
variables would hardly have been affected by then. I'm not saying that the
proposed reorganization system is invalidated by this observation -- it may
be the only available method under some conditions -- but it certainly
suggests that a faster, more efficient mechanism may be at work in this case.

One-trial learning is quite compatible with reorganization in either Bill's
structure or mine. If a "pairing" happens between a reduction in error of some
control system and a particular action, the action must at that moment have
been linked as part of the output mechanism of the control system that
experienced the drop in error. Standard reorganization theory (of either
kind) says that whatever links existed at that moment will be likely to
remain unchanged (except possibly by later reorganization consequent on
some other perceptual error).

Gradual learning is also compatible with standard reorganization theory,
but in this case the reorganization method is not the linking of new action
outputs, but the topologically continuous (e.g., Hebbian) changing of
linkage weights. If you remember my recent reposting of the "12 kinds of
reorganization" message, both topologically continuous and topologically
discrete changes are possible, and I suspect that both happen.
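The two kinds of change can be caricatured as follows (purely illustrative update rules, not taken from the "12 kinds of reorganization" message):

```python
import random

def continuous_reorg(weights, pre, post, eta=0.01):
    # Topologically continuous change: the wiring is untouched and
    # only connection strengths drift (a Hebbian-style update).
    return [w + eta * x * post for w, x in zip(weights, pre)]

def discrete_reorg(weights, p_rewire=0.1, rng=random):
    # Topologically discrete change: occasionally an existing link is
    # deleted outright, or a new one created where none existed.
    out = []
    for w in weights:
        if rng.random() < p_rewire:
            out.append(0.0 if w != 0.0 else rng.uniform(-1.0, 1.0))
        else:
            out.append(w)
    return out
```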

I think we need more analytic discussion of reorganization. Real experiments
would be better, but even thought experiments can show up problems and
possibilities.

Martin

[Hans Blom, 970204b]

(Rick Marken (970131.0920))

Rick, you would have solved my problem if you could describe
explicitly, in an algorithm or in formulas, just _how_ you fiddle
those knobs, depending on _what_ exactly. The recipe "just do it
until it works" is far too fuzzy to be a basis for an
implementation.

I fiddle with the "knobs" _randomly_; the fiddling depends on (and
influences) my perception of the "quality of control" exhibited by
the control system.

I would hardly call that "randomly": your fiddling is goal-directed
-- its goal is maximizing the "quality of control". At another level,
however, you are correct: small random changes allow one to discover
a gradient.

My perception of "quality of control" is, thus, a controlled
variable (as it is for the PCT reorganizing system). Exactly what
variable is controlled when I control the "quality of control"
exhibited by a control system I don't know.

Great! Reminds me of a discussion we had, long ago, about "the
control of Q", where I discussed how the engineering design process
starts with an explicit statement of what is to be understood as the
"quality of control" for a particular control task, from which
measure the actual controller was to be derived. So we find a common
link. Exciting!

This reminds me also of another, more recent discussion, in which I
equated this notion "quality of control" with a "single top level
goal" and where I posited that if one could discover -- by testing and
observation -- what one means by this notion of "quality of control",
one also knows _everything_ (all lower level goals) that one controls
for. Kind of a comprehensive Test for All the Controlled Variables
simultaneously. Very difficult, as you note, but very much imaginable
-- once you have reached the maximum "quality of control", all lower
level variables have their correct values as well.

I probably control different perceptions of quality of control all
the time -- average size of the error signal over time, absolute
size of the error signal, etc.

Right. There are two things that you control for simultaneously: a
single maximum "quality of control", but also a lot of lower level
perceptions. Different hierarchical levels. Except for your
"randomly" (you describe a hill-climbing procedure, where your small
random setting changes yield an estimate of the gradient towards the
top of the hill, and where you take that step if it takes you uphill
but not if it leads downward), you describe the process accurately:

In order to control my perceptions of "quality of control" I
_randomly_ change the settings of whatever "knobs" I have access to.
I keep changing these settings until my perception of "quality of
control" starts moving toward my reference for that perception; I
stop changing settings as long as quality of control is improving or
remains near my reference; I start changing settings again when
quality of control deteriorates. So I vary my _rate of random
fiddling_ based on the difference between actual and desired
"quality of control" exhibited by the control system. This process
gets the "quality of control" I observe as close to my reference as
I can get it, though not necessarily as close to my reference as it
could possibly get. (By the way, I typically fiddle with the "knobs"
one at a time so my process is not completely random).

Thanks a lot, Rick. We seem to be in close agreement about the
"facts", if not exactly on their interpretation...
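Hans's hill-climbing reading of the procedure (small random probes estimate the gradient of the quality measure, and a step is taken only when it leads uphill) can be sketched like this (the gain and probe sizes are arbitrary choices, not anything from the exchange):

```python
import random

def hill_climb(knobs, quality, steps=500, probe=1e-3, gain=0.1):
    """Estimate the gradient of 'quality of control' from small random
    probes of each knob; take the step only if it goes uphill."""
    for _ in range(steps):
        q0 = quality(knobs)
        grad = []
        for i in range(len(knobs)):
            d = probe * random.choice((-1.0, 1.0))
            probed = list(knobs)
            probed[i] += d
            grad.append((quality(probed) - q0) / d)   # finite difference
        candidate = [k + gain * g for k, g in zip(knobs, grad)]
        if quality(candidate) > q0:                   # uphill: keep it
            knobs = candidate
    return knobs
```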

Greetings,

Hans

[From Bruce Nevin (2003.03.08 21:38 EST)]

Bill Powers (2003.03.06.1112 MST)--

Reorganization, in the minimal form I have picked for it, does not "correct
error" in the usual sense: it simply starts a process of changing
organization of the brain which randomly alters the interactions of the
baby with its environment.

Alternatively: it stops suppressing the branching and connecting activity that is characteristic of nerve cells in vitro. Either way, I wonder if error causes as a byproduct some change in the cellular environment around the neurons of the control loop whose error signal is not being held at a low level. A neurochemical change in the cellular environment could slowly spread from the affected cells so that more systems, those using collocated cells, would be involved in reorganization the longer error persists uncorrected.

From the cell's point of view, maybe the function of the loop is the control of the error signal (i.e. of its effects in the cellular environment).

Obviously loop gain is involved, the lower the gain the less error matters, and as I understand B:CP.278 the internal determinant of loop gain is the output function. One possibility is that as the cells in the output function of a loop within the hierarchy produce their maximum output continuously (unable to correct error) they produce some neurochemical byproduct in the cellular environment (or cause it to be produced), in the presence of which nerve cells start changing their connections. If so, the first cells to reorganize would be in the output function. Some possibly testable guesses.

This might be the basis for the "giving up" that so exercised us a while back.

         /Bruce Nevin

···

At 01:57 PM 3/6/2003, Bill Powers wrote:

[ From Marc Abrams (2003.03.09.0412) ]

[From Bruce Nevin (2003.03.08 21:38 EST)]

Alternatively: it stops suppressing the branching and connecting activity
that is characteristic of nerve cells in vitro. Either way, I wonder if
error causes as a byproduct some change in the cellular environment around
the neurons of the control loop whose error signal is not being held at a
low level. ...

Nice post. I have some of the same questions and feel the same way you do.

Any hint about how you might go about testing some of this?

Marc

[From Bill Powers (2003.03.09.1049 MST)]

Bruce Nevin (2003.03.08 21:38 EST)--
>Alternatively: it stops suppressing the branching and connecting activity
>that is characteristic of nerve cells in vitro.

Then you would have to say that intrinsic error _suppresses_ the output of
the reorganizing system, wouldn't you? Seems simpler to assume that the
branching and connecting is a sign of intrinsic error, caused by being _in
vitro_.

Either way, I wonder if
error causes as a byproduct some change in the cellular environment around
the neurons of the control loop whose error signal is not being held at a
low level.

This is a good idea for reorganization based on the existence of neural
error signals in the hierarchy. It could explain the specificity of that
kind of reorganization (why we don't reorganize systems that are working
well). It doesn't, however, address basic errors like hunger or pain, or in
general the case where that which gets reorganized (control systems in the
hierarchy) has no relation to the reason for reorganization (error in some
biochemical or autonomic control system not part of the behavioral hierarchy).

>From the cell's point of view, maybe the function of the loop is the
>control of the error signal (i.e. of its effects in the cellular
>environment).

That's worth thinking about. What's in it for the neuron?

Generally, I endorse all your suggestions about how the biochemistry of the
brain might influence reorganization based on error signals in the
hierarchy. Tell you what, why not send your suggestions to the neurological
labs of the CSG? :=/

Best,

Bill P.

From [ Marc Abrams (2003.03.08.0434) ]

Rick’s reply to my post got my juices flowing. I can’t go back to sleep. :-) Please forgive me if this comes off as a rant. It is not my intent :-).

First, Bill has provided me with a tracking task and a Vensim model to investigate the notion and concept of “Learning” ( Reorganization ). Anyone interested in this PC version of the tracking task and Vensim model ( Bill, does it work on a Mac? ), please contact me and I will be happy to provide it to you. ( Note: you need Vensim Pro in order to utilize the Vensim model provided by Bill. ) I’m sure if we could get a small group together, Vensim ( Bob Eberlein ) would be happy to provide us with a nice licensing deal for research.

On the hierarchy;

I believe we have goals. But I also believe that these goals are “driven” by our need to reduce or eliminate error with regard to those goals. It is more important to reduce the perceived errors/anxiety than to actually attain the goal. This view explains much human behavior ( actions ). Think about it. When we talk about “striving” for goals, I believe, we are talking about having to continually adjust for error/anxiety. ALL “goals” are subject to lower level actions (behavior) that try to reduce error with regard to that goal. The actual attainment of that goal is of no consequence to lower level control systems. They simply “want” to provide as much error-free perception as possible with regard to that higher level goal. Sometimes those perceptions come from imagination/memory and not from our sensed data.

I hope to show this through experimentation & modeling.

Any comments, suggestions, opinions?

Marc

Hi Marc

I really like what you're doing; it seems to have a lot of merit.

In relation to re-organisation I might have missed the conversation which
may have related to this - but my impression from past writing on
re-organization and sensed error was that it usually was sensed in the form
of emotion. Would you class anxiety as a sensed emotion, or is your idea
that anxiety comes before emotion? It may just be semantics but interested
in how you conceptualize it.

Cheers Rohan

···

At 04:53 AM 10/03/2003 -0500, you wrote:

From [ Marc Abrams (2003.03.08.0434) ]

Rick's reply to my post got my juices flowing. I can't go back to sleep.
:-) Please forgive me if this comes off as a rant. It is not my intent :-).

First, Bill has provided me with a tracking task and a Vensim model to
investigate the notion and concept of "Learning" ( Reorganization ).
Anyone interested in this PC ( Bill, does it work on a Mac? )version of
the tracking task and Vensim model please contact me and I will be happy
to provide it to you. ( note: you need Vensim pro in order to utilize the
Vensim model provided by Bill ) I'm sure if we could get a small group
together Vensim ( Bob Eberlein ) would be happy to provide us with a nice
licensing deal for research.

On the hierarchy;

I believe we have goals. But I also believe that these goals are "driven"
by our need to reduce or eliminate error with _regard to_ those goals. it
is more important to reduce the perceived errors/anxiety then to actually
attain the goal. This view explains much human behavior ( actions ) Think
about it. When we talk about "striving" for goals, I believe, we are
talking about having to continually adjust for error/anxiety. _ALL_
"goals" are subject to lower level actions (behavior) that try to reduce
error with _regard to_ that goal. The actual attainment of that goal is of
no consequence to lower level control systems. They simply "want" to
provide as much error free perceptions as possible with _regard to _ that
higher level goal. Sometimes those perceptions come from
imagination/memory and not from our sensed data.

I hope to show this through experimentation & modeling.

Any comments, suggestions, opinions?

Marc

from [ Marc Abrams (2003.03.09.1815) ]

Hi Rohan,

I really like what you're doing; it seems to have a lot of merit.

Thanks. I think it's really going to be a lot of fun exploring this. I am
looking forward to it.

In relation to re-organisation I might have missed the conversation which
may have related to this - but my impression from past writing on
re-organization and sensed error was that it usually was sensed in the

form

of emotion.

I think you might be right. It's my understanding ( Bill, please correct me
if I am wrong ) that we cannot perceive "error" directly. We perceive it
through what we call emotions ( i.e. a "feeling" of either elation or
dejection, or something in between ;-) ). I believe we _always_ have some
level of emotions at work, because there is _always_ _some_ error present
with _regard to_ any goal we may have.

Would you class anxiety as a sensed emotion,

Yes. At least that is my current feeling.

or is your idea
that anxiety comes before emotion? It may just be semantics but interested
in how you conceptualize it.

Hmmm. :-), Interesting question. How would it come before? What form would
it take? Are you possibly suggesting that "anxiety" or error is a
"pre-cursor" to emotion? Maybe. How could we test for that?

Marc

[From Bruce Nevin (2003.03.10 09:41 EST)]

Bill Powers (2003.03.08.1649 MST)--

Thomas E. Hancock --

Bill, you said:

"In the higher-level systems, variables change more slowly than at lower
levels, so we may see a given reference setting remaining nearly the same
for long periods, especially when no unexpected changes are taking place
in the environment."

[...]
It's true of machine systems and maybe has to be true in any hierarchical
system where higher subsystems use lower ones as part of their operation.
In that case all the delays of the lower systems would still be there, plus
any additional ones due to the added levels.

I think it does make some sense in experience, too. Religious systems are
probably at the highest, system-concept, level, and we can ask, how rapidly
do people change their religious reference signals?

Each system in the hierarchy receives its reference input from the error outputs of systems at the next higher level. Except at the highest level.

There has been some discussion of references at the highest level depending directly upon the state of intrinsic variables. However, error in intrinsic variables must be corrected fairly quickly if the organism is to survive.

I think rather that many of the controlled variables and their reference values at the highest levels are set by the convergent process of creating and maintaining cultural norms in the course of learning and controlling within a culture. There is some evidence for this in non-human animals also. This accords well with the conservatism and slow rate of change of references at the highest levels.

         /Bruce Nevin

···

At 06:52 PM 3/8/2003, Bill Powers wrote:

At 06:45 PM 3/8/2003, Tom Hancock wrote:

[From Bruce Nevin (2003.03.10 10:31 EST)]

Bill Powers (2003.03.09.1049 MST)--

Bruce Nevin (2003.03.08 21:38 EST)--

Alternatively: it stops suppressing the branching and connecting activity
that is characteristic of nerve cells in vitro.

Then you would have to say that intrinsic error suppresses the output of
the reorganizing system, wouldn’t you? Seems simpler to assume that the
branching and connecting is a sign of intrinsic error, caused by being
_in vitro_.

If this were a viable line of thought, it would not be intrinsic error
that suppressed reorganization, but something else that did so, and
intrinsic error that alleviated that suppression. A trivial and
immaterial distinction.
But I think I’ve hooked a red herring here and dragged it across the line
of discourse. In development of the embryo and in subsequent growth and
development, something indicates to a given cell of the developing
organism that it has reached some state that is desirable for the
integrity of the organism. The interest of the cell is of logical type A,
the interest of the organism of which it is a part is of logical type B.
Signals and other controlled variables of type B are not perceived at
level A as such, and vice versa, and this is necessarily so lest control
at one level conflict with control at the other. This signal, presumably
a reference level for some variable controlled by the cell, brings its
reorganizing process to a halt. It’s wrong to think of it as doing so by
suppressing a natural activity of the cell to branch and seek
connections, just as it is wrong to suppose that aligning a cursor with a
mark suppresses the activity of the hand manipulating the mouse.
What we want to tap here is the activity of the cells to
reorganize (their shape, their connectivity into their environment,
including connections to other cells) until they attain some internally
maintained reference. This reference corresponds to an externally
observable stability of their environment. When the control loop (at
logical typing level B) has output at maximum and error persists,
something happens to a variable that the cell perceives and is
controlling (at logical typing level A). As an outcome of evolution
(logical typing level C), this kind of cell reorganizes (A) until that
variable is restored to the preferred state, and as an outcome of
evolution (C) a consequence of its doing so is that its cellular
environment persists as part of a functioning, living organism (B), and
consequently persists as a stable source of nutrient and as a protection
against disruption of the cell’s integrity (A).

This means that the reorganizing is not done by some separate system with
sensors, comparators, input and output functions, all connected up to
control error, but rather by an evolutionary adaptation of the same
processes by which cells come to participate in multicellular organisms
in the first place.

Either way, I wonder if error causes as a byproduct some change in the
cellular environment around the neurons of the control loop whose error
signal is not being held at a low level.

This is a good idea for reorganization based on the existence of neural
error signals in the hierarchy. It could explain the specificity of that
kind of reorganization (why we don’t reorganize systems that are working
well). It doesn’t, however, address basic errors like hunger or pain, or
in general the case where that which gets reorganized (control systems in
the hierarchy) has no relation to the reason for reorganization (error in
some biochemical or autonomic control system not part of the behavioral
hierarchy).

Here we’re talking about special-purpose control systems acquired through
evolution (pain reflex) or learning (what to do about hunger). Setting
aside for a moment how they are developed, note that this hypothesis says
that error in a given system begins spreading some signal substance in
the environment of cells in the output function. Adjacent cells begin
doing likewise. This includes first of all those cells that are connected
directly to cells in the output function of that loop. In this way, news
of the need to reorganize that begins at a low level of the hierarchy
spreads first up the hierarchy of control leading to that level, and from
each higher-level system that is affected spreads down to its other
lower-level effectors.

For those special-purpose systems that we set aside, being learned
requires only some seed-point of connectivity to be innate: something
that connects hunger to activity of the mouth and gullet, and in
evolutionary terms these have been established for a very long time.

From the cell’s point of view, maybe the function of the loop is the
control of the error signal (i.e. of its effects in the cellular
environment).

That’s worth thinking about. What’s in it for the neuron?

The key question.

Tell you what, why not send your suggestions to the neurological labs of
the CSG? :=/

Consider it done.

    /Bruce Nevin

···

At 12:57 PM 3/9/2003, Bill Powers wrote:

[From Bruce Nevin (2003.03.11 13:10 EST)]

I just became aware of work on cell-to-cell communication among bacteria, beginning with the discovery of 'quorum sensing' by Bonnie Bassler, now at Princeton. Lots of chemical signals pass between unicellular critters, and there appears to be considerable social cooperation among them. The same mechanisms are thought to underlie the evolution of and the present mechanisms of embryology. Brief overview at http://www.macfound.org/programs/fel/2002fellows/bassler_bonnie.htm

  /Bruce Nevin

[From Mike Acree (2003.03.12 10:45 PST)]

Bruce Nevin (2003.03.11 13:10 EST)--

I just became aware of work on cell-to-cell communication among bacteria,
beginning with the discovery of 'quorum sensing' by Bonnie Bassler, now at
Princeton. Lots of chemical signals pass between unicellular critters, and
there appears to be considerable social cooperation among them. The same
mechanisms are thought to underlie the evolution of and the present
mechanisms of embryology. Brief overview at
http://www.macfound.org/programs/fel/2002fellows/bassler_bonnie.htm

Reading this reminded me of an article by Francisco Varela that I read several years ago, with the very Varelian title "The Semiotics of Cellular Communication in the Immune System." Looking for it just now on the web, I couldn't find it, though I did discover that Varela had died last May. Evidently never recovered from the hepatitis C he contracted in Costa Rica after fleeing Pinochet. Very sad.

[From Boss Man (2004.05.10)]

In a private post Martin gently (how else?) chided me for a now obvious weakness, an apparent
inability to resist the urge to jab at those who, for whatever reasons, lead with their chins. "Jim
Beardsley" has kindly offered me temporary respite at his camp in the mountains separating
(joining?) Pakistan and Afghanistan, where I plan to work on this flaw.

Perhaps someone who is more familiar than I with the CSGnet archives can point me to an
example where efforts at such higher-level reorganization have proved successful. In the
meantime, God bless.

Where is everybody? I have received just a few letters since early in June. I learn much from reading them and other PCT literature on the net, and soon I’ll start writing letters myself again. This time, only some words about reorganization.

Bill or you other.

In B:CP page 180 line 7 you write " This is what I mean by reorganization - not a change in the way existing components of a system are employed under control of recorded information, but a change in the properties or even the number of components- …"

I enjoyed reading “Nature” of 6 May 1999, where the Norwegian professor Per Andersen wrote about the results of the two German scientists Engert and Bonhoeffer. They had shown that when a dendrite from one cell directs a stream of impulses at another cell, a new receptor site grows out on the other cell. And this is done in 20 minutes…

I look at this as part of a visual proof that the number of components is increased when reorganization takes place. Just as you wrote, Bill.

Is Engert and Bonhoeffer’s experiment old news? Do I understand reorganization correctly?

Bjoern

[From Rick Marken (990622.1410)]

Bjoern Simonsen (990622) --

Where is everybody?

I've been trying to improve my baseball catching simulation.
I hope to have a new version of the program up on the net soon
(at http://home.earthlink.net/~rmarken/ControlDemo/CatchXY.html).
As usual, the hard part is simulating the environment (the
controllee), not the fielder (controller).

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates mailto: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bill Powers (990622.1553 MDT)]

Bjoern Simonsen (990622) --

...the two German scientists Engert and Bonhoeffer's results. They
had shown that when a dendrite from one cell directs a stream of impulses
at another cell, a _new_ receptor site grows out on the other
cell. And this is done in 20 minutes.. I look at this as part of a
visual proof that the number of components is increased when
reorganization takes place. Just as you wrote, Bill. Is Engert and
Bonhoeffer's experiment old news? Do I understand reorganization correctly?

Yes, I saw the article. It's nice to get confirmation, but in this case we
know that _something_ like this has to happen for the brain to change its
organization. I think most researchers agree that new synapses have to
form, old ones have to disappear, and the weightings (sensitivity) of
existing synapses have to be changeable. The short time-scale was
surprising, as you note.

Best,

Bill P.