Brain Research All Wrong

Below is an e-mail that came to me at work. Ordinarily, I don't pass along
such messages, let alone to the CSG list. However, given the interest of
this list in behavior, perception, and hierarchical control structures, I
thought it might be of some interest to at least a few list members...

Fred Nickols
nickols@worldnet.att.net

···

------------------------------------------------------------

The attached press release from INFORMS caught my attention
--what do you think the consequences might be?

Walter Derzko
Director Brain Space
(formerly the Idea Lab at
the Design Exchange)
Toronto
(416) 588-1122
wderzko@pathcom.com

================================================

Brain Space #98-336: Prevailing brain theory challenged

The Institute for Operations Research and the Management
Sciences (INFORMS)

For Immediate Release

BRAIN SURGEON NOT REQUIRED--Expert in Artificial
Intelligence Challenges Dominant Theory About the
Human Brain

LINTHICUM, MD, May 27 - Asim Roy is neither a brain surgeon
nor a neuroscientist. But the Arizona State University Professor of
Operations Research and Information Systems maintains that
those developing the next generation of artificial intelligence are
ignoring an alarming fact: Their basic presumption, modeled on
the human brain, is false.

"The next scientific revolution that will introduce learning robots,
seeing machines, and talking machines will be based on scientists'
understanding of how the human brain learns," he explains. "The
trouble is, a wide body of science happens to be wrong. And unless
scientists face the facts, progress on these marvelous inventions
will slow to a crawl."

At stake, he maintains, is how powerful and independent the next
generation of robots will be. Prevailing theory will leave us with
robots that require "babysitting" - an inordinately large amount
of human input to accomplish their tasks. What's needed, he
says, are machines that are autonomous.

Drawing on an unusual background for research in this field -
operations research, which is better known for developing Wall
Street trading models and airlines' yield management systems
- Dr. Roy has created mathematical models to fill the gap.

Ongoing Debate

Dr. Roy started questioning the classical theories of brain-like
learning two years ago. The questions turned into a crusade.
Since then he has argued with scholars in imposing
subspecialties like cognitive science, computational
neuroscience, and artificial neural networks.

The exchanges have taken place over the Internet and in two
open debates, first at the International Conference on Neural
Networks (ICNN'97) in Houston in April 1997 and earlier
this month at the World Congress on Computational
Intelligence (WCCI'98) in Anchorage, Alaska.

Only recently has Dr. Roy seen other scientists - most
notably Professor Christoph von der Malsburg of Ruhr-
University in Germany, a pioneer in the field - acknowledge
his position.

A classic, flawed theory

Prevailing thought draws on the teaching of Donald Hebb
of McGill University, Montreal, a pioneer theoretician who
postulated a mechanism by which the brain learns to
distinguish objects and signals, add, and understand
grammar. According to Hebb, learning involves adjusting
the "strength of connections" between cells or neurons
in groups of cells known as neural networks.

Hebb's followers extended his idea about brain-like
intelligent learning systems with two concepts:

1. Autonomous systems.

Each neuron is a self-adjusting cell that changes the
strengths of its connections to other neurons when learning
so that it makes fewer errors when it repeats a task. These
neurons are viewed as "autonomous or self-learning
systems." Scientists used this idea to derive "local learning
laws," or mathematical formulas believed to be used by neurons.

2. Instantaneous learning.

These scientists also presumed that learning in the brain is
"instantaneous" - as soon as something to be learned is
presented, the appropriate brain cells use their "local learning
laws" to make instant adjustments to the strength of their
connections to other neurons. When learning is complete,
the brain discards its memory of the learning example.

This theory of "memoryless learning" excited scientists
and engineers worldwide because it allowed them to envision
simple brain-like learning machines that wouldn't need huge
amounts of computerized memory.
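The two concepts above can be made concrete with a toy sketch. This is
not Hebb's or Roy's actual formulation - just a minimal illustration of a
"local learning law" in the Hebbian spirit, applied online: each weight
update uses only information local to that connection (its input and the
unit's output), and each training example is discarded immediately after
the update, which is what the press release calls "memoryless" learning.

```python
# Hypothetical illustration of a Hebbian-style "local learning law".
# The rule and numbers are invented for this sketch, not from the article.

def hebbian_step(weights, x, learning_rate=0.1):
    """One memoryless update: strengthen each weight in proportion to
    the coincidence of its input x[i] and the unit's output y (local
    information only - no global error signal, no stored examples)."""
    y = sum(w * xi for w, xi in zip(weights, x))  # unit's activation
    return [w + learning_rate * y * xi for w, xi in zip(weights, x)]

# "Instantaneous" learning: update on each example as it arrives,
# then throw the example away.
weights = [0.5, 0.5]
for example in [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]:
    weights = hebbian_step(weights, example)  # example then discarded
```

After this loop the weight on the more frequently active input has grown
the most, which is the intuition behind "cells that fire together wire
together" - and, as the article goes on to argue, this kind of rule alone
says nothing about where the network design or parameters come from.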

Stumbling block

The major stumbling block for future technology, says Dr.
Roy, is that none of these learning methods reproduces
the external characteristics of the human brain, principally
its independent way of learning. Therefore, methods based
on these classical ideas require constant intervention by
engineers and computer scientists - providing network
designs, setting appropriate parameters correctly, and so
on - to make them work. This drawback is severe, he
maintains.

Instead, says Dr. Roy, scientists must admit that their
constructs diverge from the human brain and return to
the original model. Drawing on the way the brain actually
works, he has used operations research to create
autonomous learning algorithms that are more human-like
because they don't require ongoing input.

Dr. Roy is confident his challenge will prevail. "The best
model those who study artificial intelligence have is still
the human brain," he says. "Up until now, we've done an
adequate job copying its workings. We have to do a
better job."

Dr. Asim Roy is the author or co-author of numerous
articles on artificial intelligence, including "A Neural
Network Learning Theory and a Polynomial Time RBF
Algorithm" and "Iterative Generation of Higher-Order
Nets in Polynomial Time Using Linear Programming,"
both in IEEE Transactions on Neural Networks; "An
Algorithm to Generate Radial Basis Function (RBF)-like
Nets for Classification Problems" and "A Polynomial
Time Algorithm for the Construction and Training of a
Class of Multilayer Perceptrons," both in Neural
Networks; "A Polynomial Time Algorithm for Generating
Neural Networks for Pattern Classification - Its Stability
Properties and Some Test Results," Neural Computation;
and "Pattern Classification Using Linear Programming,"
ORSA Journal on Computing.

                                      ***
Contact Barry List, PR Director, (410) 691-7852;
(800) 4INFORMs; (410) 358-7162h; barry.list@informs.org

The Institute for Operations Research and the Management
Sciences (INFORMS) is an international scientific society
with 12,000 members, including Nobel Prize laureates,
dedicated to applying scientific methods to help improve
decision-making, management, and operations. Members
of INFORMS work primarily in business, government, and
academia. They are represented in fields as diverse as
airlines, health care, law enforcement, the military, the
stock market, and telecommunications.

[From Bruce Abbott (980604.1825 EST)]

Fred Nickols quoting article about Roy --

Hebb's followers extended his idea about brain-like
intelligent learning systems with two concepts:

1. Autonomous systems.

Each neuron is a self-adjusting cell that changes the
strengths of its connections to other neurons when learning
so that it makes fewer errors when it repeats a task. These
neurons are viewed as "autonomous or self-learning
systems." Scientists used this idea to derive "local learning
laws," or mathematical formulas believed to be used by neurons.

I'd be interested to know in whose field this view represents "the dominant
theory about the human brain." It isn't in psychology. AI, perhaps?

As written, the quoted paragraph raises several vexing questions: How does
a neuron "repeat a task?" On what basis are the "strengths of its
connections to other neurons" changed? How does a neuron learn? For a
neuron, what is an error? The whole sentence appears to confuse the
neuron's behavior with that of the system of which it is a part.

To be fair, the article reads like copy from a magazine rather than a
scientific report; I'd guess that it was written by some science writer, who
has taken literary license with the proposal, rather than by any of the
researchers themselves. It would probably be a mistake to take the
description literally.

Regards,

Bruce

[From Bruce Nevin (980605.1414 EDT)]

Bruce Abbott (980604.1825 EST) --

Hebb's followers extended his idea about brain-like
intelligent learning systems with two concepts:

[...]

Each neuron is a self-adjusting cell that changes the
strengths of its connections to other neurons when learning
so that it makes fewer errors when it repeats a task. These
neurons are viewed as "autonomous or self-learning
systems." Scientists used this idea to derive "local learning
laws," or mathematical formulas believed to be used by neurons.

I'd be interested to know in whose field this view represents "the dominant
theory about the human brain." It isn't in psychology. AI, perhaps?

Sounds to me like AI types taking connectionism literally ("this looks to
us like how the brain must work because it sounds to us like what Hebb was
talking about and we can program it and of course the brain is a computer")
and a journalist believing them.

  Bruce Nevin