Brains & Analog Computers

[From Fred Nickols (2001.10.06.0822)] --

Rick Marken (2001.10.04.1115) --

Bruce Gregory (2001.10.04.1341)

> I suspect [the brain] is more like an analog computer that must
> be rewired to carry out each new program.

I agree. In fact, PCT views the brain as an analog computer (see the
chapter in B:CP on "Premises"). But the programs that are running on an
analog computer (a particular "wiring" implementation thereof) can be
easily distinguished from the hardware (wiring) itself. In this sense
programs (or, more generally, computations) are to analog hardware what
the mind is to the brain.

I don't think it is the case that an analog computer must be rewired to
carry out each new program (depending, of course, on what you mean by program).
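
To make the point concrete, here is a minimal sketch (my own, in Python, not any actual analog machine's design): the loop below plays the role of one fixed integrator-and-summer "patch" solving dx/dt = -k*x + u, and the parameters k and u play the role of potentiometer settings. Different settings give different problem instances without touching the wiring; only a different topology would count as rewiring.

```python
def simulate_first_order(k, u, x0=0.0, dt=0.001, t_end=5.0):
    """Euler-integrate dx/dt = -k*x + u: one fixed 'patch' of
    integrator, summer, and coefficient pot."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-k * x + u)
    return x

# Same wiring, different pot settings: two different problems.
lag_fast = simulate_first_order(k=5.0, u=1.0)  # settles near u/k = 0.2
lag_slow = simulate_first_order(k=0.5, u=1.0)  # still heading toward u/k = 2.0
```

The "hardware" (the loop) never changes between the two runs; only the settings fed into it do.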

My first computer was the MK1A Fire Control computer, an electro-mechanical
computer used to solve the fire control problem (i.e., the aiming of guns
to shoot at moving or stationary surface or air targets). It was
hard-wired, to be sure, but it could solve a wide range of problems in
three basic classes: surface targets (e.g., other ships), air targets
(e.g., airplanes) and land targets (as in the case of shore
bombardment). It accepted various inputs and produced a set of outputs
known as "gun orders." Inputs such as the position of the target (range,
bearing, and elevation) were sometimes continuous, as when they came from the
radar atop the fire control director, and sometimes discrete, as when the
coordinates of a land target were initially set or when air temperature and
humidity were entered. Other inputs, such as the roll and pitch of the ship,
ship's course and speed, true bearings, and so on, were continuous as well.
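
For readers who want a feel for the kind of computation involved, here is a hedged sketch (my own drastic simplification in Python, not the MK1A's actual mechanism, which was electro-mechanical and far richer): a constant-velocity intercept solution that turns a target's position and motion into a train bearing and time of flight.

```python
import math

def gun_order(target_xy, target_vel, shell_speed):
    """Constant-velocity intercept: solve |p + v*t| = s*t for the
    smallest positive time of flight t, then return
    (train_bearing_radians, t). Returns None if no intercept exists."""
    px, py = target_xy
    vx, vy = target_vel
    # Quadratic in t: (|v|^2 - s^2) t^2 + 2(p.v) t + |p|^2 = 0
    a = vx * vx + vy * vy - shell_speed ** 2
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-12:
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None  # target too fast to intercept
        roots = [(-b - math.sqrt(disc)) / (2.0 * a),
                 (-b + math.sqrt(disc)) / (2.0 * a)]
        positive = [r for r in roots if r > 0]
        if not positive:
            return None
        t = min(positive)
    ix, iy = px + vx * t, py + vy * t  # predicted intercept point
    return math.atan2(iy, ix), t
```

A crossing target 1000 yards out, moving at 100 yd/s against a 500 yd/s shell, yields a lead angle of about 0.2 radians and a two-second flight; the continuous inputs Fred describes would keep re-solving this as the target moved.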

So, I think the old MK1A was restricted to a few classes of problems but
not to a single problem. No two targets ever behaved alike in my
experience, especially the ones that were manned. Rewiring the MK1A to
accommodate each and every new specific problem was clearly out of the
question.

That said, I'm not sure about the parallel Rick is drawing (i.e., programs
are to analog hardware what the mind is to the brain). I've always thought
the mind was a construct that included self-awareness. To me, the mind is
the seat of "me." Does that mean I'm just a collection of
"programs"? Hmm. Maybe that's exactly what I am. Oh well, this is all
too weighty for me.

Regards,

Fred Nickols
The Distance Consulting Company
"Assistance at A Distance"
http://home.att.net/~nickols/distance.htm
nickols@att.net
(609) 490-0095

[From Fred Nickols (2001.10.06.0822)] --

My first computer was the MK1A Fire Control computer, an electro-mechanical
computer used to solve the fire control problem (i.e., the aiming of guns
to shoot at moving or stationary surface or air targets).

I'd be interested in hearing more about this. Do you still have manuals,
etc.? Who built the system? What years?

I've always thought
the mind was a construct that included self-awareness. To me, the mind is
the seat of "me." [ Or am I ] just a collection of "programs"?

When something is described as "just" something, isn't the intent usually to
minimize the significance of whatever is being described? In the case of
programs, I don't think we know at present "just" what programs are or will
ultimately be capable of doing. So I don't think attempting to consider the
mind or the self in terms of programming necessarily involves adopting an
attitude that detracts from the worth of human beings.

Best
  Bill Williams

···


[From Rick Marken (2001.10.07.0915)]

Fred Nickols (2001.10.06.0822) --

That said, I'm not sure about the parallel Rick is drawing (i.e., programs
are to analog hardware what the mind is to the brain). I've always thought
the mind was a construct that included self-awareness.

I think the parallel was drawn (by Bill Powers originally) to show that
there is nothing supernatural about the distinction between the material
brain (analogous to computer hardware) and the non-material mental
processes carried out by that brain (analogous to computer software).

Best regards

Rick

···

--
Richard S. Marken
MindReadings.com
marken@mindreadings.com
310 474-0313

[From Bruce Gregory (2001.10.07.2055)]

Fred Nickols (2001.10.06.0822)

So, I think the old MK1A was restricted to a few classes of problems but
not to a single problem. No two targets ever behaved alike in my
experience, especially the ones that were manned. Rewiring the MK1A to
accommodate each and every new specific problem was clearly out of the
question.

I was thinking more of machines that must be rewired to solve a different
_class_ of problems, unlike digital computers, which can, in principle,
solve any class of problem without hardware modification.
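
The point can be illustrated with a toy sketch (mine, not from the thread): a single fixed "machine," here one interpreter loop, handles programs from quite different classes simply by being handed different data, with no change to the loop itself.

```python
def run(program):
    """A minimal stack machine. The 'hardware' (this loop) is fixed;
    the program is just data fed to it."""
    stack = []
    for op in program:
        if isinstance(op, (int, float)):
            stack.append(op)        # push a literal
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":
            stack.append(stack[-1])  # duplicate top of stack
    return stack[-1]

arithmetic = [2, 3, "+", 4, "*"]  # computes (2 + 3) * 4
square     = [7, "dup", "*"]      # computes 7 * 7
```

Changing what the machine does means changing the program, never the interpreter, which is the contrast with a patch-wired analog machine.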

Bruce Gregory is an expatriate.
He lives with the American
poet and painter Gray Jacobik
and their canine and feline familiars in
Pomfret, Connecticut.

[From Bruce Nevin (2001.10.08 15:42 EDT)]

The relation between logic and how people think involves a lot of unresolved issues. Are Boole's 'laws of thought' rather 'laws for thought'? Are the fallacies (or some of them anyway, such as ad verecundiam, accepting something on authority) really logical fallacies, or do they refer rather to the processes by which we select among candidate premises from which we might subsequently draw our conclusions? Such processes may be faulted for being fallible -- especially as on that fallibility the security of all our logic rests -- but can they be faulted for failing to be logical?

      [Bohr] never trusted a purely formal or mathematical argument.
      "No, no," he would say, "you are not thinking, you are just
      being logical."
        -- Howard Margolis (1993), _Paradigms and Barriers_,
           Chicago: University of Chicago Press, p. 201

[From Bill Powers (2001.10.08.1448 MDT)]

Bruce Nevin (2001.10.08 15:42 EDT)--

The relation between logic and how people think involves a lot of
unresolved issues. Are Boole's 'laws of thought' rather 'laws for thought'?
Are the fallacies (or some of them anyway, such as ad verecundiam,
accepting something on authority) really logical fallacies, or do they refer
rather to the processes by which we select among candidate premises from
which we might subsequently draw our conclusions? Such processes may be
faulted for being fallible -- especially as on that fallibility the security
of all our logic rests -- but can they be faulted for failing to be logical?

Another way I have described level 9, the "logic level" for short, is as
the level where symbols are handled according to rules, which would include
not only Boolean logic but any other form of logic, as well as grammar,
mathematics, and programs (in the computer sense). There is nothing about
this proposal that says any particular form of rule-driven symbolic
processes must exist. The sole concern for PCT is whether such a level
exists, regardless of what it is used for.
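
A minimal sketch of what "symbols handled according to rules" could look like (my illustration, not Powers's model): a forward-chaining loop that applies if-then rules to a set of symbols, and is entirely indifferent to whether the rules themselves are good logic.

```python
def apply_rules(facts, rules):
    """Forward-chain: repeatedly apply if-then rules to a set of
    symbols until nothing new is derivable. The machinery does not
    care whether a rule is sound; it only applies rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (frozenset({"rain", "outside"}), "wet"),  # a sensible rule
    (frozenset({"wet"}), "cold"),             # a dubious rule, applied anyway
]
derived = apply_rules({"rain", "outside"}, rules)
```

The same loop would run grammar rules, arithmetic rules, or bad rules with equal willingness, which is the sense in which the level's existence is separate from what it is used for.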

There is, for example, a rule that says "Post hoc, ergo propter hoc",
meaning "After this, therefore because of this." The objection to this
rule is not that it isn't a valid example of a rule that could be applied
at the ninth level, but that there are easily-discovered counterexamples
which show that this rule is not reliable. The objection is practical, not
theoretical: operating by inconsistent logic, no less than by consistent
logic, still requires a level where rules are applied to symbols.
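
The practical-not-theoretical point can be shown directly (a toy sketch of my own): the post-hoc rule runs perfectly well *as a rule*; what fails is the reliability of the conclusions it licenses.

```python
def post_hoc(events):
    """Apply the rule 'B followed A, therefore A caused B' to an
    ordered list of events. A perfectly runnable ninth-level rule;
    its reliability is a separate, practical question."""
    return [(events[i], events[i + 1]) for i in range(len(events) - 1)]

# The rule dutifully infers that the rooster's crow causes the sunrise;
# the easily-discovered counterexample is a morning with a silent rooster.
inferred = post_hoc(["rooster crows", "sun rises"])
```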

Best,

Bill P.