Deep Blue See Baby

[From Rick Marken (970513.1200 PDT)]

Any thoughts, from a PCT perspective, on the Deep Blue defeat of
Kasparov? There was a discussion about it on PBS' The News Hour
but none of the discussants looked at it from the perspective that
seemed most interesting to me: levels of perceptual control.

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@leonardo.net
http://www.leonardo.net/Marken

[From Stefan Balke (970515.1045 CET)]

Rick Marken (970513.1200 PDT) --

Any thoughts, from a PCT perspective, on the Deep Blue defeat of
Kasparov? There was a discussion about it on PBS' The News Hour
but none of the discussants looked at it from the perspective that
seemed most interesting to me: levels of perceptual control.

That seems interesting: a mega chess computer operates on the basis of
levels of perceptual control, but how? What levels are involved? What is
controlled? What is the function of the memory? Does a computer learn from
its own experiences or from its programming? What do the developers of Deep
Blue say about that? Do they confirm the PCT view? Many questions ....

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@leonardo.net

Rick, what is Life Learning Associates? I visited your page some time ago
(unfortunately I can't run Java, otherwise I would have been there more
often :-) ) and remember that you announced workshops or something like
that. What is the purpose and concept of LLA? Do you want to bring PCT ideas
into practice?

Best, Stefan

[from Jeff Vancouver 970515.1200 EST]

[From Rick Marken (970513.1200 PDT)]

Any thoughts, from a PCT perspective, on the Deep Blue defeat of
Kasparov? There was a discussion about it on PBS' The News Hour
but none of the discussants looked at it from the perspective that
seemed most interesting to me: levels of perceptual control.

From what I have read, there has been no attempt by the IBM team to
emulate human thinking when creating Deep Blue. Thus, this is a
technological feat, not a scientific one. It might have some scientific
implications and it may be that the programmers did use clues from AI
research on humans to create their programs, but the goal is different
(which is really to sell IBMs). This is why I got out of AI, because I
thought that practical applications were all it was about. I was
wrong; there is some good science in AI. Deep Blue is just not it.

Jeff

[From Bruce Gregory (970515.1415 EST)]

Rick Marken (970513.1200 PDT) --

Any thoughts, from a PCT perspective, on the Deep Blue defeat of
Kasparov? There was a discussion about it on PBS' The News Hour
but none of the discussants looked at it from the perspective that
seemed most interesting to me: levels of perceptual control.

Deep Blue is a simulation of a human playing chess. By beating
Kasparov, Deep Blue demonstrated that it is a "good" simulation.
I might conjecture that Deep Blue is guarding its perception
that its king is not threatened by its opponent. This conjecture
would no doubt be borne out by Deep Blue's response to a
"threatening move" on my part. I suspect that Kasparov attempted
to discover which perceptions Deep Blue was controlling, just as
he would with a human opponent. Because we know the approach
taken by Deep Blue's designers, we know that its approach to
controlling its perceptions is very different from Kasparov's
(MCT vs PCT, I'd guess...)

Bruce

[From Rick Marken (970515.1320 PDT)]

Thanks to Stefan Balke (970515.1045 CET), Jeff Vancouver (970515.1200
EST) and Bruce Gregory (970515.1415 EST) for their thoughts about Deep
Blue. I liked all of your comments. I'll just do a brief (I hope)
riff on what I was thinking about when I mentioned an interest in
levels of control in chess.

Playing chess is clearly a control task; the players try to achieve
their goals (the ultimate goal being to place the other player in
checkmate) in the context of physical and social constraints (the latter
being the agreed on rules of the game) and independent disturbances (the
most important being the other player's moves).

It doesn't seem surprising to me that it is possible to design a
computer program that can beat a human chess expert. A computer seems
like the kind of machine that can carry out (far more quickly and
efficiently than a human) the computations that a human carries out when
playing chess. What a human computes (at least what _this_ human
computes) are various measures of the "state of the game" several moves
ahead, along many different possible lines of play. These _imagined_
perceptual measures of
game state are compared to the player's reference states for these
perceptions. I think that when I play I often select moves that are the
"first step" in an imagined sequence of moves that end up producing
(imagined) perceptions of future states of the game
that are most like my references for those perceptions. So I make
the move that ends up with me taking your queen and forking your bishop
and rook in two or three moves, for example.
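
To make this concrete, here is a minimal sketch (in Python, purely illustrative, and nothing to do with Deep Blue's actual code) of selecting the "first step" of the imagined line of play whose imagined perceptions come closest to the reference perceptions. The functions `legal_moves`, `apply_move`, and `perceive` are hypothetical stand-ins for a real chess engine.

```python
def imagined_error(state, references, perceive):
    """Squared distance between the imagined perceptions of a game state
    and the reference values for those perceptions."""
    perceptions = perceive(state)  # e.g. {"material": 2.0, "king_safety": 0.8, ...}
    return sum((perceptions[k] - references[k]) ** 2 for k in references)

def choose_first_move(state, references, legal_moves, apply_move, perceive, depth=3):
    """Imagine short lines of play (greedily extended, ignoring the opponent
    for simplicity) and return the first move of the line whose final
    imagined state best matches the reference perceptions."""
    best_move, best_err = None, float("inf")
    for first in legal_moves(state):
        imagined = apply_move(state, first)
        for _ in range(depth - 1):
            follow_ups = legal_moves(imagined)
            if not follow_ups:
                break
            imagined = min((apply_move(imagined, m) for m in follow_ups),
                           key=lambda s: imagined_error(s, references, perceive))
        err = imagined_error(imagined, references, perceive)
        if err < best_err:
            best_move, best_err = first, err
    return best_move
```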

Chess is interesting because, even with a super fast computer, it
is probably impossible to search _all_ possible follow-ups to every
move. There has to be some strategy for examining the problem space.
And there must also be some choice made regarding the aspects of the
game states (what we would call the "controlled perceptual variables")
that are used as a basis for determining whether to select one move path
or another. Some of the perceptual variables controlled
by these chess programs must be pretty simple: relative value of
pieces lost in exchanges, number of pieces lost in exchanges, number and
value of pieces "threatened", etc. But some of the perceptions
that are controlled are probably pretty complex, like "control of
center" or "piece development", which seem like principle level
perceptions (there are many move contingencies that can produce
game states that match the references for these perceptions).
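
For illustration only, a toy evaluation of the simpler "controlled perceptual variables" mentioned above might look like the sketch below; the piece values and weights are conventional textbook numbers, and `threatened_value` is a hypothetical helper, none of it drawn from Deep Blue itself.

```python
# Conventional piece values (an assumption; not Deep Blue's actual weights).
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(board, side):
    """Own material minus the opponent's, where `board` maps squares
    to (owner, piece_letter) pairs."""
    balance = 0
    for owner, piece in board.values():
        value = PIECE_VALUE.get(piece, 0)
        balance += value if owner == side else -value
    return balance

def simple_game_state_perception(board, side, threatened_value):
    """Weighted combination of two simple perceptual variables:
    material balance and the value of own pieces currently threatened.
    `threatened_value(board, side)` is a hypothetical helper."""
    return material_balance(board, side) - 0.5 * threatened_value(board, side)
```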

I think chess programs control several levels (in the PCT hierarchy
sense) of perception simultaneously; configurations, relationships,
sequences, programs, and principles. I don't think the programs are
controlling system level perceptions, such as the perception of being in
a chess match with another player. I think this might be one thing
that gave Kasparov some of his problems. Perhaps he is used to
dealing with systems whose reference levels for the variables (in
particular, the principle perception) they control come from a
higher level of control -- control of system concepts. Perhaps he
is expecting the machine to alter its references for certain lower level
perceptions of the game (including principles like "control of center")
in the same way people do? If so, I think Kasparov might,
indeed, get better against this non-living control system (Deep Blue)
once he has more experience playing against it.

Stefan asks:

Rick, what is Life Learning Associates?

A desperate attempt to make it look like I have a life outside of
CSGNet;-)

I visited your page some time ago...and remember that you announced
workshops or something like that. What is the purpose and concept
of LLA? Do you want to bring PCT ideas into practice?

Yes. "Life Learning Associates" is going to be my forum for teaching PCT
to the "masses". I don't want to teach "PCT for managers" or "PCT for
teachers" or "PCT for clinicians" or "PCT for engineers" or whatever. I
want to give seminars in basic PCT to people who see some merit in
understanding how people (including themselves) actually work. Whether
those people are managers or workers or educators or
students or retired army officers or whatever doesn't matter.

Of course, at present I see virtually _no_ interest in such seminars.
I once called William Glasser and offered to give such a seminar to
his Reality Therapy people, thinking he might be a good candidate since
Reality Therapy was, at that time, purportedly based on PCT;
but he wasn't interested.

So, until there is a ground swell of interest in learning PCT --
just PCT -- I think I'll keep my day job;-)

Best

Rick

···

--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@leonardo.net
http://www.leonardo.net/Marken

[From Bill Powers (970515.1637 MDT)]

Bruce Gregory (970515.1415 EST)--

Deep Blue is a simulation of a human playing chess. By beating
Kasparov, Deep Blue demonstrated that it is a "good" simulation.
I might conjecture that Deep Blue is guarding its perception
that its king is not threatened by its opponent. This conjecture
would no doubt be borne out by Deep Blue's response to a
"threatening move" on my part. I suspect that Kasparov attempted
to discover which perceptions Deep Blue was controlling, just as
he would with a human opponent. Because we know the approach
taken by Deep Blue's designers, we know that its approach to
controlling its perceptions is very different from Kasparov's
(MCT vs PCT, I'd guess...)

I wonder if the IBM programmers really constructed the DB program this way.
It seems more likely that they programmed the computer to do a lot of
transition probability calculations -- when Kasparov does this in that
situation, his next move is likely to be .... And of course the program does
a lot of looking ahead just according to the rules of the game, looking for
adverse outcomes and winning moves. Part of the problem is figuring out
whether an opponent's move does indeed constitute a threat -- six or 12
moves ahead. And whether the opponent is likely to _know_ it is a threat, I
suppose.

The question is, does Kasparov actually work that way, too? Or does he also
use some sort of analog "sense of the board" perceptions that tell him of
threats just as a sort of black cloud hanging over parts of the playing
field? Perhaps Kasparov, in addition to doing logical thinking and
projections, also runs hundreds of analog processes at the same time, making
up for his human limitations in doing digital processing by controlling a
large number of analog perceptions in parallel. It would be interesting to
know what he says about it.

There's been a lot of hoo-haw about "machine intelligence" in the Deep Blue
tournament. People seem to forget about the programmers. The machine isn't
playing chess; the programmers are, using the machine. If the programmers
simply sat on one side of the board applying all the methods and algorithms
they have thought up, moving the pieces by hand, they would play just like
Deep Blue -- but a lot slower. They'd have to have all their reference
materials handy, and they would have to write down a lot of stuff that they
would otherwise forget, and calculate what their algorithms say should be
done, using pencil and paper. They'd probably need about a year for each
move. The computer does all this a lot faster, of course, but it's not doing
anything the programmers wouldn't do, given the time -- in fact, it does
_exactly_ what they would do, neither more nor less.

When you look at it this way, it's obvious that Deep Blue is NOT a
simulation of Kasparov. It's merely a way of playing chess that the
programmers have invented. It makes up for the extreme inefficiency of
whatever method it is by carrying out the method VERY FAST, with perfect
memory and errorless calculations. Kasparov doesn't need a perfect memory or
instantaneous errorless calculations or complex rules and algorithms,
because he's using a different method -- one that a single unaided person
(of great ability) can carry out all by himself.

Deep Blue is a simulation of its own programmers, not of Kasparov. Maybe
they have come up with a workable way of playing chess -- but it's not a
method you would allow a human being to use at a chess tournament. You need
a big fast computer to play that way. I don't think the other players would
allow it to be used, any more than a football player would be allowed to
show up for the game in a tank.

Best,

Bill P.

[From Bruce Gregory (970516.1020 EDT)]

Bill Powers (970515.1637 MDT)

Deep Blue is a simulation of its own programmers, not of Kasparov.

I fear that we are both getting a little loose with our
terminology. Deep Blue isn't really a simulation of anything. (I
doubt its programmers play chess the way Deep Blue does.) It is,
as you say, a tool that carries out a specific set of
strategies. It works because of its immense speed. It is, I
think, quite similar to an MCT model or any model that uses
inverse kinematics. As you have often pointed out, the success
of the outcome tells us nothing about how Nature works.

Regards,

Bruce

[From Bruce Gregory (970516.1235 EDT)]

Rick Marken (970515.1320 PDT)

I think chess programs control several levels (in the PCT hierarchy
sense) of perception simultaneously; configurations, relationships,
sequences, programs, and principles. I don't think the programs are
controlling system level perceptions, such as the perception of being in
a chess match with another player. I think this is what might be one
thing that gave Kasparov some of his problems. Perhaps he is used to
dealing with systems whose reference levels for the variables (in
particular, the principle perception) they control come from a
higher level of control -- control of system concepts. Perhaps he
is expecting the machine to alter its references for certain lower level
perceptions of the game (including principles like "control of center")
in the same way people do? If so, I think Kasparov might,
indeed, get better against this non-living control system (Deep Blue)
once he has more experience playing against it.

Nice post. I seem to recall Simon (or an associate) estimating that
the number of positions recognized by a chess master is of the
order of 50,000. He noted that this is of the same order as the
number of words in a well-educated person's vocabulary and
suggested that this might not be just a coincidence.

Regards,

Bruce

[Hans Blom, 970520b]

(Bill Powers (970515.1637 MDT))

I wonder if the IBM programmers really constructed the DB program
this way. It seems more likely that they programmed the computer to
do a lot of transition probability calculations -- when Kasparov
does this in that situation, his next move is likely to be ....

No, probabilities are not involved. They are not needed in a game
like chess, where full information is available, nothing is hidden, and
nothing has to be assumed. The basic mechanism is to pre-compute a
sequence of "imagination mode" moves that improves our position as
much as possible. What is critical is a "position evaluator", which
ranks every position by its quality. Gradients are not used, since
we may have to traverse local extrema: frequently a "bad" move (e.g.
giving away a pawn) can lead to overall improvement later on. The
basic mechanism is the construction of a "game tree": given the current
position, which moves can I make? And given the position after a
particular move, which moves can the opponent make? And so on. This tree
has a large branching factor, alas, so we cannot construct all of it.
Usually the full tree is constructed for a certain number of plies
(half-moves) deep only, but to a greater depth where the quality is
high ("promising" positions). Then, assuming that the opponent always
makes the best possible move, we pick the first move that takes us
one step in the direction of the best possible ultimate position.
This is very much like "planning" in an MCT control system, except that
we assume that the "disturbance" is the worst possible (which is a
reasonable assumption when playing against a world champion).
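
As a rough sketch of the mechanism described above (a depth-limited game tree in which the opponent is assumed to make the best possible reply), a plain minimax routine might look like this; `legal_moves`, `apply_move`, and `evaluate` are hypothetical placeholders for a real move generator and position evaluator, and this is only the textbook idea, not Deep Blue's actual search.

```python
def minimax_value(state, depth, my_turn, legal_moves, apply_move, evaluate):
    """Depth-limited minimax: on my turn take the move that maximizes the
    position evaluation; on the opponent's turn assume the move that
    minimizes it (the 'worst possible disturbance')."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    values = (minimax_value(apply_move(state, m), depth - 1, not my_turn,
                            legal_moves, apply_move, evaluate)
              for m in moves)
    return max(values) if my_turn else min(values)

def pick_move(state, depth, legal_moves, apply_move, evaluate):
    """Return the first move on the path toward the best position reachable
    within `depth` plies, assuming best play by the opponent."""
    return max(legal_moves(state),
               key=lambda m: minimax_value(apply_move(state, m), depth - 1,
                                           False, legal_moves, apply_move, evaluate))
```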

Probabilities need to be computed in games like bridge, where not
everything is in open view. Now reasoning must extend over what cards
opponents might have or are most likely to have. In a game like
chess, using probabilities, although it might be a lot faster, would
lead to suboptimal performance. And that is not Deep Blue's goal.

And of course the program does a lot of looking ahead just according
to the rules of the game, looking for adverse outcomes and winning
moves.

Yes. In chess, prediction is -- in theory -- easy because the "world"
is fully deterministic. What is difficult is the choice of which part
of the game tree to build and how to choose the position evaluation
function. Only in endgames with a few pieces left is the position
evaluator easy, because then it is practical to compute the whole game
tree from the current position on.

There's been a lot of hoo-haw about "machine intelligence" in the
Deep Blue tournament. People seem to forget about the programmers.
The machine isn't playing chess; the programmers are, using the
machine. If the programmers simply sat on one side of the board
applying all the methods and algorithms they have thought up, moving
the pieces by hand, they would play just like Deep Blue -- but a lot
slower.

But that is not allowed! Chess games have their time limits! So it
cannot be the programmers who play. They just concoct the high level
methods (the "implicit" or general knowledge) and they let the
machine use those methods to generate a particular move (the
"explicit" knowledge, which is the implicit knowledge applied to a
specific case). There is no way a programmer could, within say the
meager eighty years of his life, pursue even one of Deep Blue's games
in all the details that Deep Blue considered. In fact, for the
programmer Deep Blue is a simulation: give it a number of mechanisms
and see how it behaves. Deep Blue is, in practice, far too complex
for an explicit analysis of the finest details of a game.

One requirement of a control system is usually that it delivers its
decisions in time. Unlike a God, it does not have infinite computing
resources (it cannot construct the full game tree; if that were
possible, the game itself would be solved, because in every position
we would know the best next move), nor an eternity during which to
consider things. This high level requirement cannot be incorporated
into a PCT controller, as far as I can see, but it is in Deep Blue.

Greetings,

Hans