[From Oded Maler 920814]
(Bill Powers (920813.1400)) -
I agree with most of what you say. There is a sub-sub-branch of AI
called "reactive planning" that deals with planning in a non-static
world. It seems that *in principle* such systems might pass your
legitimacy tests even though their building blocks are abstract
actions. Of course, no one yet knows how to implement this for
large-scale problems, since servoing in such an abstract space is
not understood ("If you can't beat it - smash it"..), as we agreed
long ago while discussing "qualitative" control.
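To make concrete what I mean by "servoing", here is a minimal sketch
of the classical, well-understood case: a proportional control loop
keeping a scalar perception near a reference. The names, gain, and
toy one-variable world are all invented for illustration; the open
question is what plays the role of the continuous error signal when
percepts and actions are abstract symbols.

    # Minimal sketch of classical servoing: a proportional loop that
    # keeps a scalar perception near a reference value. All names and
    # constants here are illustrative only.

    def servo(reference, perceive, act, gain=0.2, steps=50):
        """Repeatedly act so as to reduce reference-minus-percept error."""
        for _ in range(steps):
            error = reference - perceive()  # compare percept to goal
            act(gain * error)               # output proportional to error

    # Toy one-variable "world" that the actions push around.
    state = {"x": 0.0}
    servo(reference=10.0,
          perceive=lambda: state["x"],
          act=lambda u: state.__setitem__("x", state["x"] + u))
    print(state["x"])  # approaches 10.0

For "pick up the block" or "go to room 3" there is no such obvious
arithmetic on the error, which is exactly the unsolved part.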
I still want to reiterate the rhetorical question: is it for some
reason harder in a non-grounded abstract space than in an abstract
space where the percepts and actions are realized by lower-level
ones? And if so, how?
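For contrast, here is a toy sketch of the grounded case in the PCT
spirit: a two-level hierarchy in which the higher loop's output is
not a motor act but the reference for the lower loop, and the higher
percept is a function of the lower one. Only that two-level structure
is taken from PCT; the names, gains, and plant are made up.

    # Toy two-level control hierarchy: the higher loop's output sets
    # the reference of the lower loop rather than acting on the world.
    # Names, gains, and the one-variable plant are invented.

    state = {"x": 0.0}                 # low-level world variable
    low_reference = 0.0                # set by the higher loop

    def step(high_reference, high_gain=0.05, low_gain=0.2):
        global low_reference
        # Higher loop: its percept (x squared) is built from the
        # lower percept; its action is to move the lower reference.
        high_error = high_reference - state["x"] ** 2
        low_reference += high_gain * high_error
        # Lower loop: acts on the world to track its moving reference.
        low_error = low_reference - state["x"]
        state["x"] += low_gain * low_error

    for _ in range(500):
        step(high_reference=9.0)
    print(state["x"])                  # tends toward 3.0, where x**2 == 9

Here both levels close their loops through numerical errors grounded
in the world; in the non-grounded case it is unclear what replaces
that arithmetic, which is why the question seems worth asking.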
(Rick Marken (920812)) -
Ok, you make your position very clear. So the next time something
like Beer's work (or similar) is discussed, your claim should not be
that they are wrong, but that they are working on problems that are
uninteresting (to you).
Perhaps my knowledge of PCT is still so preliminary that I would be
less determined, and not so sure which behavior is generated for the
"right underlying reasons" and which is not. It is also not at all
self-evident that a world in which all people know what control is
would be a better one - but anyway, I'm not interested in ideology
(unless I'm in a historical museum).
Best regards
--Oded