The Frame Problem Revisited

[From Erling Jorgensen (2005.08.26 1900 EDT)]

Jeff Vancouver (2005 Aug 19, 11:00 am)

Hi Jeff,
I have been contemplating the issue you raised of
"the frame problem" from a PCT perspective. I believe
some of Bill Powers' essays in Living Control Systems II
might be pertinent citations for you to consider. I
also believe some of Sperber & Wilson's writings on
"Relevance Theory" might also be pertinent sources for
you to look at.

Sperber & Wilson appear to have devised quite a
sophisticated, and PCT-compatible, treatment of the
pragmatics of cognition and communication. I believe
I detect what might be controlled variables in their
notions of "cognitive effect" and "processing effort",
and in their composite variable of "cognitive utility"
of "trying to maximize the expected effect/effort
ratio." (sperber & Wilson, 1996). These concepts
define "relevance" and form the basis for what they
call Relevance Theory.

Their notion of a search for the relevance of a
communication, which shuts down when a good-enough
solution is found, appears to have the signature of
an asymptotic curve, such as is found in negative
feedback control.
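
To make that signature concrete, here is a minimal sketch
(my own illustration, not anything from Sperber & Wilson),
in Python, of a bare negative feedback loop whose error
shrinks geometrically until a good-enough threshold stops
the search. The gain and threshold values are arbitrary
assumptions:

def settle(reference, perception, gain=0.3, good_enough=0.01):
    """Iterate a bare negative feedback loop; return the trajectory."""
    trajectory = [perception]
    while abs(reference - perception) > good_enough:
        error = reference - perception    # compare input to reference
        perception += gain * error        # output nudges input toward reference
        trajectory.append(perception)
    return trajectory

print(settle(reference=1.0, perception=0.0))
# 0.0, 0.3, 0.51, 0.657, ... an asymptotic approach to 1.0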

This is just an early impression, but I found Sperber
& Wilson's Relevance Theory approach to communication
quite reminiscent of aspects of Martin Taylor's
Layered Protocol Theory. So there may be references
to mine in Martin's work as well, for your project of
a control theory account of the frame problem.

Just as behavior is the control of perception in the
PCT formulation, I believe Martin's LPT is treating
language as the control of belief, in an exactly
analogous way. It, too, seems to involve the
successive-approximation asymptotic curve of negative
feedback processes, in its recursive treatment of the
intentions of dialogic parties. As each party
resolves (i.e., controls adequately) their layered
beliefs about a series of communications, the
conversation winds down (i.e., comes to equilibrium)
and moves on to something else.

Some of the works of Bill's that I think relevant are:

Powers, William T. ([ca.1989]/1992). The epistemology
of the control system model of human behavior (pp.
223-232). In _Living Control Systems II: Selected
Papers of William T. Powers_, Gravel Switch, KY: The
Control Systems Group.

This paper also refers to a relevant section of one of
the original papers by Powers, W.T., Clark, R.K., &
McFarland, R.L. (1960). A general feedback theory of
human behavior: Part I. _Perceptual and Motor Skills_
11(1), August, 71-88.

Powers, William T. ([1959]/1992). Some implications of
feedback theory concerning behavior (pp. 1-10, esp. p. 8).
In _Living Control Systems II: Selected Papers of
William T. Powers_, Gravel Switch, KY: The Control
Systems Group.

Some of the other sources are as follows:

Sperber, D. & Wilson, D. (1996). Fodor's frame problem
and relevance theory (reply to Chiappe & Kukla).
_Behavioral and Brain Sciences_ 19(3), 530-532. Available
at: <http://cogprints.org/2029/00/frame.htm>

Wilson, D. & Sperber, D. (no date given). Relevance
theory. In G. Ward & L. Horn (eds.), _Handbook of
Pragmatics_. Oxford: Blackwell, 607-632. Available at:
<http://www.dan.sperber.com/relevance_theory.htm>

Shanahan, Murray, "the Frame Problem", _The Stanford
Encyclopedia of Philosophy_ (Spring 2004 Edition),
Edward N. Zalta (ed.), Available at:
<http://plato.stanford.edu/archives/spr2004/entries/frame-problem/&gt;

Frame Problem (2005). _ISCID Encyclopedia of Science
and Philosophy_. Available at:
<http://www.iscid.org/encyclopedia/Frame_Problem>

McCarthy, J. & Hayes, P.J. (1969). Some philosophical
problems from the standpoint of Artificial Intelligence.
Available at:
<http://www-formal.stanford.edu/jmc/mcchay69>

I don't have the exact references right in front of me
for Martin's works. You could start with his web site at:
<http://www.mmtaylor.net/PCT/index.html>

Your query has gotten me thinking about the unique
contributions of PCT to understanding something like
the frame problem. I'll send my thoughts on that
matter in a separate post.

All the best,
Erling

[From Erling Jorgensen (2005.08.26 1910 EDT)]

Jeff Vancouver

Jeff,

After I wrote and played around with this essay, I
decided to add headings to help organize the arguments.
I am not sure if they add or subtract from the whole,
but perhaps they can help pique and carry the reader's
interest. This then is my:

"PCT Approach to The Frame Problem."

I. Situating the Frame Problem: "It's not where you
think it is."

As I have tried to understand "the frame problem," from
sources available on the Web, it seems to betray a key
problem in how the issue is formulated. It is there in
how you ask the question, in one of your posts:

Jeff Vancouver (2005.08.23.1145 EST)

That is, somehow our minds know what changes or
differences to ignore (and presumably what to send up
the hierarchy). How do we do this?

This gives a picture of first taking everything in, and
then deciding (in some way) to disregard or inhibit a
portion of it. It is in that subsequent "decision," it
seems to me, that the frame problem is thought to reside.

I think Perceptual Control Theory comes at this issue
very differently. It begins with some elemental property
of the environment that can be sensed -- some form of
environmental energy that can be transformed into some
form of neural signal. These are the "Intensities" of
the proposed PCT Hierarchy. It then starts constructing
various new forms of invariants, with what it has to
work with.

In other words, the "framing decision" happens on the
perceptual side of the loop, as perceptual input
functions (PIFs) are constructed, not on the (later)
output side of the loop as actions are contemplated.
In fact, that very first step of transforming light or
sound (or some other form of environmental energy) into
a rate of neural firing is itself a framing step. By
necessity (that is, by virtue of the architectural
way we are constructed), we only perceive some subset
of what is theoretically possible to perceive. "How is
this determined?" Presumably, it is some kind of
evolutionary process, with different species settling
into different solutions (read, niches) of what they
will perceive and capitalize on in their environment.
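
As a toy illustration of that first framing step (the
logistic form and all the numbers here are invented for
the example, not a claim about real sensory transducers),
one elemental perceptual input function might look like
this:

import math

def intensity_pif(light_energy):
    """Transduce one kind of environmental energy into a
    bounded 'firing rate'."""
    max_rate = 100.0     # impulses per second, say
    half_point = 1.0     # energy at which the rate is half-maximal
    return max_rate / (1.0 + math.exp(-3.0 * (light_energy - half_point)))

# Radio waves, magnetic fields, etc. are never arguments to
# any PIF this organism has, so for it they do not exist.
print(intensity_pif(0.0), intensity_pif(5.0))   # ~4.7 and ~100.0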

II. The Computational Problem: "It is worse than you
think."

What I am now calling the framing decision is taken up
again and again, as each new type of perceptual invariant
is constructed. These are (in developmental order) the
so-called "Sensations", "Configurations", "Transitions",
"Events", "Relationships", "Categories", "Sequences",
"Programs", "Principles", and "System Concepts" of Bill
Powers' heuristic proposal, for how hierarchical
perceptions may be arranged in human control systems.

This suggests that "the computational aspect of the
problem" (and it is a sizable one!) is located somewhat
differently for PCT than for AI modeling or for
philosophical epistemology. It arises at the very
outset, when perceptual input functions are constructed,
not at some later stage, when data structures are updated
or action outputs are contemplated or generated.

It must be admitted that PCT has no magical solution to
this problem of constructing realistic (or robotically
useful) PIFs. This has been one of the bottlenecks for
PCT research. We know that the organizational layout of
a control loop controls robustly; it does what it is
designed to do, and it does it very well. We also know
that the transform functions can be very complicated,
and the loop can still work extremely well.

Examples would be as follows. Tom Bourbon has
experimented with environmental feedback functions
being routed through other controllers before the
results are monitored by the original control systems,
and the perceptions still get controlled. Bill Powers
has experimented with hundreds of elemental control
systems sharing the same environment with limited degrees
of freedom, and still being able to arrive at stable
control. Both Bill Powers and Rick Marken have
demonstrated that the principle of hierarchical control
by systems at several levels at once is not a problem
for properly designed control systems. Richard Kennaway
has experimented with applying the hierarchical control
architecture to problems of a walking robot. These are
all very striking demonstrations of principle for the
PCT approach.
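
In the same spirit as those demonstrations, here is a
bare-bones sketch of my own (not a reconstruction of any
of the cited models) in which a deliberately messy,
nonlinear environmental feedback function still fails to
prevent control:

def run(reference=10.0, steps=200, gain=2.0, dt=0.1):
    """Control a perception through a nonlinear feedback path."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        # a complicated, nonlinear path from output back to input
        perception = 0.5 * output + 0.1 * output**2 / (1.0 + abs(output))
        error = reference - perception
        output += gain * error * dt    # integrating output function
    return perception

print(run())   # ends very near 10.0 despite the messy path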

The hard part in PCT research is constructing
perceptual input functions that make the perceptual world
of the experiment look recognizable and therefore
interesting to those reading the research. Perceptions
such as force, acceleration, velocity, and position do
not look very compelling. Yet they have been combined
into a sophisticated solution to a difficult engineering
problem, such as controlling an inverted pendulum. It is
out of such efforts that more elaborate robot simulations
will be devised. But so far, the results do not look
jazzy enough to be called "intelligent." Nor do they
include many of those layers of perception listed above,
which are proposed to comprise the richness of human
control.

III. Tools For Limiting the Damage: "Let's not make it
harder than we have to."

A. Action Is Not Primal

A slightly different version of the frame problem (from
"knowing what to ignore") is implied in some of the
philosophical literature. There the problem seems to
be one of how to infer the consequences of action, and
what information to base that upon.

Again, I believe PCT takes a very different approach.
PCT does not start with inferences and then decide on
the proper action. PCT builds into its very
architecture a way to monitor the consequences of
action. That is the concept of "perception," as PCT
uses the term. Consequences are not inferred, they
are directly checked out.

But, ah -- say the philosophers -- there are any number
of consequences of action. Conceivably, everything is
related to everything else. How does one limit the
scope of what is to be monitored?

Here again, I believe PCT approaches from the other
side of the equation. PCT does not begin with action.
Action is only something that is needed to get one's
perceptions coming in the right way. Perception is
everything, at least to a PCT first-approximation as
to what matters. Action comes later, and if current
actions do not alter perceptions in the right direction,
different actions are tried.

This issue of "altering perceptions in the right
direction" brings in the PCT second-approximation as
to what matters, namely, reference standards. What
matters is first of all perception, and second of all
a perception tracking the state of its preferred
reference. Once again, action is a derivative necessity,
to try to make those things happen in the right way.

B. Defining What's Relevant

Yes, but -- back to the philosophers -- aren't there
still all kinds of consequences of attempted actions,
which are causing all kinds of differences in the
environment?

Here is part of the robust beauty of PCT, as I see it.
PCT avoids the computational intractability of this
problem (although it has a significant computational
problem of its own; see above) by restricting what
gets monitored to whatever perceptual input functions
the system has so far constructed. If there is no PIF
for a certain kind of difference in the environment,
it is as if that difference did not exist.

Let me give an example. The operation of electrical
motors generates radio-wave interference (according to
my limited understanding of such matters). The vast
majority of the time, that is not a problem to me
whatsoever, simply because my body has not constructed
sensors for picking up radio waves. Such "interference"
does not exist (for my body), because I have nothing
for it to interfere with. My computational problem of
what to monitor is thereby greatly simplified, and I
put my efforts into controlling what I am constructed
to monitor.

In other words, PCT's solution to the frame problem
is not a philosophical one, but a pragmatic one. We
perceive what we perceive. If we have actions that
substantially can affect that, then we (generally) are
able to also control what we perceive. Perceptions
that can be controlled are "what matter" to control
systems, by definition. When that definitional project
is pursued with a vengeance -- as it seems to have been
by life forms on earth for a few billion years now --
then quite elaborate symbiotic and ecological networks
can be built up, as micro-stabilities of perceptual
control are mutually exploited by collaborating
organisms.

A key point here is that it is not necessary to "know"
where such symbiotic stabilities arise. PCT has a
concept of "disturbances," which include all outside
influences -- beneficial or detrimental -- that are
affecting the controlled variable of a control system.
PCT implicitly recognizes McCarthy & Hayes' (1969)
classic formulation of the frame problem, namely,
"the impossibility of naming every conceivable thing
that may go wrong." And PCT rolls that issue into
its equations.

PCT collectively labels such eventualities
"disturbances to controlled variables." And I
believe PCT has an effective way around this potential
framing difficulty by restricting where it monitors
all those things that may go wrong. A control system
simply and solely monitors the state of its controlled
variable, as constructed by its perceptual input
function. That is the only world it knows or needs
to know. All influences, whether disturbing ones or
helpful ones, are summed together and rolled into that
one measure (for each elemental control system).

There is no need to pre-specify or know ahead of time
which preconditions could keep an action from having
its intended result. They are present in the equations
of a control system in the operationalized notion of
the disturbance, and they are checked directly, in
real time. That is to say, the results of any
disturbances -- not the disturbances themselves -- are
monitored directly, by how the PIF is constructed.

Furthermore, the output function of a control system
provides a way to attempt to counteract the effects of
any disturbances. Disturbances in themselves do not
need to be known. It is sufficient to monitor the net
effect of any disturbances, combined with the
counteracting effect of the output function.

This vastly simplifies the problem of framing, and it
avoids the need for "frame axioms" about which aspects
can be presumed to remain the same when something
changes. All of that is monitored in real time, at
one single point in each elemental control system,
namely, the current state of the perceptual input
function.
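
A minimal simulation makes the point (the step
disturbance, the noise, and the gains are arbitrary
choices of mine): the loop never represents the
disturbance, yet its output comes to mirror and cancel
the disturbance's net effect.

import random

def control(reference=5.0, steps=500, gain=5.0, dt=0.05):
    """Only the summed result at the input is ever monitored."""
    output = 0.0
    perception = 0.0
    for t in range(steps):
        disturbance = (3.0 if t > 250 else 0.0) + random.uniform(-0.2, 0.2)
        perception = output + disturbance    # the one point of monitoring
        error = reference - perception
        output += gain * error * dt          # output opposes the net effect
    return perception, output

p, o = control()
print(round(p, 2), round(o, 2))
# p stays near 5.0; o settles near 2.0, canceling the
# unmodeled step disturbance without ever "knowing" it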

IV. Summary: "Some maps are better than others."

One way to sum up a PCT approach to the frame problem
is to adapt Gregory Bateson's formulation of "a
difference that makes a difference." The two uses
of "difference" in this phrase can be mapped onto
PCT's notions of "perceptions" and "references,"
respectively.

The only differences that a control system "knows,"
according to PCT, are whatever ones are embodied in
its perceptual input function. This means that
different hierarchical levels (if there are such)
compute different invariants (i.e., what will not be
perceived as a difference), and thus perceive different
"differences." The notion of a "frame" is absorbed
into the PCT concept of the perceptual input function.
Environmental energy is framed according to whatever
PIFs are constructed. If something does not show up
in a PIF, in effect it is "attached to a frame" that
has not changed because it has not been perceived.
The input side of a control system only deals with
differences it can perceive.
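
A toy example of different levels perceiving different
"differences" (the weights and signal values are invented
for the illustration):

def sensation_pif(intensities):
    """One higher-level invariant: a weighted sum of lower signals."""
    weights = [0.2, 0.3, 0.5]
    return sum(w * s for w, s in zip(weights, intensities))

# Swapping two intensities changes the lowest level's world,
# but the sensation barely moves: at this level it is a
# difference that is hardly perceived as a difference.
print(sensation_pif([0.9, 1.1, 1.0]))   # 1.01
print(sensation_pif([1.1, 0.9, 1.0]))   # 0.99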

There is also the notion, however, of "making a
difference." I believe this maps most cleanly onto
the PCT notion of "reference standards." If there is
no preferred reference for a perception, the perception
may have changed but who cares? It would not "make a
difference" to the organism. Again, the issue is not
a philosophical one, of whether there is a rational
basis for ignoring or preferring a certain difference.
The issue is a pragmatic one, of whether a given
reference might contribute to bringing about a
perception that might "make a difference" to that
organism, with that hierarchy of reference standards
currently operative.

Essentially, I believe PCT provides a better map for
how to navigate through and around "the frame problem."
For one thing, it situates the issue differently, as
one of perception, not one of inference or action.
Framing is inherent in how all perception is
constructed, but it does not thereby render organisms
overloaded and immobile. If they are constructed as
perceptual control systems, they will act to control
their perceptions as they are, no matter how simple
or complex.

There is indeed a computational bottleneck for even
a PCT approach to the frame problem. But here, too,
it is not where AI researchers or epistemological
philosophers have located it. It occurs early, in
the perceptual input side of the design of control
systems, not late in the computational output and
functioning of such systems.

From a research standpoint, there may be algorithms
that could construct sophisticated perceptual input
functions, which could then be controlled according to
the basic design of a negative feedback control loop.
The bigger problem might well be whether such
"perceptions" would be recognizable as such to us
with our human perceptual input functions. We know,
for example, that bees and homing pigeons and migrating
birds control various perceptual aspects of their
environments, but how such perceptions are constructed
is still beyond our technology.

The key point is that each new type of perceptual
input function, whether in other species, or within an
internal human hierarchy, essentially creates a new
perceptual world. If anything, that compounds the
computational problem, whether for AI or for PCT.

I believe the good news, from a PCT standpoint, is
that the basic equations of a control system
organization appear to be very robust. That would
suggest that it should be possible to test them out
in various situations with all sorts of "arbitrary"
perceptual inputs, including those in hierarchical
networks, not just with those deemed perceptually
"realistic."

In the meantime, PCT provides some compelling ways
to re-frame the framing dilemma. For one thing,
disturbances (i.e., things that may go wrong) do not
have to be dealt with independently. They can be
monitored at one point in the system, namely, in the
net impact they have on the controlled variable, as
calculated by a system's perceptual input function.

Similarly, the effects of actions can be monitored
in exactly the same way. Their consequences do not need
to be inferred -- even though there are ways even
within PCT to model an inferential planning process.
Action can simply be checked out directly, in real time,
to see if it is having an effect in the preferred
direction on a controlled variable. If not, something
else can be attempted, even if there is no way to
pre-compute its anticipated effect.

Essentially, PCT redefines what is most relevant to
consider, in determining how living systems function.
While it does not solve all the computational problems,
it does provide decisive placeholders for determining
what is relevant.

Relevance, for a control theory task, is defined in two
ways. First of all, is there something to perceive,
and can it be perceived? If it cannot, it is not (yet)
relevant. Actions may be taken, or contemplated, but
there will be no way of determining their effects,
without some perceptual consequence to monitor.

The second way that relevance is defined is in terms
of reference standards. Is there a preferred reference
for a given perception? If not, then that perception
does not currently matter to the organism. Only
perceptions for which there are corresponding reference
standards constitute "differences that make a difference"
to the organism.

This is a constructivist and pragmatic approach to the
issues raised by "the frame problem." I believe it is
a better approach than that of rationalist philosophy.
It adopts a "satisficing" rather than an "optimizing"
approach to the problem. More importantly, it suggests
that optimizing is not the only game in town. Some
very credible and robust findings can be obtained with
a PCT approach, and so, I believe it definitely deserves
very close consideration.

All the best,
Erling

[From Bill Powers (2005.08.27.0818 MDT)]

Erling Jorgensen (2005.08.26 1910 EDT) --

Thank you for a penetrating, intelligent, and informed essay on the framing problem. You have said everything I would have wanted to say if I had known what the problem was as conceived by philosophers, only you have said it better. I hope Dag will pick this post up and archive it as one of the classical papers on PCT.

I think you should consider publishing this paper in the relevant literature.
You would have to expand on the parts where the reader is expected to be familiar with PCT, of course. I can see the result turning into one of those definitive little books, like _The Meaning of Relativity_.

Best,

Bill P.

[From Rick Marken (2005.08.27.1510)]

Bill Powers (2005.08.27.0818 MDT)--

Erling Jorgensen (2005.08.26 1910 EDT) --

Thank you for a penetrating, intelligent, and informed essay on the framing problem...

I think you should consider publishing this paper in the relevant literature.

I agree. Maybe try submitting to the journal _Philosophical Psychology_ (Taylor & Francis).

Best regards

Rick


---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Jeff Vancouver (2005.08.20.0930 EST)]

[From Erling Jorgensen (2005.08.26 1910 EDT)]

"PCT Approach to The Frame Problem."

This is a nice essay. I particularly liked the part about the
disturbance not needing to be modeled in the system for the system to
control (just the total result of all the influences is monitored). This
is a fundamental aspect of Bill's version that makes it better than
other versions (i.e., the 1st generation computational models about
which the frame problem was raised are all based on control theory, but
not very good ones compared to Bill's IMHO). Anyway, this does simplify
the problem substantially. I also see that you give evolution some due
(as it would seem to deserve). Finally, I like that you recognize
computational issues still occur regarding reorganizing/developing new
PIFs. I might suggest adding Bill's discussion of the likely local
(neurological) character of that process. Together I think these
things move the frame problem to a frame challenge for perceptual
control theorists. This is the essence of the point we will take on
this. My colleague ended up writing 16 pages on it for our chapter. I
suspect we will trim it substantially before the final draft. I will
post what we say about it (warning: it might be a paragraph). I will
also look at the links you gave in your previous post. Given that most
of it is citable, it is likely very useful stuff and what I was hoping for.

Thanks,

Jeff