Brian Smith::The Knowledge Representation Hypothesis

PY: I see a kind of convergent evolution between the assumptions of this hypothesis and those of PCT. I’ve noticed PCT leans toward robotics, with Bill citing the inspiration for PCT as an adaptation of Wiener. The field of AI has also been gently nudged over the years toward accepting the big-picture computational reality behind modeling intelligent behavior. Here are some snippets to look at. Note where I’ve marked TCV.

Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantical attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge. -Brian
Knowledge representation has been called the most central problem in artificial intelligence. It is basically the glue that binds much of AI together.
In the late 1950s, Newell and Simon developed several programs to test the intelligent behavior resulting from heuristic search. The Logic Theorist, developed with J. C. Shaw based on the concepts of Newell and Simon, proved theorems using logic, an elementary method of knowledge representation. Later, in the General Problem Solver (GPS), Newell and Simon continued their efforts to find general principles of intelligent problem solving. GPS was able to solve problems formulated as state space search, using means-ends analysis to conduct the search. These earlier methods of problem solving are termed weak problem-solving methods.
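To make the "weak method" idea concrete, here is a minimal sketch of a problem formulated as state space search: states are sets of facts, operators carry preconditions, add-lists, and delete-lists, and a breadth-first search looks for a sequence of operators that reaches the goal. The toy blocks-world domain and every identifier below are illustrative assumptions, not code from the Logic Theorist or GPS.

```python
# A minimal sketch of weak-method problem solving as state space search.
# States are frozensets of facts; operators have preconditions, adds, deletes.
# The blocks-world domain here is an illustrative assumption.

from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconds: frozenset   # facts that must hold before the operator applies
    adds: frozenset       # facts made true by applying it
    deletes: frozenset    # facts made false by applying it

def search(start, goal, operators):
    """Breadth-first search from `start` until every goal fact holds."""
    frontier = deque([(frozenset(start), [])])
    visited = {frozenset(start)}
    while frontier:
        state, plan = frontier.popleft()
        if frozenset(goal) <= state:
            return plan                      # sequence of operator names
        for op in operators:
            if op.preconds <= state:
                nxt = (state - op.deletes) | op.adds
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [op.name]))
    return None

# Toy example: pick up block A from the table and stack it on B.
ops = [
    Operator("pickup(A)",
             frozenset({"on(A,table)", "clear(A)", "handempty"}),
             frozenset({"holding(A)"}),
             frozenset({"on(A,table)", "handempty"})),
    Operator("stack(A,B)",
             frozenset({"holding(A)", "clear(B)"}),
             frozenset({"on(A,B)", "handempty", "clear(A)"}),
             frozenset({"holding(A)", "clear(B)"})),
]
print(search({"on(A,table)", "clear(A)", "clear(B)", "handempty"},
             {"on(A,B)"}, ops))   # -> ['pickup(A)', 'stack(A,B)']
```

The weakness of the method is visible in the code: nothing about the search is specific to blocks; all domain knowledge lives in the operator definitions.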

The second phase (1980s) of developments in the area of knowledge representation and problem solving used strong methods for problem solving. In strong methods, problem solvers make certain assumptions about the nature of intelligent systems. These assumptions were formulated by Brian Smith. The important theme of his hypothesis included the assumption that the knowledge would be represented propositionally. Propositional representation means representing the knowledge explicitly, so that it can be observed and accessed by an outside observer. TCV
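Here is a minimal sketch of what representing knowledge propositionally and explicitly can look like: facts and if-then rules are stored as structures an outside observer can read directly, and a simple forward-chaining loop derives new facts from them. The toy facts and rules are illustrative assumptions, not anything Smith specifies.

```python
# Explicit propositional knowledge: the facts and rules below are readable
# by an outside observer, and the same structures drive the inference.
# The toy domain is an illustrative assumption.

facts = {"bird(tweety)", "small(tweety)"}

# Each rule: (tuple of premise facts, conclusion fact).
rules = [
    (("bird(tweety)",), "has_wings(tweety)"),
    (("bird(tweety)", "small(tweety)"), "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Fire rules whose premises all hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['bird(tweety)', 'can_fly(tweety)', 'has_wings(tweety)', 'small(tweety)']
```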
The last theme of knowledge representation is described as agent-based problem solving. In this approach, problem solving is considered distributed, with different agents performing different tasks in the domain of their context. The problem-solving task is viewed as work done by individual agents with little or no coordination among them. For example, in interactive game playing, an agent would address a local issue, e.g. defending against a move, without any general concern for the overall problem.
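A small sketch of this agent-based view, under assumed names: each agent watches only its local slice of the situation and reacts to it, with no shared plan or coordinator. The game situation and agent behaviors are illustrative assumptions.

```python
# Each agent handles one local issue; no global coordination.
# The situation keys, agent names, and rules are illustrative assumptions.

situation = {"opponent_threatens_king": True, "open_file_available": True}

def defense_agent(sit):
    # Addresses only the local issue of defending against a threat.
    if sit.get("opponent_threatens_king"):
        return "block the check"
    return None

def offense_agent(sit):
    # Addresses only the local issue of exploiting an open file.
    if sit.get("open_file_available"):
        return "move rook to the open file"
    return None

# Each agent proposes an action independently of the others.
for agent in (defense_agent, offense_agent):
    proposal = agent(situation)
    if proposal:
        print(f"{agent.__name__}: {proposal}")
```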

The need for knowledge representation was felt as early as the idea of developing intelligent systems itself. Readers will by now be well aware that intelligence requires the possession of knowledge, and that we acquire knowledge by various means and store it in memory using some representation technique. Knowledge representation, then, is simply the capturing of critical aspects of intelligent activity for use on a computer. Put another way,
knowledge representation is one of the many critical aspects required for making a computer behave intelligently.

Nowhere is the state/content confusion clearer than in Knowledge Representation, where it is assumed that knowledge is content, or data, in the system. This is the essence of Brian Smith’s Knowledge Representation Hypothesis. This says that a system knows that p if and only if it contains a symbol structure that means p to us and that causes the system to behave in appropriate ways. The symbol structures associated with knowledge are at once meaningful to us and causally efficacious for the system.
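A minimal sketch of that two-sided role, under assumed names: the stored entries below are strings we read as propositions ("if there is an obstacle ahead, turn left"), and they are the very structures the program consults to produce its behavior. The robot domain and identifiers are illustrative assumptions, not Smith's own example.

```python
# The Knowledge Representation Hypothesis in miniature: the same symbol
# structures are (a) meaningful to us as propositions and (b) causally
# efficacious, because behavior is produced by consulting them directly.
# The domain and names are illustrative assumptions.

knowledge = {
    "obstacle_ahead": "turn_left",   # read as: "if an obstacle is ahead, turn left"
    "path_clear": "move_forward",    # read as: "if the path is clear, move forward"
}

def act(percept: str) -> str:
    # Behavior comes from looking the percept up in the same structure
    # we interpret, from outside, as the system's knowledge.
    return knowledge.get(percept, "do_nothing")

print(act("obstacle_ahead"))   # turn_left
print(act("path_clear"))       # move_forward
```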