Example of how non-quantitative data might be modeled

Create a graph database by using a parser to decompose a collection of input sentences and represent their word dependencies as nodes and edges. This is a toy system, so limit input to simple assertions. Include classifier vocabulary (ISA terms, in the traditional knowledge-base sense).
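A minimal sketch of how such a store might look, in Python, assuming hand-built dependency triples rather than any particular parser or graph-database product; the names DepGraph and add_dependency and the 'isa' relation label are illustrative, not part of the proposal.

    # A toy dependency-graph store. Edge triples are illustrative; a real
    # system would obtain them from a dependency parser.
    class DepGraph:
        def __init__(self):
            self.edges = set()  # (head, relation, dependent) triples

        def add_dependency(self, head, relation, dependent):
            """Record one word dependency as a labeled edge."""
            self.edges.add((head, relation, dependent))

        def isa_set(self, term):
            """Classifier vocabulary: the classes 'term' is asserted to fall under."""
            return {dep for (head, rel, dep) in self.edges
                    if head == term and rel == "isa"}

    # Hand-built parse of the simple assertion "a sparrow is a bird":
    kb = DepGraph()
    kb.add_dependency("sparrow", "isa", "bird")
    print(kb.isa_set("sparrow"))  # {'bird'}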

As each dependency subtree is input, a program compares it to the subtree for a ‘belief’ that it holds. The two contradict if one contains a negation, a denial, a classifier word with a non-intersecting argument set, or the like, and the other does not. In that case a second program blocks entry of the conflicting assertion into the database and asserts the ‘belief’ dependency as one or another of the sentences that can express it, i.e. prints it on the screen or produces it by text-to-speech. (Empirically, we know that a given dependency or subtree of dependencies can correspond to a set of superficially different sentences.)
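Continuing the illustrative DepGraph above, one way the comparison might be written; the 'neg' relation label and the disjoint_classes set are assumptions introduced here, since the text does not fix how negation or non-intersecting classifiers would be marked.

    def has_negation(tree):
        """True if the subtree carries a negation or denial edge."""
        return any(rel == "neg" for (_, rel, _) in tree.edges)

    def core(tree):
        """The subtree's claim with negation edges stripped out."""
        return {(h, r, d) for (h, r, d) in tree.edges if r != "neg"}

    def contradicts(new, belief, disjoint_classes):
        """Toy test: same core claim but only one side negates it, or the
        same head is classified under classes known not to intersect."""
        if core(new) == core(belief) and has_negation(new) != has_negation(belief):
            return True
        for (h1, r1, d1) in new.edges:
            for (h2, r2, d2) in belief.edges:
                if (r1 == r2 == "isa" and h1 == h2
                        and frozenset((d1, d2)) in disjoint_classes):
                    return True
        return False

    # "a sparrow is not a bird" against the stored belief "a sparrow is a bird":
    denial = DepGraph()
    denial.add_dependency("sparrow", "isa", "bird")
    denial.add_dependency("is", "neg", "not")  # crude negation marker
    belief = DepGraph()
    belief.add_dependency("sparrow", "isa", "bird")
    print(contradicts(denial, belief, set()))  # True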

I do not propose to build this model. The exercise is only to show that PCT modeling is possible without variable quantities for the perceptual signal p, the reference signal r, and so on, when the perceptual variables are discrete.

The program that compares word dependencies produces a perceptual signal of contradiction to a belief. When that signal enters the input function of the second program, that program controls it with a reference value of zero: it sends a reference signal to a routine that blocks acceptance of the current input, with a branch to a routine that utters the belief in one of the available sentences for expressing it.
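In explicit PCT terms, and again only as a sketch continuing the code above: the discrete perceptual signal p is the presence or absence of contradiction, the reference r is fixed at ‘no contradiction’, and the output function fires the blocking and uttering routines only when p differs from r. The sentences_for lookup is a placeholder for whatever maps a dependency subtree to the set of sentences that can express it.

    NO_CONTRADICTION, CONTRADICTION = 0, 1  # the two discrete values of p

    def perceive(new, belief, disjoint_classes):
        """Input function: compare dependency subtrees, emit a discrete p."""
        return CONTRADICTION if contradicts(new, belief, disjoint_classes) \
            else NO_CONTRADICTION

    def control(new, belief, kb, disjoint_classes, sentences_for,
                r=NO_CONTRADICTION):
        """Control p at the reference value r = 'no contradiction'."""
        p = perceive(new, belief, disjoint_classes)
        if p != r:
            # Error: reference signals go to the lower routines that block
            # the input and utter the belief in one available sentence.
            print(sentences_for(belief)[0])
            return False  # conflicting assertion blocked
        kb.edges |= new.edges  # no error: accept the assertion
        return True

    # The denial is blocked and the belief is uttered instead:
    accepted = control(denial, belief, kb, set(),
                       lambda b: ["A sparrow is a bird."])
    # prints "A sparrow is a bird." and returns False

Note that nothing here varies continuously: p and r each take one of two discrete values, and the ‘error’ is simply their mismatch.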