[From Bill Powers (2008.04.08.1634 MDT)]
Martin Taylor 2008.04.03.17.03 --
Before the observation, the receiver knows nothing more about the data source than a probability distribution of possible transmission events that could result in observations. After the observation, the receiver's problem is to determine, given the observation, what the transmission event was. The best it can do is to assign a new probability distribution. Both probability distributions determine measures of uncertainty _about_ the source.
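The before/after distinction Martin describes is Bayesian updating, and the "measures of uncertainty" are Shannon entropies of the two distributions. A minimal sketch, using hypothetical prior and likelihood numbers:

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Receiver's prior over four possible transmission events (hypothetical).
prior = [0.25, 0.25, 0.25, 0.25]

# Likelihood of the actual observation under each event (hypothetical).
likelihood = [0.70, 0.20, 0.05, 0.05]

# Bayes' rule gives the receiver's new distribution after the observation.
evidence = sum(p * l for p, l in zip(prior, likelihood))
posterior = [p * l / evidence for p, l in zip(prior, likelihood)]

print(entropy(prior))      # uncertainty about the source before observing
print(entropy(posterior))  # reduced uncertainty after observing
```

The drop from the prior entropy to the posterior entropy is exactly the information the observation carried about the source.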
This points up a sharp difference between the Shannon model (as applied to perception) and the PCT model. Shannon was considering a communication channel along which a wide variety of different messages might be sent. As you say, there is a pre-existing distribution of possible "transmission events" (messages) that might be sent, so the uncertainty is about which of them was in fact sent. Of course in Shannon's job, it could be known which was sent, so a judgment could be made about the correctness of the classification.
In a PCT organism, there is no way to determine what message was actually sent, except by examining another message. Furthermore, under the premises of PCT, a signal in one channel can mean only that one specific message was sent, with the signal strength indicating the degree of resemblance of the input pattern to the exact pattern that produces that one message. Other messages are received in other input channels, so now the identification problem comes down to which channel has the largest signal, and by what margin. This is the "alternate universes" version of perception: they all exist at the same time, some being more probable than others as we perceive them. Edge sharpening can emphasize the contrast, but that also increases the noise level, so there is a limit to what can be accomplished by mutual inhibition.
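The identification scheme described here -- one message per channel, with mutual inhibition sharpening the contrast between channels at the cost of added noise -- can be sketched as follows. The channel values, noise level, and inhibition strength are all hypothetical illustrations:

```python
import random

random.seed(1)

# Each perceptual channel carries one fixed "message"; the signal strength
# reports how well the input matches that channel's pattern (hypothetical).
signals = [0.9, 0.7, 0.3, 0.2]

def add_noise(s, sigma):
    """Additive Gaussian noise on each channel."""
    return [x + random.gauss(0, sigma) for x in s]

def lateral_inhibition(s, k):
    """Each channel is suppressed by the mean of the others.
    This scales up the differences between channels (contrast), but when
    the inputs are noisy it also mixes in the neighbours' noise, so the
    noise level rises along with the contrast."""
    n = len(s)
    return [x - k * (sum(s) - x) / (n - 1) for x in s]

noisy = add_noise(signals, 0.05)
sharpened = lateral_inhibition(noisy, 0.5)

# Identification: which channel has the largest signal, and by what margin?
best = max(range(len(sharpened)), key=lambda i: sharpened[i])
ordered = sorted(sharpened, reverse=True)
print(best, ordered[0] - ordered[1])
```

With this form of inhibition, every pairwise difference is multiplied by the same factor (1 + k/(n-1)), which is why sharpening alone cannot improve the signal-to-noise ratio of the winner-take-most decision.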
Here we have to remember the different viewpoints that arise in CSGnet discussions, among them the control system's internal view and the analyst's omniscient view.
I think this is a slight misrepresentation. The analyst is not omniscient; at best, he thinks he is. A good deal of the disagreement here concerns a confusion between computations the analyst carries out, and computations that he assumes are being carried out by the system being observed. He does not know that the observed system is doing any computations unless he can point to the means by which they are done, or might plausibly be done.
The problem here is much like a problem in linguistics I have commented on. Linguists try to deduce the rules by which valid utterances are generated. But even if they find such rules, there is no reason to think that valid utterances are generated by following those rules or any others. A raindrop obeys the following rule: if no solid object below, keep falling; otherwise, splat! All observations verify that this rule is always obeyed, except by virga. Yet that rule has nothing to do with the way raindrops behave.
Shannon takes the analyst's view. Taking the analyst's view, there is no problem in mapping Shannon theory onto PCT.
But there is. Following Shannon as you do attributes computations done by Shannon to the system being observed. Shannon's computations may be perfectly correct, and still have nothing to do with the way the observed system works.
All data paths in the control loop are subject to fluctuations, and to uncertainty at the receiving end about what happened at the transmitting end.
True. But how big are those fluctuations as a fraction of the mean value of the perception over a tenth to five tenths of a second? To say that there is uncertainty is not to say that the uncertainty has any importance. If removing the uncertainty altogether would improve control by only 3% of the range of control, how much effort should be wasted on doing that? I think your argument is qualitative, while the real issue is quantitative.
There is always uncertainty, and only if the uncertainty is low enough all around the loop can control be effective.
Precisely. And I claim that the uncertainty is naturally low enough to permit effective control under all but a few rare circumstances, and so needs to be considered only in a few special cases. Evolution has seen to it that neural signals have a dynamic range sufficient to make the noise level of perception negligible under most circumstances.
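The quantitative point at issue -- whether perceptual noise is small enough, after the loop has filtered it, to matter for control -- can be checked with a toy simulation. This is a one-level integrating controller with hypothetical gain, time-step, and noise values, not a model either author specifies:

```python
import math
import random

random.seed(0)

def control_rms_error(noise_sd, gain=5.0, dt=0.01, steps=5000, ref=1.0):
    """RMS error of the actual (noise-free) controlled variable when the
    perceptual signal carries additive Gaussian noise of sd noise_sd."""
    # Start at equilibrium so the RMS reflects noise alone, not the transient.
    qi = out = ref
    sq = 0.0
    for _ in range(steps):
        perception = qi + random.gauss(0, noise_sd)  # noisy perceptual signal
        out += gain * (ref - perception) * dt        # integrating output
        qi = out                                     # environmental feedback
        sq += (ref - qi) ** 2
    return math.sqrt(sq / steps)

# Noise expressed as a fraction of the controlled range (ref = 1.0).
for sd in (0.0, 0.01, 0.05):
    print(sd, control_rms_error(sd))
```

Because the integrating output low-passes the noise, the RMS error of the controlled variable comes out well below the noise on the perceptual signal itself -- which is the quantitative sense in which uncertainty can exist yet be unimportant for control.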
I expect a nefarious device to arrive in my mailbox in a few days. Fortunately, it's 100 yards from my apartment.
Best,
Bill P.
-------------------
Where there is a problem is in mapping between Shannon's uncertainty analysis and the control system's internal point of view -- where does the perception of uncertainty come from? We all agree that subjectively we perceive uncertainty, even if we can't agree exactly about what. I think we also all agree that perceptions are all we have from which to generate the derived perception of "uncertainty about a perception" or "uncertainty about the environmental correlate of a perception". I think I began to answer this question in my recent response to Bill P. [Martin Taylor 2008.04.08.11.00]
While I do think that signal compression questions are relevant to certain types of control-system problems (i.e. those that involve signal transmission between components, such as input-function to comparator-function), they have no meaning for the sort of problem which has been raised here.
I agree. Signal compression has nothing to do with this thread. We aren't talking about signal compression. We are talking about uncertainties of perception. That the sensory systems do a lot of signal compression may enter detailed analysis, but it's irrelevant here.
One of the most debilitating mistakes of knowledge-theory has been conceiving of the relationship between environment and agent as one of signal-production by the environment, and the role of the agent being reception and decoding of the signal.
That some people use a screwdriver as a chisel or a hammer is no reason to discard screwdrivers from the toolbox. I agree with your statement, but not with your inference from the statement, that "therefore" information-theoretic concepts and analysis are irrelevant to the problem at hand, which hinges on the information-theoretic concept "uncertainty". (Unless, of course, you mean something else when you use the words "knowledge-theory", for if you do, I don't know whether I agree or disagree.)
Avoidance of that confusing (yet seductive) metaphor is one of the big benefits PCT has to offer.
I suppose avoidance of screwdrivers is one of the benefits of joining furniture with tenons and dowels rather than screws.
Martin
--