[Martin Taylor 2008.04.08.11.00]
[From Bill Powers (2008.04.05.1050 MDT)]
Bill asked specifically for comments on this post. These comments may serve also as a partial answer to [Tracy B. Harms (2008 4 3 8 30)], which I hope to answer separately.
So this tells us we can be uncertain about a perception of a completely determinate variable. The uncertainty arises not from the presumed thing being perceived, but strictly from the observer's inability to find any way to predict it.
Yes, exactly! No uncertainty about that :-)
And now you agree with me. I am confused. Do you think there is such a thing as uncertainty that can be perceived, or do you think that uncertainty IS a perception?
This is a trick question, a variant on "Do you think there is a real reality to be perceived or do you think reality IS a perception." My answer is "Yes". Uncertainty is a perception, a function of variables outside the process that creates the perception, just as any perception is a function of variables outside the processor that creates the perception.
I remember a thread from a few years ago that contains echoes of the present problem. I think it was Bruce Nevin who proposed (with some support) that there is not a category level of perception and control, but that category perceptions occur at every level, as if in a column running parallel to the hierarchy and conducting category signals upward along with all the other perceptual signals. Thus each perceptual signal would be accompanied by a second signal indicating the category to which that perceptual signal belongs.
You have it one-quarter right. It was my proposal, not Bruce's, and the proposal contained no suggestion of a "column running parallel to the hierarchy and conducting category signals upward along with all the other perceptual signals." Nor did it have any suggestion that "each perceptual signal would be accompanied by a second signal indicating the category to which that perceptual signal belongs." So I guess you have it 1/8 right, not 1/4.
I still think my actual proposal, which is rather different from the one you paraphrase, is worth pursuing further, but I haven't mentioned it in the intervening years because it seemed to annoy you, and I did not think that a good idea.
Now you are proposing that for every perceptual signal at every level in the hierarchy, there is a second signal accompanying it which carries information about the uncertainty in the perceptual signal.
Not quite. Let me rewrite that with a couple of minor but significant wording changes, as follows:
"Now you are proposing that we should consider the possibility that for any perceptual signal at any level, there might be a second signal accompanying it that carries information about the uncertainty associated with the current value of that perceptual signal."
If you simply pause and ask how you would design such an arrangement, I think you would see what is wrong with it.
The implication being that you perceive that I have not paused and considered how such an arrangement might be designed. I'm interested in whether you also perceive any uncertainty about that perception, and if you do, what information might contribute to the uncertainty perception. Might your sources include your historical experiences (memory) of my propensity to offer proposals without having given them any thought? Might they include perceptions of statements I have made suggesting that I have issues with how it might be designed (I have made some)?
If, however, you don't have a perception of any uncertainty about the perception that I have not given consideration to the design problem, then your subjective experiences don't correspond to mine very closely. Neither would it be totally consistent with your earlier comment that a belief was a perception that included doubt. That was, after all, the trigger for this whole thread on uncertainty. So I currently believe that you do perceive some level of uncertainty about the perception that I have not thought about design.
OK. To the problem.
Suppose you want to generate a signal indicating the uncertainty in an intensity signal from the retina. You would need an input function that can receive a copy of the intensity signal, and compare that signal with a signal representing the actual intensity of light falling on the retina.
Why do you make this assertion? There are other possibilities, which the actual retina, as well as simulations, appear to use. I will deal only with one version of one type of possibility, and ignore a completely different range that might be more significant at higher levels -- control.
One possibility is that the "real world" contains, as it appears to do, spatial and temporal regions over which the intensities detected by our sensor systems vary only slowly, together with other, tightly constrained, spatial and temporal regions over which the intensity varies rather abruptly. The average value of the signals from neighbouring regions, excluding those from regions beyond a rapid change, could easily serve as a surrogate for your requirement of a signal that records the actual intensity.
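To make that concrete, here is a small numerical sketch (purely illustrative; the window size, edge threshold, and noise level are arbitrary choices of mine, not claims about the retina). It estimates each sample's "actual" intensity by averaging its neighbours, dropping any neighbour that lies beyond an abrupt change, and treats the deviation from that local average as a crude per-sample uncertainty cue:

```python
import numpy as np

rng = np.random.default_rng(0)

# A one-dimensional "world": slowly varying intensity with one abrupt edge.
x = np.linspace(0, 1, 200)
true_intensity = 0.4 + 0.1 * np.sin(4 * x)
true_intensity[100:] += 0.5                      # abrupt change at x = 0.5

sensor_noise = 0.05
signal = true_intensity + rng.normal(0, sensor_noise, size=x.size)

def local_average(sig, i, half_width=5, edge_threshold=0.2):
    """Average the neighbours of sample i, excluding any beyond a large jump."""
    lo, hi = max(0, i - half_width), min(sig.size, i + half_width + 1)
    neighbours = sig[lo:hi]
    # Keep only neighbours close in value to the centre sample, i.e. drop
    # those that lie on the far side of an abrupt change.
    keep = np.abs(neighbours - sig[i]) < edge_threshold
    return neighbours[keep].mean()

surrogate = np.array([local_average(signal, i) for i in range(signal.size)])
deviation = np.abs(signal - surrogate)           # crude per-sample uncertainty cue

print("mean |signal - true intensity| :", round(float(np.mean(np.abs(signal - true_intensity))), 3))
print("mean |signal - local surrogate|:", round(float(np.mean(deviation)), 3))
```

The point is only that the deviation from the local average tracks the sensor's variability about as well as the (unknowable) deviation from the true intensity does, without any second set of perfect sensors.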
By comparing the two signals this function could determine the relationship between the intensity signal and the actual intensity.
Yes, and the same goes for comparing with the local average.
Many years ago, I read a series of articles in which some kind of retina-like structure with adaptive lateral connections was exposed to the kinds of spatio-temporal patterns found in the (presumed) natural world. It developed structures like on-centre-off-surround and the reverse, oriented edge detectors, and so forth. I don't remember the author, but I think the first name was something like Christof. It was a series of articles in consecutive or nearly consecutive issues of a European journal, or perhaps Perception and Psychophysics -- I remember it as being in a format with A4 or US Letter sized pages.
At least the centre-surround structures in the retina perform the function you think necessary. Most of the time (i.e. when the centre is in a region that is spatially fairly uniform and hasn't changed much over recent -- sub-second -- time), fluctuations in their outputs are a direct representation of the variability of the sensor. Those variations could, in principle, be used to assess the likelihood that a particular fluctuation represents a distinctive location in the "real" world.
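A toy version of that idea (again only a sketch, not a retinal model): over a spatially uniform patch the centre-minus-surround output hovers near zero, so its fluctuation over time is essentially an estimate of the sensors' own variability, with no contribution from the world.

```python
import numpy as np

rng = np.random.default_rng(1)
sensor_noise = 0.05

# A spatially uniform patch of "real" intensity, sampled over 50 brief time steps.
samples = 0.6 + rng.normal(0, sensor_noise, size=(50, 9, 9))

def centre_surround(frame):
    """Centre pixel minus the mean of its eight immediate neighbours."""
    centre = frame[4, 4]
    surround = (frame[3:6, 3:6].sum() - centre) / 8.0
    return centre - surround

outputs = np.array([centre_surround(f) for f in samples])

# With a uniform patch the world contributes nothing to these fluctuations,
# so the spread of the outputs estimates the sensor noise alone
# (centre noise plus the averaged noise of the eight surround sensors).
print("std of centre-surround output:", round(float(outputs.std()), 3))
print("predicted from sensor noise  :", round(sensor_noise * (1 + 1 / 8) ** 0.5, 3))
```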
If the intensity signal varied exactly as the actual intensity varied, the uncertainty signal would be zero. As the signal's variations departed more and more from the actual intensity variations, the uncertainty signal would become larger.
In the centre-surround case, as the signal's variations departed more from the surrounding average, the likelihood that they represent something in the real world would increase. But so would the likelihood that the centre sensor was misbehaving. That possibility is, in principle, testable by temporal measurement and control -- rather like applying "The Test".
The human eye isn't stationary except under experimentally controlled conditions (visual objects vanish when eye movements are prevented). If after an eye movement the same retinal sensor exhibits unexpectedly high variability, it's probably malfunctioning, but if the high variability is associated with a changed retinal location consistent with the control output that moved the eye, then it's probably variation in the environment.
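Here is an equally toy version of that "Test"-like check (the one-dimensional "world", the sliding sensor window, and all the thresholds are mine, for illustration only): an anomaly that belongs to the world shifts its retinal position by the commanded amount after an eye movement, while a misbehaving sensor keeps producing anomalies at the same retinal index.

```python
import numpy as np

rng = np.random.default_rng(2)

world = np.zeros(100)
world[40] = 1.0                    # a genuine feature in the environment
bad_sensor = 10                    # retinal index of a misbehaving sensor

def retinal_image(eye_position, n_sensors=30):
    """Sample the world through a window of sensors starting at eye_position."""
    image = world[eye_position:eye_position + n_sensors].copy()
    image += rng.normal(0, 0.02, n_sensors)        # ordinary sensor noise
    image[bad_sensor] += rng.normal(0.8, 0.2)      # one sensor stuck misbehaving
    return image

def anomalies(image, threshold=0.3):
    """Retinal indices whose signal is unexpectedly far from the background."""
    return set(np.flatnonzero(np.abs(image) > threshold))

before = retinal_image(eye_position=20)
after = retinal_image(eye_position=25)             # commanded shift of 5 sensors
shift = 5

world_like = {i for i in anomalies(before) if (i - shift) in anomalies(after)}
sensor_like = anomalies(before) & anomalies(after)

print("anomalies that moved with the commanded shift (world):", world_like)
print("anomalies stuck at the same retinal index (sensor)   :", sensor_like)
```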
I won't even ask how the necessary statistical calculations are carried out, or where. I'll stipulate that there are some cells in the retina that can do an equivalent analog computation, though I'll leave it to someone else to point them out.
Done, I hope.
The uncertainty signals from these cells would then run up the optic nerve, paired with the corresponding intensity signals.
Possible. I don't know whether they do or not. Quite possibly the only signal that goes up the optic nerve is one that could be labelled "likelihood of interesting event here", rather than "intensity here". We are, after all, notoriously bad at estimating light intensity other than by assessing the precision of visible edges and the like (as in your example some time ago of the increasing difficulty of reading as the evening light fades).
The real problem is locating the mechanism by which this uncertainty-perceiver measures the actual intensity of the light falling on the retina. We can assume that the intensity of light is detected only by rods and cones (unless you want to propose some other kinds), so the uncertainty-detector must be using some spare rods and cones. These, however, would have to be different from the ones that generate intensity perception signals, because they would have to generate signals that are accurate measures of the light intensity at all brightness levels, down to counts of individual photons. Otherwise we would just have another set of intensity signals having unknown uncertainty.
Do you see the fallacy in the above? It comes from the timeless philosopher's argument-generation machine: "Given my assumptions, it can't be true, so therefore it can't be true." The logic is impeccable, but the assumptions are open to question.
Since my argument can be applied to the generation of uncertainty signals at any level in the hierarchy, I can't see how the uncertainty in ANY neural signal can actually be measured directly.
Do you see how the same kind of structure could, in principle, apply at all perceptual levels? As I have said several times in talking about this proposal, a propagated "uncertainty" signal would be only one among several inputs to a "perceptual uncertainty function" associated with a given perception.
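To show the kind of thing I mean, here is a hypothetical sketch of such a "perceptual uncertainty function". Every name, input, and weighting in it is invented for illustration; the only point is that the uncertainty propagated from below is just one input among several.

```python
import math

def perceptual_uncertainty(propagated, recent_values, prior_from_memory,
                           w_prop=0.4, w_var=0.4, w_prior=0.2):
    """Combine several cues into one uncertainty estimate (weights arbitrary)."""
    mean = sum(recent_values) / len(recent_values)
    variance = sum((v - mean) ** 2 for v in recent_values) / len(recent_values)
    recent_spread = math.sqrt(variance)
    # Propagated uncertainty from lower perceptual functions is only one input,
    # alongside the perception's own recent variability and a memory-based prior.
    return w_prop * propagated + w_var * recent_spread + w_prior * prior_from_memory

# Example: moderate uncertainty arriving from below, a steady recent history
# for this perception, and a memory that says it is usually reliable.
print(perceptual_uncertainty(propagated=0.3,
                             recent_values=[0.51, 0.49, 0.50, 0.52],
                             prior_from_memory=0.1))
```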
The only way I can think of doing this is to use computing processes at the logic level, where we do mathematics and other rule-driven things, and measure uncertainty by comparing perceptions of different kinds from lower levels.
Well, your mechanism here comes close to what the retina does, but, as with the retina, there's no need to involve logic-level perceptual functions. Neurons can perform an awful lot of internal computation. For example, I believe they compute very accurate logarithms over one or two orders of magnitude under some conditions, so multiplication (and indeed, I believe, correlation too) presents little problem at any perceptual level. Comparing perceptions that under normal conditions have similar values is a standard way of detecting anomalies, and of discovering the variability of each individual perceptual mechanism.
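One concrete way such a comparison can work, sketched with simulated numbers rather than anything physiological: given three channels that normally report the same underlying value, the variances of their pairwise differences let each channel's own variability be recovered without ever knowing the underlying value (the old three-cornered-hat trick from clock comparison).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
underlying = rng.normal(0, 1.0, n)          # the shared "real" value; never used below

true_sigmas = [0.05, 0.10, 0.20]
channels = [underlying + rng.normal(0, s, n) for s in true_sigmas]

# Variances of pairwise differences; the shared underlying value cancels out,
# leaving the sum of the two channels' own variances in each case.
d01 = np.var(channels[0] - channels[1])
d02 = np.var(channels[0] - channels[2])
d12 = np.var(channels[1] - channels[2])

var0 = (d01 + d02 - d12) / 2
var1 = (d01 + d12 - d02) / 2
var2 = (d02 + d12 - d01) / 2

print("estimated sigmas:", [round(float(np.sqrt(v)), 3) for v in (var0, var1, var2)])
print("true sigmas     :", true_sigmas)
```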
If we then view all of experience from this level, we will be able to perceive the amount of uncertainty in any set of lower-level signals -- but the perceptual signals indicating uncertainty would then exist only at the logic level. And none of the measurements could depend on knowing the actual state of whatever the perceptual signal represents.
Remove the reference to logic level, and I think that we are close to an understanding.
Martin