Uncertainty...

[Martin Taylor 2008.04.08.11.00]

[From Bill Powers (2008.04.05.1050 MDT)]

Bill asked specifically for comments on this post. These comments may serve also as a partial answer to [Tracy B. Harms (2008 4 3 8 30)], which I hope to answer separately.

So this tells us we can be uncertain about a perception of a completely determinate variable. The uncertainty arises not from the presumed thing being perceived, but strictly from the observer's inability to find any way to predict it.

Yes, exactly! No uncertainty about that:-)

And now you agree with me. I am confused. Do you think there is such a thing as uncertainty that can be perceived, or do you think that uncertainty IS a perception?

This is a trick question, a variant on "Do you think there is a real reality to be perceived or do you think reality IS a perception." My answer is "Yes". Uncertainty is a perception, a function of variables outside the process that creates the perception, just as any perception is a function of variables outside the processor that creates the perception.

I remember a thread from a few years ago that contains echoes of the present problem. I think it was Bruce Nevin who proposed (with some support) that there is not a category level of perception and control, but that category perceptions occur at every level, as if in a column running parallel to the hierarchy and conducting category signals upward along with all the other perceptual signals. Thus

You have it one-quarter right. It was my proposal, not Bruce's, and the proposal contained no suggestion of a "column running parallel to the hierarchy and conducting category signals upward along with all the other perceptual signals." Nor did it have any suggestion that "each perceptual signal would be accompanied by a second signal indicating the category to which that perceptual signal belongs." So I guess you have it 1/8 right, not 1/4.

Actually I still think my actual proposal, rather different from the one you paraphrase, is worth pursuing further, but I haven't mentioned it in the intervening years because it seemed to annoy you, and I did not think that to be a good idea.

Now you are proposing that for every perceptual signal at every level in the hierarchy, there is a second signal accompanying it which carries information about the uncertainty in the perceptual signal.

Not quite. Let me rewrite that with a couple of minor but significant wording changes, as follows:

"Now you are proposing that we should consider the possibility that for any perceptual signal at any level, there might be a second signal accompanying it that carries information about the uncertainty associated with the current value of that perceptual signal."

If you simply pause and ask how you would design such an arrangement, I think you would see what is wrong with it.

The implication being that you perceive that I have not paused and considered how such an arrangement might be designed. I'm interested in whether you also perceive any uncertainty about that perception, and if you do, what information might contribute to the uncertainty perception. Might your sources include your historical experiences (memory) of my propensity to offer proposals without having given them any thought? Might they include perceptions of statements I have made that suggest I have issues with how it might be designed (I have made some)?

If, however, you don't have a perception of any uncertainty about the perception that I have not given consideration to the design problem, then your subjective experiences don't correspond to mine very closely. Neither would it be totally consistent with your earlier comment that a belief was a perception that included doubt. That was, after all, the trigger for this whole thread on uncertainty. So I currently believe that you do perceive some level of uncertainty about the perception that I have not thought about the design.

OK. To the problem.

Suppose you want to generate a signal indicating the uncertainty in an intensity signal from the retina. You would need an input function that can receive a copy of the intensity signal, and compare that signal with a signal representing the actual intensity of light falling on the retina.

Why do you make this assertion? There are other possibilities, which the actual retina, as well as simulations, appear to use. I will deal only with one version of one type of possibility, and ignore a completely different range that might be more significant at higher levels -- control.

It is possible that the "real world" contains, as it appears to do, spatial and temporal regions over which the intensities detected by our sensor systems vary only slowly, together with other, tightly constrained, spatial and temporal regions over which the intensity varies rather abruptly. The average value of the signals from neighbouring regions, excluding those from regions beyond a rapid change, could easily serve as a surrogate for your requirement of a signal that recorded the actual intensity.
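
As a toy illustration of the surrogate idea, here is a minimal sketch in Python. The function name, the radius, and the edge threshold are illustrative choices of mine, not anything the retina is claimed to compute literally; the point is only that the deviation of each reading from its edge-limited local average can play the role of "signal minus actual intensity" without any access to the actual intensity.

    import numpy as np

    def edge_limited_local_average(readings, radius=3, edge_threshold=0.5):
        """For each sensor, average its neighbours, but stop at any neighbour
        whose value jumps by more than edge_threshold from the one before it,
        so the average is taken only over the locally uniform region."""
        readings = np.asarray(readings, dtype=float)
        surrogate = np.empty_like(readings)
        for i in range(len(readings)):
            values = [readings[i]]
            # walk outward to the left until an abrupt change is met
            for j in range(i - 1, max(i - radius, 0) - 1, -1):
                if abs(readings[j] - readings[j + 1]) > edge_threshold:
                    break
                values.append(readings[j])
            # walk outward to the right until an abrupt change is met
            for j in range(i + 1, min(i + radius, len(readings) - 1) + 1):
                if abs(readings[j] - readings[j - 1]) > edge_threshold:
                    break
                values.append(readings[j])
            surrogate[i] = np.mean(values)
        return surrogate

    # Each sensor's deviation from its edge-limited local average stands in
    # for "signal minus actual intensity", with no measurement of the actual
    # intensity anywhere in the computation.
    readings = np.array([1.0, 1.1, 0.9, 1.0, 3.0, 3.1, 2.9, 3.0]) \
               + np.random.normal(0, 0.05, 8)
    deviation = readings - edge_limited_local_average(readings)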

By comparing the two signals this function could determine the relationship between the intensity signal and the actual intensity.

Yes, and the same goes for comparing with the local average.

Many years ago, I read a series of articles in which some kind of retina-like structure with adaptive lateral connections was exposed to the kinds of spatio-temporal patterns in the (presumed) natural world. It developed structures like on-centre-off-surround and the reverse, and oriented edge detectors, and so forth. I don't remember the author, but I think the first name was something like Christof. It was a series of articles in consecutive or nearly consecutive issues of a European journal, or perhaps Perception and Psychophysics -- I remember it as being in a format like A4 or US-Letter-sized pages.

At least the centre-surround structures in the retina perform the function you think necessary. Most of the time (i.e. when the centre is in a region that is spatially fairly uniform and hasn't changed much over recent -- sub-second -- time), fluctuations in their outputs are a direct representation of the variability of the sensor. Those variations could, in principle, be used to assess the likelihood that a particular fluctuation represents a distinctive location in the "real" world.
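
To make that concrete, here is a minimal 1-D centre-surround sketch, again with toy numbers of my own choosing. In a spatially uniform patch the centre-minus-surround output hovers around zero, so its spread there is an estimate of the sensor's own variability; a later fluctuation can then be scored against that estimate to judge how likely it is to mark something distinctive in the "real" world.

    import numpy as np

    def centre_surround(readings, radius=2):
        """Centre value minus the mean of its immediate surround (1-D toy)."""
        readings = np.asarray(readings, dtype=float)
        out = np.zeros_like(readings)
        for i in range(radius, len(readings) - radius):
            surround = np.concatenate([readings[i - radius:i],
                                       readings[i + 1:i + radius + 1]])
            out[i] = readings[i] - surround.mean()
        return out

    # In a uniform patch the centre-surround output reflects only sensor noise,
    # so its spread there (borders excluded) estimates that noise directly.
    uniform_patch = 2.0 + np.random.normal(0, 0.05, 200)
    noise_scale = centre_surround(uniform_patch)[2:-2].std()

    # A scene with a genuine edge: large |z| near index 100 flags a fluctuation
    # unlikely to be ordinary sensor noise; small |z| elsewhere flags the reverse.
    scene = np.concatenate([np.full(100, 2.0), np.full(100, 3.0)]) \
            + np.random.normal(0, 0.05, 200)
    z = centre_surround(scene) / noise_scale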

If the intensity signal varied exactly as the actual intensity varied, the uncertainty signal would be zero. As the signal's variations departed more and more from the actual intensity variations, the uncertainty signal would become larger.

In the centre-surround case, as the signal's variations departed more from the surrounding average, the likelihood that they represent something in the real world would increase. But so would the likelihood that the centre sensor was misbehaving. That possibility is, in principle, testable by temporal measurement and control -- rather like applying "The Test".

The human eye isn't stationary except under experimentally controlled conditions (visual objects vanish when eye movements are prevented). If after an eye movement the same retinal sensor exhibits unexpectedly high variability, it's probably malfunctioning, but if the high variability is associated with a changed retinal location consistent with the control output that moved the eye, then it's probably variation in the environment.
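
A toy version of that attribution logic, just to show the bookkeeping involved (the variance arrays, the threshold, and the function name are all hypothetical):

    import numpy as np

    def attribute_variability(var_before, var_after, shift, threshold=2.0):
        """Toy attribution after a known eye movement that displaces the image
        by `shift` sensor positions.  A hot spot that stays at the same sensor
        index is blamed on the sensor; one that moves by `shift` (i.e. stays
        at the same world location) is blamed on the environment."""
        suspect_sensor, suspect_world = [], []
        hot_before = set(np.flatnonzero(var_before > threshold))
        for i in np.flatnonzero(var_after > threshold):
            if i in hot_before:
                suspect_sensor.append(int(i))     # hot spot followed the sensor
            elif (i - shift) in hot_before:
                suspect_world.append(int(i))      # hot spot followed the world
        return suspect_sensor, suspect_world

    var_before = np.zeros(10); var_before[[3, 7]] = 5.0   # two hot spots
    var_after = np.zeros(10); var_after[[3, 9]] = 5.0     # image displaced by +2
    print(attribute_variability(var_before, var_after, shift=2))
    # ([3], [9]): sensor 3 looks faulty; index 9 tracks the feature that was at 7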

I won't even ask how the necessary statistical calculations are carried out, or where. I'll stipulate that there are some cells in the retina that can do an equivalent analog computation, though I'll leave it to someone else to point them out.

Done, I hope.

The uncertainty signals from these cells would then run up the optic nerve, paired with the corresponding intensity signals.

Possible. I don't know if they do or if they don't. Quite possibly the only signal that goes up the optic nerve is one that could be labelled "likelihood of interesting event here", rather than "intensity here". We are, after all, notoriously bad at estimating light intensity other than by assessing the precision of visible edges and the like (as in your example some time ago of the increasing difficulty of reading as the evening light fades).

The real problem is locating the mechanism by which this uncertainty-perceiver measures the actual intensity of the light falling on the retina. We can assume that the intensity of light is detected only by rods and cones (unless you want to propose some other kinds), so the uncertainty-detector must be using some spare rods and cones. These, however, would have to be different from the ones that generate intensity perception signals, because they would have to generate signals that are accurate measures of the light intensity at all brightness levels, down to counts of individual photons. Otherwise we would just have another set of intensity signals having unknown uncertainty.

Do you see the fallacy in the above? It comes from the timeless philosopher's argument-generation machine: "Given my assumptions, it can't be true, so therefore it can't be true." The logic is impeccable, but the assumptions are open to question.

Since my argument can be applied to the generation of uncertainty signals at any level in the hierarchy, I can't see how the uncertainty in ANY neural signal can actually be measured directly.

Do you see how the same kind of structure could, in principle, apply at all perceptual levels? As I have said several times in talking about this proposal, a propagated "uncertainty" signal would be only one among several inputs to a "perceptual uncertainty function" associated with a given perception.

The only way I can think of doing this is to use computing processes at the logic level, where we do mathematics and other rule-driven things, and measure uncertainty by comparing perceptions of different kinds from lower levels.

Well, your mechanism here comes close to what the retina does, but, as with the retina, there's no need to involve perceptual logic-level functions. Neurons can perform an awful lot of internal computation. For example, I believe they compute very accurate logarithms over one or two orders of magnitude under some conditions, so multiplication (and indeed, I believe correlation, too) present little problem at any perceptual level. Comparing perceptions that under normal conditions have similar values is a standard way of detecting anomalies, and of discovering the variability of each individual perceptual mechanism.
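
To illustrate that the arithmetic itself is elementary, here is a sketch with three redundant noisy channels. The log-domain multiplication and the pairwise-variance trick (the "three-cornered hat") are my illustrative stand-ins, not a claim about what neurons literally do; they just show that multiplication and per-channel variability estimates need nothing beyond sums, logs, and comparisons.

    import numpy as np

    rng = np.random.default_rng(0)
    true_signal = 1.0 + 0.5 * np.sin(np.linspace(0, 20, 10000))

    # Three "redundant" channels perceiving the same quantity, each with its
    # own level of noise.
    sigmas = [0.05, 0.10, 0.20]
    channels = [true_signal + rng.normal(0, s, true_signal.size) for s in sigmas]

    # Log-domain multiplication: for positive signals a*b = exp(log a + log b),
    # so a unit that can take logs and add can multiply.
    a, b = channels[0], channels[1]
    assert np.allclose(np.exp(np.log(a) + np.log(b)), a * b)

    # Per-channel variability from pairwise comparisons alone: with independent
    # noise, var(x_i - x_j) = var_i + var_j, so three pairwise variances pin
    # down the three individual variances; the true signal cancels out entirely.
    d01 = np.var(channels[0] - channels[1])
    d02 = np.var(channels[0] - channels[2])
    d12 = np.var(channels[1] - channels[2])
    var0 = (d01 + d02 - d12) / 2
    var1 = (d01 + d12 - d02) / 2
    var2 = (d02 + d12 - d01) / 2
    print(np.sqrt([var0, var1, var2]))  # approximately [0.05, 0.10, 0.20]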

If we then view all of experience from this level, we will be able to perceive the amount of uncertainty in any set of lower-level signals -- but the perceptual signals indicating uncertainty would then exist only at the logic level. And none of the measurements could depend on knowing the actual state of whatever the perceptual signal represents.

Remove the reference to logic level, and I think that we are close to an understanding.

Martin

[From Bill Powers (2008.04.08.541 MDT)]

[Martin Taylor 2008.04.08.11.00]

[From Bill Powers (2008.04.05.1050 MDT)]

Bill asked specifically for comments on this post. These comments may serve also as a partial answer to [Tracy B. Harms (2008 4 3 8 30)], which I hope to answer separately.

So this tells us we can be uncertain about a perception of a completely determinate variable. The uncertainty arises not from the presumed thing being perceived, but strictly from the observer's inability to find any way to predict it.

Yes, exactly! No uncertainty about that:-)

And now you agree with me. I am confused. Do you think there is such a thing as uncertainty that can be perceived, or do you think that uncertainty IS a perception?

This is a trick question, a variant on "Do you think there is a real reality to be perceived or do you think reality IS a perception." My answer is "Yes". Uncertainty is a perception, a function of variables outside the process that creates the perception, just as any perception is a function of variables outside the processor that creates the perception.

That's fine, but is uncertainty in the class of perceptions that we are more or less forced to construct, or is it optional?

My sense is that it's semi-optional, and more important to some people than to others. I don't mean that there's nothing to be uncertain about, but just that for some people there are a lot more uncertainties than there are for others. It's as much a matter of attitude as it is of necessity.

I remember a thread from a few years ago that contains echoes of the present problem. I think it was Bruce Nevin who proposed (with some support) that there is not a category level of perception and control, but that category perceptions occur at every level, as if in a column running parallel to the hierarchy and conducting category signals upward along with all the other perceptual signals. Thus

You have it one-quarter right. It was my proposal, not Bruce's, and the proposal contained no suggestion of a "column running parallel to the hierarchy and conducting category signals upward along with all the other perceptual signals."

Well, Bruce's proposal did. His proposal related to the role of the category level in linguistics. Bruce, am I misremembering?

Nor did it have any suggestion that "each perceptual signal would be accompanied by a second signal indicating the category to which that perceptual signal belongs." So I guess you have it 1/8 right, not 1/4.

Again, that was what Bruce proposed, I thought. Ageing memory, I suppose.

Now you are proposing that for every perceptual signal at every level in the hierarchy, there is a second signal accompanying it which carries information about the uncertainty in the perceptual signal.

Not quite. Let me rewrite that with a couple of minor but significant wording changes, as follows:

"Now you are proposing that we should consider the possibility that for any perceptual signal at any level, there might be a second signal accompanying it that carries information about the uncertainty associated with the current value of that perceptual signal."

That is how I understood what you said. I said you were proposing those facts; now you say you are proposing the possibility of those facts. If that's different, it must mean that I should have focused on the idea of possibility rather than on the facts you were proposing. I must have misunderstood you when I thought you proposed that perceptual signals might be complex, having two dimensions instead of just one, and that the uncertainty component was "about" the other one.

But I will accept most of your rewording. You are proposing the following possibility: that for every perceptual signal at every level in the hierarchy, there is a second signal accompanying it which carries information about the uncertainty in the [current value of the] perceptual signal.

However, I don't accept the part I put in brackets. The current value is just what it is. There is no uncertainty perceivable in a single sample. Perception of uncertainty (high or low) arises only from a series of observations. Am I wrong about that?
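
For concreteness, here is a minimal sketch of that point, with an illustrative class name and noise figure: a spread estimate accumulates over a series of samples, and a single sample yields none (the estimate below is simply zero until there are at least two observations).

    import random

    class RunningUncertainty:
        """Track the mean and spread of a stream of samples incrementally
        (Welford's online update); the running standard deviation is one
        candidate for a signal 'about' the variability of a perception."""

        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0  # running sum of squared deviations from the mean

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            return self.std()

        def std(self):
            return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

    tracker = RunningUncertainty()
    for _ in range(1000):
        tracker.update(5.0 + random.gauss(0, 0.3))
    print(tracker.std())  # close to 0.3 -- but only after many observations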

If you simply pause and ask how you would design such an arrangement, I think you would see what is wrong with it.

The implication being that you perceive that I have not paused and considered how such an arrangement might be designed.

Oh, don't get insulted. I don't know what you have considered; I know only what I remember of what you have posted. You haven't proposed any designs yet.

Suppose you want to generate a signal indicating the uncertainty in an intensity signal from the retina. You would need an input function that can receive a copy of the intensity signal, and compare that signal with a signal representing the actual intensity of light falling on the retina.

Why do you make this assertion? There are other possibilities, which the actual retina, as well as simulations, appear to use. I will deal only with one version of one type of possibility, and ignore a completely different range that might be more significant at higher levels -- control.

I don't think that the arrangement you propose is capable of detecting uncertainty, or of distinguishing real variation from "noisy" variation. You have said that uncertainty is in a distribution, and I agree. There is nothing about a signal derived the way you describe that would indicate any particular distribution. And of course if you are merely comparing one perception to another one, there is no way to determine which one is the noisy one and which one is the accurate representation.

The average value of the signals from neighbouring regions, excluding those from regions beyond a rapid change, could easily serve as a surrogate for your requirement of a signal that recorded the actual intensity.

But that only establishes that one signal varies while the surrounding average varies less. How does that indicate anything uncertain about the signal that varies?

The Modern Control Theory people have noticed this problem in considering what they call "system identification." How do you know whether a variation over a series of measurements of a system variable is due to an external disturbance, or to noise in the measurement? If the former, you have to try to anticipate the disturbances; if the latter, all you can do is reduce your ambitions for precise control.
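
A toy illustration of that distinction (not the actual system-identification machinery, and with signal models of my own invention): measurement noise is typically uncorrelated from one sample to the next, while a real disturbance persists, so even a simple lag-one autocorrelation of the measured variations hints at which case you are facing.

    import numpy as np

    def lag1_autocorrelation(x):
        """Correlation between successive samples of a mean-removed series."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        return np.dot(x[:-1], x[1:]) / np.dot(x, x)

    rng = np.random.default_rng(1)
    n = 2000

    # Case 1: pure measurement noise -- successive samples are unrelated.
    noise_only = rng.normal(0, 1, n)

    # Case 2: a slowly drifting external disturbance plus a little sensor
    # noise -- successive samples are strongly related because the
    # disturbance persists from one sample to the next.
    disturbance = np.cumsum(rng.normal(0, 0.1, n))
    disturbed = disturbance + rng.normal(0, 0.1, n)

    print(lag1_autocorrelation(noise_only))  # near 0: reduce control ambitions
    print(lag1_autocorrelation(disturbed))   # near 1: try to anticipate/oppose it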

By comparing the two signals this function could determine the relationship between the intensity signal and the actual intensity.

Yes, and the same goes for comparing with the local average.

Except that you don't know if the variations are real -- if they are actual variations in environmental variables that need to be opposed, or if they are being generated by the input function, so opposing them would increase the variance of the input quantity instead of decreasing it. That's why I said you have to know the ACTUAL value of the external variable, not a perceived value of it, to calculate the uncertainty (if any) in a varying perceptual signal.

I think we've just about worn out this subject for now. You clearly think that uncertainties are very important in perception and control; I don't. I think that all the examples of uncertainty involve very unusual, boundary-type conditions, whereas practically all perceptions that we control are well up into the magnitude ranges where the uncertainty is negligible. I don't dispute that there are conditions under which perceptions become much less reliable, and I don't begrudge anyone an interest in studying those conditions, but I just don't think that they apply to most ordinary behavior. Our tracking experiments don't take uncertainty into account at all, nor do they need to: the RMS prediction accuracies are well under 3% of the range of variation now, and whatever inaccuracies are left look pretty random. Some day we might want to come back to these data and analyze that last 2%, but I don't think that time is now. We're still at the pickaxe stage, not the hands-and-knees-camel's-hair-brush stage of our archeology.

Best,

Bill P.