Uncertainty (was second-order and third-order)

[From Bill Powers (2008.04.03.0710 MDT)]

Martin Taylor 2008/04/03/00/34 --

Yes, that makes it clear. Shannon apparently thought we could experience both the observation and the thing that is observed, separately.

That's exactly opposite to Shannon's position.

Sorry, my statement was too brief and didn't come out right. Shannon was talking about situations in which it is possible to know both what message was sent and what message was received. That is appropriate for analyzing communication systems, Shannon's primary concern at Bell Labs. In that case we DO know both ends of the relationship. If we try to apply communication theory (a term I prefer to "information theory") to human perception of the nonverbal world, however, the source of the message is inaccessible to everyone; all we have is what we have received. We can't tell whether the "message" was precisely what we received, or some sharper version of it, or if there was actually a message at all.
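The distinction can be put in miniature (a hypothetical sketch; the strings and the single-character "channel error" are my own invention, not anything from Shannon): when both ends are on record, error is directly measurable; in perception of the nonverbal world, only the received side exists, so the comparison cannot even be set up.

```python
# Shannon's setting: we hold BOTH the sent and the received message,
# so transmission error can be measured by direct comparison.
sent     = "the quick brown fox"
received = "the quick brewn fox"   # channel corrupted one character

errors = sum(s != r for s, r in zip(sent, received))
print(f"errors: {errors} of {len(sent)} symbols")   # errors: 1 of 19 symbols

# In ordinary perception there is no independent record of `sent`;
# all we possess is `received`, and this comparison is impossible.
```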

The fuzzy handwriting example you sent didn't animate on my computer until I accessed it in the "embedded" folder. Then -- yesterday -- I saw what you were talking about. In fact a good part of the sharp message was filled in with essentially identical squiggles, with imagination supplying the missing letters (no, that's wrong -- the individual letters never appeared to me). The letters with extenders above and below the line were also just identical loops. Only the first letter of each word was actually a letter configuration.

Nevertheless, I could easily read most of the words when the animation approached sharpness (actually a little better just before complete sharpness) and was pretty sure I read what the writer meant to convey (because no alternative interpretations came into view). There were, however, two or three "words" that my brain could not fill in. It was instructive to examine the difference. Apparently, the main subjective difference was that I didn't imagine hearing a faint voice saying a word when I looked at those exceptions, but did experience that imagined voice, or something voice-like, when scanning over the others. I did no conscious interpretation or decoding. Either I immediately experienced a whole familiar word, or I did not.

So in fact, strictly speaking, there was no message transmitted: no computer that looks up strings of letter codes in a dictionary could have read it, no matter how sharp it got.

Shannon was interested in "redundancy" of language, if I remember right. I think that's what we have here. It looks to me as if this is a beautiful example in support of the Pandemonium model, and drops some very strong hints about how the perception of written words works. I think it also supports the ideas you presented in Durango about auditory recognition of words, with formants swooping around in phoneme space but not coming particularly close to the static positions where the pure phonemes would be located. It's as if the brain learns to solve a set of simultaneous equations -- perhaps through E. coli reorganization -- that simply weight features of the incoming stream of signals. I think there is also evidence for your idea of cross-connections between perceptual input functions at the same level, simply because we do not experience strictly what the Pandemonium model would predict: a cacophony of voices yelling for attention. The stronger voices suppress the weaker, so we end up with only one or a few that really demand higher-order attention, even if there are many responding to the input stream at the same time.

With that picture in mind it seems to me that I can understand the fuzzy squiggles example. There is no decoding of the message, but rather a lot of responses to possibilities suggested by the spans of consecutive squiggles and the placement of up and down loops, with the initial letters eliminating great gobs of possibilities at one stroke. It's an almost purely analog process that converges, over a long period of learning, to a set of perceptual input functions that give unique responses to the overall pattern, and that suppress the weaker responses in favor of the stronger (the suppression, of course, de-suppresses the stronger ones, so the result verges on the effect of your beloved flip-flops). The computer-like precision of response to ASCII codes is simply not there: that's not how natural recognition happens. Computers need a lot more help than we do.
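That picture can be caricatured in a few lines (features, weights, and lexicon are all invented for illustration; this is a toy, not a fitted model): each word "demon" takes a weighted sum of crude cues -- initial letter, length, counts of up and down loops -- and the loudest demon quenches the rest, with no letter-by-letter decoding anywhere.

```python
# Toy Pandemonium-style recognizer. Each word "demon" computes a weighted
# sum over crude squiggle features; mutual suppression leaves one winner.
# Features, weights, and the lexicon are invented for illustration.

def features(word):
    # initial letter, length, ascender count (letters with up-loops),
    # descender count (letters with down-loops)
    return (word[0], len(word),
            sum(c in "bdfhklt" for c in word),
            sum(c in "gjpqy" for c in word))

LEXICON = ["the", "that", "this", "dog", "squiggle"]

def recognize(initial, length, ascenders, descenders):
    scores = {}
    for w in LEXICON:
        f = scores_f = features(w)
        scores[w] = (3.0 * (f[0] == initial)       # initial letter counts most
                     - 1.0 * abs(f[1] - length)    # penalize length mismatch
                     - 0.5 * abs(f[2] - ascenders)
                     - 0.5 * abs(f[3] - descenders))
    # mutual suppression: the strongest response suppresses all others
    return max(scores, key=scores.get)

# A four-letter blur starting with "t", three up-loops, no down-loops:
print(recognize(initial="t", length=4, ascenders=3, descenders=0))  # that
```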

Another consideration is that we also have a phenomenon like the visual response to colors. Somehow our eyes have evolved to be sensitive to just that octave or so of wavelengths that are transmitted through air without much loss. How lucky for us! Somehow we have managed to learn to write and speak words that can be recognized just by mushing together weighted sums of up and down squiggles of various size, given only a few hints like a recognizable (more or less, but just enough) initial letter. This explains, of course, what we see when we look at the fonts available in Word formats -- the immense ransom-note variety of shapes and curlicues and distortions that can be used shows us exactly what DOESN'T matter about letter shapes. It might be interesting to superimpose all the fonts that people can readily read to see just what DOES matter. No doubt someone has done that.

I'm reminded, of course, of those other examples we've seen in which only the first and last letters of a word are placed properly, while the intervening letters are scrambled. But we can still read the words. Or I should say, "words."
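Those scrambled-letter demonstrations reduce to a toy matcher (the little lexicon and the scrambled strings are my own illustrations): treat a "word" as just its first letter, its last letter, and an unordered bag of whatever lies between -- apparently about all the brain insists on.

```python
# Toy version of the scrambled-letters effect: a blur "matches" a word if
# first letter, last letter, and the unordered multiset of interior letters
# all agree. The lexicon is invented for illustration.
from collections import Counter

LEXICON = ["according", "research", "university", "reading", "matter"]

def signature(word):
    return word[0], word[-1], Counter(word[1:-1])

def read(scrambled):
    matches = [w for w in LEXICON if signature(w) == signature(scrambled)]
    return matches[0] if matches else None

print(read("aoccdrnig"))   # according -- interior letters scrambled
print(read("rseaerch"))    # research
```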

It seems to me that all this makes perception easier to understand. We are deceived by all the detail we can experience in sharp pictures and well-formed fonts and hyperarticulated words. The detail doesn't matter much, or at all. It's not a code. It's an analog solution of a set of simultaneous equations with some added edge-sharpening to reduce the number of candidate possibilities. That begins to look remotely feasible for a model of how a ball of neurons might do the job.


Bill P.