Uncertainty (was second-order and third-order beliefs)

[From Bill Powers (2008.04.02.1106 MDT)]

Martin Taylor 2008.04.02.10.38 –

Bill:

At any rate, it seems to me
that saying I believe something introduces the possibility of doubt.
“He’s an honest person,” versus “I believe he’s an honest
person”.

Martin:
Yes, I agree. That’s the distinction I would make between the kind of
perception in “I see a chair in this room” and “I believe
there is a chair in the next room” and “I believe that what I
see through the fog is a chair”. They are all perceptions, but the
latter two have less supporting information – the perceptual input
function has ambiguous inputs along with the well defined ones.
This suggests that all perceptions actually have two attributes: the
value and the uncertainty.
-------------------

I see what you mean. However, I don’t see the uncertainty as being an
attribute of a perception, but simply as another perception. I’d say it
is the output of a perceptual input function that takes lower-order
perceptions as inputs and generates an output perceptual signal that is a
report on the degree of uncertainty being perceived. To see a
relationship between that perception and the perceptions from which it
was derived would require a third perceptual input function receiving
both the uncertainty signal and the signals from which the uncertainty
was derived. That would generate the perception of a relationship between
a sense of uncertainty and some other perception. It would also explain
how we can perceive an unpredictably varying perception without feeling
uncertain about it, and how we can have a feeling of uncertainty without
knowing what it is we’re uncertain about.
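The wiring described here, one input function that outputs an uncertainty signal and a second that perceives the relationship between that signal and its source, can be sketched in a few lines of Python. All names are hypothetical, and the spread-based measure is only one possible stand-in for "degree of uncertainty":

```python
import statistics

def uncertainty_pif(lower_order_signals):
    # A perceptual input function whose output signal reports the
    # degree of uncertainty (here: spread) in its lower-order inputs.
    return statistics.stdev(lower_order_signals)

def relationship_pif(uncertainty_signal, source_signal):
    # A further input function receiving both the uncertainty signal
    # and the signal it was derived from, yielding a perception of
    # their relationship (relative uncertainty, one arbitrary choice).
    return uncertainty_signal / (abs(source_signal) + 1e-9)

samples = [4.8, 5.1, 5.0, 4.9, 5.2]      # lower-order perceptual signals
u = uncertainty_pif(samples)              # a perception of uncertainty
rel = relationship_pif(u, statistics.mean(samples))
```

Note that `uncertainty_pif` neither knows nor cares what its inputs represent, which is consistent with having a feeling of uncertainty without knowing what it is about.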

My biggest problem is with your proposal for a vector representation
(it’s not really a complex number, is it? No square roots of -1). This
requires the ability to label signals: this is a signal indicating a
perception, and that is a signal indicating the uncertainty in the
perception (or the thing perceived). I don’t think there is any way to
label neural signals. They are all alike, just like electrical signals in
a circuit. And the label-signal itself would have to be indicated as
being a label by another signal, and so on. I don’t think that works.
You’re really reverting to the “pattern” or “coding”
idea of perception, in which one neural signal pathway can carry signals
representing many different perceptions, with the code indicating which
perception it is. That idea in turn is conditioned by the concept that
nerves carry “messages” which are really discrete packages –
actually, what I would call category perceptions. This came out of the
transition from analog computers to digital computers as the basic brain
models. It leads to the AI way of thinking, in which once the name of an
action has been decided upon, we can assume that the action has been
carried out. The whole system is based on words and logic, not continuous
processes and physical circuits.

PCT is at bottom a circuit-based kind of model, in which signals are just
signals, serving only to connect the output of one function-network to
the input of another and conveying only magnitude information. No signal
considered in isolation has any meaning; to figure out its meaning you
have to know where it came from, and also what is done with it after it
gets to its destination. And, wasteful or not, the basic architecture is
that of Selfridge’s Pandemonium model, in which each different kind of
perception comes from a physically different input function that produces
it. Only the connectivity matters; the signals are all alike. There is no
machinery at the receiving end that can distinguish different kinds of
perceptions in any one signal path. What arrives by one signal path is
simply a representation of the magnitude of one variable. The variable
might represent uncertainty, or it might represent the taste of
applesauce. But it just tells the destination how much of it there is,
not WHAT it is. The identification of WHAT doesn’t come until the
category level – and then, all that says WHAT is there is the fact that
there is more signal in this path than that path.
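The place-coding idea in the paragraph above can be sketched minimally (the function, weights, and signal names are illustrative inventions, not anything from the PCT literature):

```python
def higher_pif(inputs):
    # A destination input function: it cannot inspect WHAT its inputs
    # are; meaning is fixed entirely by which input port (wire) a
    # magnitude arrives on.
    taste_of_applesauce, uncertainty = inputs  # roles fixed by wiring order
    return 0.7 * taste_of_applesauce + 0.3 * uncertainty

# Two signals, indistinguishable as values...
signal_on_wire_a = 0.6
signal_on_wire_b = 0.6
# ...acquire different meanings only through the connections they ride on.
p = higher_pif((signal_on_wire_a, signal_on_wire_b))
```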

Best,

Bill P.

Re: Uncertainty (was second-order and third-order beliefs)
[Martin Taylor 2008.04.02.14.48]

This is in part a test message in an investigation of why a long
message on this topic failed twice to be posted when I submitted it,
and also failed when Rick submitted it on my behalf. This version may
fail to be posted, because I use the name Shannon.

[Bill Powers (2008.04.02.1106 MDT)]

Martin Taylor 2008.04.02.10.38 –

Bill:

At any rate, it seems to me that
saying I believe something introduces the possibility of doubt.
“He’s an honest person,” versus “I believe he’s an
honest person”.

Martin:
Yes, I agree. That’s the distinction I would make between the kind
of perception in “I see a chair in this room” and “I
believe there is a chair in the next room” and “I believe
that what I see through the fog is a chair”. They are all
perceptions, but the latter two have less supporting information –
the perceptual input function has ambiguous inputs along with the well
defined ones.
This suggests that all perceptions actually have two attributes: the
value and the uncertainty.
-------------------

I see what you mean. However, I don’t see the uncertainty as being an
attribute of a perception, but simply as another
perception.

I think you and I and Rick are all agreed on that. Shannon
certainly would say so. For Shannon, all uncertainty is about
something at the other end of a communication channel – in this case the
Perceptual Input Function and what lies between it and the
environmental variable to which the perceptual signal value has an
uncertain relationship. Shannon’s “Information” is all about
changes in the uncertainty about something before and after the
something is observed, whether it is about the identity of the
something or about its value on a continuum.

I’d say it is the output of a
perceptual input function that takes lower-order perceptions as inputs
and generates an output perceptual signal that is a report on the
degree of uncertainty being perceived.

Uncertainty about what? That’s been the main issue that’s been
troubling me. Just to perceive uncertainty is of no value. You can’t
control such a perception. The only way to control a perception of
uncertainty is to act in a way that relates to the thing in the
environment about which you are uncertain.

To see a relationship between that
perception and the perceptions from which it was derived would require
a third perceptual input function receiving both the uncertainty
signal and the signals from which the uncertainty was derived. That
would generate the perception of a relationship between a sense of
uncertainty and some other perception.

This gets a little baroque, doesn’t it? And it seems to
presuppose a wiring to a specific relation perceiver that has input
functions hard-wired from each perceptual signal and from its
corresponding uncertainty perceptual signal. Then there is an issue as
to how that “reference” perceptual control for the relation
perception comes to affect the actual control of the uncertainty
perception about the perception that is uncertain. In effect, you are
doubling or tripling the number of independent control units in the
system.

Isn’t it simpler to imagine that the relation between perceptual
signal values and their uncertainties is maintained in the connection
pattern through the system up to the level at which the particular
perception is produced. Such a connection organization would avoid
several problems, without, so far as I can see, introducing the new
ones that yours does.

At first sight, I don’t like your proposal, because I don’t see
how it could work, and because it seems unnecessarily complicated.
Neither reason is sufficient to say it’s wrong, though.

It would also explain how we can
perceive an unpredictably varying perception without feeling uncertain
about it,

That, we can do, for sure. But unpredictably varying has only to
do with uncertainty for predicted future value, not with uncertainty
about the present value, which is the value we normally don’t feel
uncertain about. (Caveat: We do feel uncertain about the present
value, though, if the changes in value are fast enough as well as
being unpredictable.)

and how we can have a feeling of
uncertainty without knowing what it is we’re uncertain
about.

I suppose, since uncertainty is a perception, one can have
uncertainty about almost anything, and knowing what we are uncertain
about could be one of them. But I don’t know that I can remember
experiencing it.

My biggest problem is with your proposal
for a vector representation (it’s not really a complex number, is it?
No square roots of -1). This requires the ability to label signals:
this is a signal indicating a perception, and that is a signal
indicating the uncertainty in the perception (or the thing
perceived).

In the system whence I got the notion, it was indeed a complex
number defined by r-theta, where in PCT language theta was the
magnitude, and the greater r, the less uncertainty about theta. That
representation worked very well in a perceptron-like structure.
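If the r-theta representation Martin describes were adopted, the encoding and decoding might look like this sketch (function names and the sample numbers are illustrative assumptions):

```python
import cmath

def encode(theta_value, r_certainty):
    # Hypothetical r-theta encoding: the angle carries the perceptual
    # magnitude, the radius the degree of certainty about it.
    return cmath.rect(r_certainty, theta_value)

def decode(z):
    # Recover (value, certainty) from the complex signal.
    return cmath.phase(z), abs(z)

z = encode(0.8, 0.9)          # a fairly certain perception of value 0.8
value, certainty = decode(z)
```

Carrying such a signal between units would need only two real-valued connections in place of one, which bears on the labelling question discussed below.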

I don’t think there is any way to
label neural signals. They are all alike, just like electrical signals
in a circuit.

But in the PCT hierarchy, all signal paths are labelled. They
connect THIS output to THAT input. No more labelling than that is
required.

You’re really reverting to the
“pattern” or “coding” idea of perception, in which
one neural signal pathway can carry signals representing many
different perceptions, with the code indicating which perception it
is.

That isn’t necessary. Supposing the complex number representation
were actually valid, all that would be required for using complex
numbers would be to replace the single connection for value with two
connections.

That idea in turn is conditioned by
the concept that nerves carry “messages” which are really
discrete packages – actually, what I would call category
perceptions.

I don’t know that it need be category perceptions at all.
Apparently the timing pattern of neural impulses relative to their
neighbours is critical, perhaps more so than their frequency. That
gives a two-dimensional analogue representation, which could
accommodate a complex variable, especially since it seems that the
less prominent the target (such as a light flash or, probably, a tone)
the earlier the first relevant impulse.

PCT is at bottom a circuit-based kind of
model, in which signals are just signals, serving only to connect the
output of one function-network to the input of another and conveying
only magnitude information.

Classically, in neuropsychology, that’s called “place
coding”. There used to be a lot of argument as to whether
place-coding or signal-coding was the sole or the most important
method of getting information through the neural system. I am not up
on contemporary neurophysiology, but the little I do know suggests
that both are important. Certainly impulse timing seems to
matter.

No signal considered in isolation
has any meaning; to figure out its meaning you have to know where it
came from, and also what is done with it after it gets to its
destination. And, wasteful or not, the basic architecture is that of
Selfridge’s Pandemonium model, in which each different kind of
perception comes from a physically different input function that
produces it. Only the connectivity matters; the signals are all
alike.

Yes. In a place-coding system it doesn’t matter that one path
resulting in P(V) connects, eventually, to environmental variable V,
and another derives ultimately from some function that outputs
uncertainty “U(V)”. The fact that the signals are on THOSE
wires is what matters. And it would be the same if the Perceptual
Input Function that produces P(V) also produced U(V), and if those two
signals, on separate connectors, both served as input to any
higher-level PIF that used P(V).

There is no machinery at the
receiving end that can distinguish different kinds of perceptions in
any one signal path. What arrives by one signal path is simply a
representation of the magnitude of one variable. The variable might
represent uncertainty, or it might represent the taste of
applesauce.

Yep.

But it just tells the destination how
much of it there is, not WHAT it is.

So far, so good.

The identification of WHAT doesn’t
come until the category level – and then, all that says WHAT is there
is the fact that there is more signal in this path than that
path.

Why would you say that? WHAT it is determines which error signal
induces what action outputs, outputs that affect WHICH environmental
variable. That’s as true at the lower levels as it is at the category
level and above. I fail to see what “category level” has to
do with the situation.

In a way, I hope my invocation of Shannon prevents this message
being posted, because that would resolve a mystery.

Martin

[from Tracy Harms (2008 4 2 13:50 Pacific)]

Martin Taylor wrote:

At any rate, it seems to me that saying I believe something
introduces the possibility of doubt. “He’s an honest person,”
versus “I believe he’s an honest person”.

Yes, I agree. That’s the distinction I would make between the
kind of perception in “I see a chair in this room” and “I
believe there is a chair in the next room” and “I believe that
what I see through the fog is a chair”. They are all
perceptions, but the latter two have less supporting
information – the perceptual input function has ambiguous
inputs along with the well defined ones.

This suggests that all perceptions actually have two
attributes: the value and the uncertainty.

Yes, when somebody couches a claim in terms of belief it indicates a
desire to communicate a degree of uncertainty. But no, perceptions do not have uncertainty as an attribute.

Perceptual function input contributors cannot be sorted into categories such as well-defined or ambiguous. The perceptual system input subsystem encounters whatever it encounters, from which it constructs a composite signal. The system, per se, has no potential for discerning the relative effects of various factors on the ultimate signal that results, it only has the ability to produce the result.

Between the point where I began composing this and now I see a reply by Bill Powers in which he lays out neatly the line of thinking I was hoping to promote. (2008.04.02.1106 MDT)

Since I’ve been silent on the list so long, I guess I’ll post this anyway – though it devolved into “what Bill said!”

Best regards to all,

Tracy

···


Testing 2

···

On Wed, Apr 2, 2008 at 2:10 PM, Richard Marken <rsmarken@gmail.com> wrote:

[From Rick Marken (2008.04.02.1410)]

I'm sending Martin's mystery post on uncertainty as an attachment. I'm
uncertain whether it will get there.

Best

Rick
--
Richard S. Marken PhD
rsmarken@gmail.com


Testing 3

···

On Wed, Apr 2, 2008 at 1:49 PM, Tracy Harms <t_b_harms@yahoo.com> wrote:


--
Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2008.04.02.1536 MDT)]

Tracy Harms (2008 4 2 13:50 Pacific) -

Perceptual function input contributors cannot be sorted into categories such as well-defined or ambiguous. The perceptual system input subsystem encounters whatever it encounters, from which it constructs a composite signal. The system, per se, has no potential for discerning the relative effects of various factors on the ultimate signal that results, it only has the ability to produce the result.

Tracy, that is exquisitely clear. I endorse your view completely.

Best,

Bill P.

[From Bill Powers (2008.04.02.1447 MDT)]

Martin Taylor 2008.04.02.14.48 --

Message came through fine.

I think you and I and Rick are all agreed on that. Shannon certainly would say so. For Shannon, all uncertainty is _about_ something at the other end of a communication channel -- in this case the Perceptual Input Function and what lies between it and the environmental variable to which the perceptual signal value has an uncertain relationship.

I think that's the critical difference between us right now. If I'm uncertain, it's about the perception, not what it represents (that's an entirely different order of uncertainty -- is there a real world out there?). If somebody says "sit" and my deaf old ear hears it, I wonder if what I heard was "sit" or "fit". And I don't mean that I wonder if the original utterance was one word or the other -- I'm not sure what the perception was. Did I hear the sound-configuration I spell sit, or the one I spell fit? There is a slight difference in the sibilance but they're easy to confuse. At that point I'm not interested in epistemology at all. I'm just wondering if I should imagine that I heard (the sound of) fit, or (the sound of) sit.

It's not as if there are two entities, the perception, and the thing that it represents. I never experience the thing that it represents -- only the perception. In fact, it seems to me, usually, that the perception IS the thing. I reify it and put it in the space outside me, which means of course that I put it among the perceptions I call "outside."

Shannon's "Information" is all about changes in the uncertainty about something before and after the something is observed, whether it is about the identity of the something or about its value on a continuum.

Yes, that makes it clear. Shannon apparently thought we could experience both the observation and the thing that is observed, separately.

I'd say it is the output of a perceptual input function that takes lower-order perceptions as inputs and generates an output perceptual signal that is a report on the degree of uncertainty being perceived.

Uncertainty about what? That's been the main issue that's been troubling me. Just to perceive uncertainty is of no value. You can't control such a perception. The only way to control a perception of uncertainty is to act in a way that relates to the thing in the environment about which you are uncertain.

Well, that's the problem here, isn't it? We all feel a bit uncertain about where the others are coming from. If we knew exactly what we feel uncertain about, we'd simply clarify the matter and go on, no big deal. Uncertainty results from poor input information, as you pointed out about my three kinds of uncertainty. Those were causes of uncertainty, but the uncertainty itself was in the receiver of the information, not in the information. As Rick has been trying to say, the uncertainty doesn't arise until you have to do something based on the information. As Tracy said so crisply and clearly, the information is just what it is. It isn't certain or uncertain. The input functions process it and produce the outputs they produce.

I'm beginning to think that uncertainty is one of those words like intelligence that seems fraught with meaning until you really focus on it and start asking hard questions about what it means. Then it sort of falls to pieces, exposing the vacancy behind it.

Best,

Bill P.

[Martin Taylor 2008/04/03/00/34]

[From Bill Powers (2008.04.02.1447 MDT)]

Martin Taylor 2008.04.02.14.48 --

Shannon's "Information" is all about changes in the uncertainty about something before and after the something is observed, whether it is about the identity of the something or about its value on a continuum.

Yes, that makes it clear. Shannon apparently thought we could experience both the observation and the thing that is observed, separately.

That's exactly opposite to Shannon's position.

Martin

[Martin Taylor 2008.04.03.09.20]

[From Bill Powers (2008.04.02.1447 MDT)]

It's interesting that Bill sees uncertainty as relating only to what is perceived and not its relation to the relevant environmental variable, whereas Rick insists that there is no uncertainty about what is perceived, and any uncertainty is only about its implications. Meanwhile, I have been prevented, multiple times by different routes, from posting my message written March 31 on how I see uncertainty.

I'm going to try to split that message into two parts, for two reasons. One is to see whether either or both will get through and at least submit part of my argument for consideration. The other is to try to localize the problem in the message, if the problem is indeed local, so that it can be fixed in the CSGnet mailer (I can send the message safely and completely to Rick, but he can't post it for me any more than I can post it for myself).

So, I will send two messages headed not by the usual date stamp but by "Part 1" and "Part 2", followed by text that should be read as a single message. If one of them fails to come through, I will split it and send "Part x.1" and "Part x.2". It won't make for easy reading, I'm afraid, but it may help reduce your uncertainty about (your perception of) my meaning when I talk about "uncertainty".

Martin

Part 1

[Martin Taylor 2008.03.31.17.30]

[From Bill Powers (2008.03.30.1045 MDT)]

Martin Taylor 2008.03.30.16.27 --

... and Rick Marken.

It's been useful to watch this interchange without getting involved for a few days. I think the basic problem here is that there isn't any agreement on what "uncertainty" means. Martin thinks there can be an "uncertainty signal" accompanying every perceptual signal;

Actually, no. I suggest that as a possibility to be examined.

Rick thinks uncertainty is in the perceptual input functions of the beholder and is just an attitude toward other perceptions. I think it's a queasy feeling in the pit of the stomach that comes from having to do something but not knowing what to do. But what is uncertainty, really?

One way to define it is in terms of variability. But that would mean that in any single sample of a perceptual signal, there is never any uncertainty, because variability exists only in multiple samples.

I think of it in the more technical sense of Bayesian analysis or of Shannon. As a perception, that's as hard to describe as is the perception of "red" (either the word or the colour). So, I make do with the technical description and map it onto the perception as though they were the same thing. That's improper, I know. One really ought to do experiments tracking and disturbing the uncertainty of some perception, and look for the appropriate functions in the same way Rick tests for whether it's a+b or a*b that someone is controlling.

If you do take the Bayes or Shannon view of uncertainty, it is not defined in terms of variability. It is defined as a probability distribution _about_ something. That probability distribution defines how much you (believe you) know about something, so long as the possibilities can be described either as discrete entities or as points in an n-dimensional continuum (in the latter case we are talking about probability density rather than simple probability, but it's the same thing -- one can be transformed into the other).

How you come to the probability distribution that defines uncertainty is another matter. You may derive it from observations of historic variability in the variable concerned. That is classically how it is often done. But you can also derive it from a model or from experience with similar situations. However it is done, the result is a _subjective_ measure (i.e. a perception), for both Bayes and Shannon. For example, suppose a bright dot is presented for a single frame on a CRT, and one is asked to move a second dot to the same pixel on the screen. One can put it fairly close, but would never be sure of being on the exact same pixel. The perception of the location of the flashed dot has some uncertainty (or at least the memory of it does, and the original perception probably does). There's no variation from which to derive that uncertainty.
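The Shannon measure being invoked here can be sketched in a few lines (the distributions are made-up illustrations): uncertainty is the entropy of a probability distribution about something, and "information" is the drop in that entropy from before to after an observation.

```python
import math

def entropy_bits(p):
    # Shannon uncertainty, in bits, of a discrete probability distribution.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Uncertainty about WHICH of four candidate pixels the dot flashed on:
prior = [0.25, 0.25, 0.25, 0.25]     # before looking: no idea
posterior = [0.7, 0.2, 0.05, 0.05]   # after a glimpse: narrowed, not resolved
information = entropy_bits(prior) - entropy_bits(posterior)
```

The observation reduces but does not eliminate the uncertainty, matching the flashed-dot example: one can place the second dot fairly close without ever being sure of the exact pixel.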

Part 2 will follow, and will include this last paragraph at its head, with no datestamp or anything like that.

[Martin Taylor 2008.04.03.09.33]

I have sent my March 31 message in two parts. Probably neither will make much sense without the other, or at least neither will be complete without the other. If both arrive, then the problem isn't localizable within the message. If one arrives, then I will be harassing you with further subdivisions until the location of the problem can be found.

I'm sorry to do this, but I see no other way of sorting out a problem with the CSGnet mailer that cannot affect this message alone. Or of getting my message out without rewriting it.

Martin

[From Bill Powers (2008.04.03.0826 MDT)]

Looks as if it came through this time.

Part 1

Martin Taylor 2008.03.31.17.30 --

Bill:
One way to define it is in terms of variability. But that would mean that in any single sample of a perceptual signal, there is never any uncertainty, because variability exists only in multiple samples.

Martin:
I think of it in the more technical sense of bayesian analysis or of Shannon. As a perception, that's as hard to describe as is the perception of "red" (either the word or the colour). So, I make do with the technical description and map it onto the perception as though they were the same thing. That's improper, I know. One really ought to do experiments tracking and disturbing the uncertainty of some perception, and look for the appropriate functions in the same way Rick tests for whether it's a+b or a*b that someone is controlling.

If you do take the Bayes or Shannon view of uncertainty, it is not defined in terms of variability. It is defined as a probability distribution _about_ something.

There's the problem. We should always be wary when an ordinary word like "about" turns up underlined or otherwise gratuitously emphasized, because that tells us it's being used in a fuzzy undefined way. Of course the apparently intended effect of the underlining is to tell us this is an important word, but it is invariably left undefined when used this way. It's like an appeal to a meaning that is too deep and obvious to mention, that all listeners are supposed to dredge up when they see the underlining, so they can be confident that they know what is meant. In fact just the opposite is the case. When you see a word underlined, you are pretty safe in assuming that neither you nor its author knows what it means.

When you say a probability distribution is _about_ something, just what is this aboutness relationship, and what knows that it exists? Over here is the probability distribution; over there is a sequence of discrete occurrences (which happen in exactly one sequence and no other) with a label on which is written the word "uncertainty." Exactly what is the connection between the distribution and the cloud?

I think I know what it is. It is the same connection we seem to experience between ourselves and something we are "looking at." Looking is _about_ the thing we are doing the looking at. There's a sense of the aperture through which we do the looking, and something extending from that aperture toward the thing that is the object of the looking-at. We gaze, we stare, we direct piercing glances, we focus on the object, we concentrate our attention on it. Can't you just feel that flow of something going from us to it?

Of course we know now that nothing of the sort is happening. The light rays are coming from the something, entering our eyes, and giving rise to neural signals that come inward to our brains, where we are. The flow of information is exactly opposite to the direction we imagine when we feel ourselves "looking at" something.

I suggest that the same thing is going on when someone speaks of the probability distribution being _about_ the uncertainty. I suggest that the relationship goes in exactly the opposite direction: the probability distribution is how someone who thinks in those terms perceives uncertainty when a set of lower-order perceptions sometimes appear one way and sometimes another way. The uncertainty is embodied in the probability distribution, not out there in the thing being represented that way.

It seems to me that you are really trying to say the same thing:

How you come to the probability distribution that defines uncertainty is another matter. You may derive it from observations of historic variability in the variable concerned. That is classically how it is often done. But you can also derive it from a model or from experience with similar situations. However it is done, the result is a _subjective_ measure (i.e. a perception), for both Bayes and Shannon. For example, suppose a bright dot is presented for a single frame on a CRT, and one is asked to move a second dot to the same pixel on the screen. One can put it fairly close, but would never be sure of being on the exact same pixel. The perception of the location of the flashed dot has some uncertainty (or at least the memory of it does, and the original perception probably does). There's no variation from which to derive that uncertainty.

I agree that the variation in question occurs over time, but you appear to be trying to disagree with that. "Experience with similar situations" seems to refer to perceptions of lower order that are different from one instance to another. That's all I mean when I speak of "variability." You say "There's no variation from which to derive that uncertainty" in the flashing of one dot at one instant, and of course I agree. But over a span of time including many instances of the dot placement, there is a particular sequence of different placements of exactly the kind I mean, and of exactly the kind represented by a probability distribution. Of course the probability distribution omits the factor of sequence; there is any number of ways to generate a given distribution, including using the formula for a normal curve to create the points in a totally systematic way. Seeing a probability distribution doesn't tell you whether the variations were random or completely ordered. In fact, the distribution is a way of imposing order where none exists: that smooth and symmetrical curve does not represent anything that actually exists in the lower-order world. I could make a series of systematically varying predictions that have exactly the same distribution as the actual uncertain phenomenon has, and get almost every prediction wildly wrong.

A probability distribution is about lower-level perceptions in the same sense that a perception of a shape is about the sensations from which the shape is derived. The aboutness arrow runs from the sensations to the shape; likewise, it runs from the variations in lower-level perceptions to the distribution curve, a higher-level invariant derived from lower-level details.

Best,

Bill P.

[from Tracy B. Harms (2008 4 3 8 30)]

Shannon’s work has no bearing on the sort of uncertainty we’re attempting to wrestle with in this discussion. It is, rather, about signalling adequacy. While I do think that signal-compression questions are relevant to certain types of control-system problems (i.e. those that involve signal transmission between components, such as input function to comparator function), it has no meaning for the sort of problem that has been raised here.

The problem at hand has to do, in large part, with the adequacy of the control system across a range of possible environments. One of the most debilitating mistakes of knowledge-theory has been conceiving of the relationship between environment and agent as one of signal production by the environment, with the agent’s role being reception and decoding of the signal. Avoidance of that confusing (yet seductive) metaphor is one of the big benefits PCT has to offer.

Tracy

···

Martin Taylor mmt-csg@MMTAYLOR.NET wrote:

[Martin Taylor 2008/04/03/00/34]

[From Bill Powers (2008.04.02.1447 MDT)]

Martin Taylor 2008.04.02.14.48 –

Shannon’s “Information” is all about changes in the uncertainty
about something before and after the something is observed, whether
it is about the identity of the something or about its value on a
continuum.

Yes, that makes it clear. Shannon apparently thought we could
experience both the observation and the thing that is observed,
separately.

That’s exactly opposite to Shannon’s position.

Martin



[From Bill Powers (2008.04.03.0932 MDT)]

The post I just replied to at length, Part 1 of your transmission, arrived at 09:31. This second part arrived at 09:36 AM, as follows:

[Martin Taylor 2008.04.03.09.33]

I have sent my March 31 message in two parts. Probably neither will make much sense without the other, or at least neither will be complete without the other. If both arrive, then the problem isn't localizable within the message. If one arrives, then I will be harassing you with further subdivisions until the location of the problem can be found.

I'm sorry to do this, but I see no other way of sorting out a problem with the CSGnet mailer that cannot affect this message alone. Or of getting my message out without rewriting it.

Martin

There was no Part 2.

Best,

Bill P.

[Martin Taylor 2008.04.03.16.50]

[From Bill Powers (2008.04.03.0826 MDT)]

Looks as if it came through this time.

I haven't seen part 2 yet, and you are responding to Part 1. I won't comment on this until I've got the whole message across somehow or other. And I think it would make for a more coherent conversation if you withhold comment on any of it until you have all of it.

I'm going to post Part 2 in two pieces, 2.1 and 2.2. One of those is likely to fail. If 2.1 gets through, please append it to the Part 1 you have received. There will be a one-paragraph overlap at the start of Part 2.1.

Martin

[Martin Taylor 2008.04.03.17.31]

Just for the record, I sent out Parts 2.1 and 2.2 of my [Martin Taylor 2008.03.31.17.30] message 30 minutes ago, so if you see this and haven't seen them, presumably both fell into the black hole, meaning that the problem is in the area of overlap between them. If so, I'll rewrite the overlap bit and try to send an entire Part 2.

Martin

[From Bill Powers (2008.04.03.1553 MDT)]

Martin Taylor 2008.04.03.17.31 --

Neither part 2.1 nor 2.2 has arrived here at 17:53 your time.

Best,

Bill P.

[Martin Taylor 2008.04.03.23.22]

I have sent Part 2 again, complete, having rewritten the paragraph that overlapped between the failed 2.1 and 2.2. There were two overlapping paragraphs, but the first was a quote, so it shouldn't have caused the problem. We shall see.

Martin

[Martin Taylor 2008.04.03.23.41]

> I have sent Part 2 again, complete, having rewritten the paragraph that overlapped between the failed 2.1 and 2.2. There were two overlapping paragraphs, but the first was a quote, so it shouldn't have caused the problem. We shall see.

Well, that didn't work, so I have written a paraphrase of everything that overlapped between the two parts, and posted the whole Part 2 again.

Martin

[From Rick Marken (2008.04.03.2240)]

Still no part 2 at 10:38 PDT. What are you putting in those arguments?
I think you are just demonstrating how uncertain perceptions can
be;-)

Best

Rick

···

On Thu, Apr 3, 2008 at 8:43 PM, Martin Taylor <mmt-csg@mmtaylor.net> wrote:

[Martin Taylor 2008.04.03.23.41]

> I have sent Part 2 again, complete, having rewritten the paragraph that overlapped between the failed 2.1 and 2.2. There were two overlapping paragraphs, but the first was a quote, so it shouldn't have caused the problem. We shall see.

Well, that didn't work, so I have written a paraphrase of everything that
overlapped between the two parts, and posted the whole Part 2 again.

Martin

--
Richard S. Marken PhD
rsmarken@gmail.com