E. coli analysis; alerting

[Martin Taylor 950216 10:40]

Bill Powers (950215.1240 MST)

Actually if the step is taken to be a step in direction rather than
distance, only one added calculation needs to be made: to project the
direction until it intersects a radius to the target at right angles,
and use that as the origin for the next step.

    Yes, that's exactly the rationale I used in saying that you could
    get the probability by taking half the length of step that gave you
    the desired probability of improving on the initial state.

Sorry, I don't see any resemblance between my paragraph and your
paragraph. I think you're skipping too many steps; how about trying
again?

The following discussion restates the same idea that we both used. (Perhaps
you missed the posting I referred to in the paragraph you quoted, Martin
Taylor 950213 11:00?) Here's the relevant part:

... the problem is
set up to deal with ONE step. If you make two steps in a given direction,
you can use the data to see what the probability would have been of
improving the situation after one step of double the size. In other
words, you can see how far in a given direction you can go, maintaining
improvement over the initial state. Half that distance (I'm pretty sure) is
the distance at which the improvement will stop as your e. coli keeps taking
consecutive steps in the initial direction.

I'd remove the "I'm pretty sure" now, since I realized that the three
points involved, the origin, the end-point of a step, and the optimum,
define a plane, and the 2-D geometry is easy. It's essentially the same
construction as yours.
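
To make the geometry concrete, here is a minimal numerical sketch (in
Python; the vectors O, T, and u are labels chosen for this illustration,
not anything from the programs under discussion):

    import numpy as np

    O = np.array([0.0, 0.0])   # current position (origin of the step)
    T = np.array([3.0, 4.0])   # the optimum; initial distance |T - O| = 5
    u = np.array([1.0, 0.0])   # unit vector giving the step direction

    # Project the direction until it meets a radius to the target at right
    # angles: s is the distance along u to the foot of that perpendicular.
    s = np.dot(T - O, u)
    foot = O + s * u           # the "origin for the next step" above

    # Travelling along u, the distance to T falls until the foot, then
    # rises, regaining its initial value at 2*s (the chord of the circle
    # about T through O).  So any single step shorter than 2*s improves on
    # the start, and half that distance, s, is where improvement stops as
    # consecutive steps are taken in the same direction.
    print(np.linalg.norm(T - (O + 2 * s * u)))  # 5.0, back to the start
    print(np.linalg.norm(T - foot))             # 4.0, the closest approach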


=====================
In the same post, in case it never got to you, there is the following
passage (the quote is from Bill Powers (950210.1900 MST)):

A full analysis of the problem requires taking the reference signal into
account. When e. coli has a positive nonzero reference signal for the
rate of increase of the nutrient, this is equivalent to defining an
angle to left and right of the optimum direction that is less than 90
degrees each way, or in 3 dimensions, a cone. Tumbles will occur when
the result is a direction lying outside the angle or cone, even if the
resulting rate is still somewhat positive. This makes for more tumbling,
but more progress toward the goal when a favorable result does occur.

That would be an interesting complementary study. It might not require
much extra programming, but it would sure require a lot of extra computer
time, since there's another dimension to be examined. As matters stand,
our HP-9000 takes much of the weekend to do the higher-dimensional tests.
I'll enquire.

I have enquired, and am told that programmatically it is trivial to do
this. The only problem is the amount of computer time required to scan
the extra variable. I have asked my colleague to do it, looking at different
ratios of improvement, starting with the probability that a step of length
M induces an improvement of at least 5% in a space of D dimensions. That
5% will be what changes from run to run. But each such run takes quite a
few hours of CPU time, so there won't be many.
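
For concreteness, here is a rough Monte Carlo sketch of that calculation
(in Python; my own stand-in for whatever program actually gets run, so
treat the details as illustrative). Starting at unit distance from the
optimum, it draws random directions in D dimensions and counts how often
a step of length M ends up at least the given fraction closer:

    import numpy as np

    rng = np.random.default_rng(0)

    def p_improve(M, D, ratio=0.05, trials=200_000):
        # Probability that one step of length M, taken in a uniformly
        # random direction in D dimensions, moves a point at unit distance
        # from the optimum at least `ratio` (e.g. 5%) closer to it.
        u = rng.normal(size=(trials, D))
        u /= np.linalg.norm(u, axis=1, keepdims=True)  # random unit vectors
        cos_phi = u[:, 0]          # put the optimum along axis 0, WLOG
        # squared distance after the step: 1 + M^2 - 2*M*cos(phi)
        new_d = np.sqrt(1.0 + M * M - 2.0 * M * cos_phi)
        return np.mean(new_d <= 1.0 - ratio)

    for D in (2, 3, 10, 50):
        print(D, p_improve(M=0.2, D=D))

Note that a nonzero required ratio is one way of realizing the cone in the
quoted passage: the step gains at least the required fraction only when
cos(phi) >= (1 + M^2 - (1 - ratio)^2) / (2*M), i.e. only within an angle
of less than 90 degrees each way about the optimum direction.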

================

On alerting: I see your discussion as very much akin to a cognitive
psychologist's discussion of "control by planning." It is fine for
changing references in an undisturbed world, but not for the real world.

While I'm talking on the phone, I may realize that I haven't stored what
I was writing for some time. So the higher-level systems will tell the
caller to wait, go to the keyboard, store the text, and go back to the
talking-on-the-phone system. No problem. This isn't some special
phenomenon called "alerting." It's just time-sharing.

Yes, fine. Nothing unpredicted happened in the outer world that affected
a perception you were not at the time controlling. Under the same kind
of conditions, plans work quite well.

As for what "alerting stimuli" are: I hope I made it quite explicit, both
earlier and yesterday, that there is no suggestion of cause and effect in
them. But perform a little thought experiment (which you could make real).

Imagine yourself immersed in, say, writing something that engages you
deeply. Now imagine the different things that you might well do under a
variety of circumstances:

(a) someone you didn't know was there touches you on the back of the neck;
(b) someone you didn't know was there says your name in a normal voice;
(c) someone you didn't know was there says something in a normal voice to
    other people who were already conversing;
(d) there is a noise of a crack or thud somewhere close;
(e) in another window on your screen, data from an analysis continue to
    scroll up;
(f) a word in the text you have written, near where you are looking,
    changes without your having commanded it to...

I guarantee you that some of these events are more likely to be followed
by a change in what you are controlling for than others are. In fact,
in (f), even though you are at that moment PRIMARILY controlling for perceiving
a text to be the way you want, it is highly unlikely that you will see the
word change at all. (There are lots of relevant studies on whether and
when people see words changing; and one of the mistakes in the Macintosh
interface is to put all the warning dialogue boxes in the same place, so
that the text can change without the user seeing that it has done so.)

None of these events in any way causes attention to shift. As you, Rick,
and I all seem to agree, what happens is that some perception changes in
a control system, and that change has the effect of shifting the locus of
control. But being touched unexpectedly more often results in an
attention shift than does hearing an unexpected sound; hearing your name,
more than hearing some other word; seeing a flicker or a flash in the
visual periphery, more than the same in central vision (though in this
last case you could argue--I think incorrectly--that you are already
controlling for all that you see in central vision).

Maybe you should offer an example that you think PCT can't handle
without the "alerting" hypothesis.

I'm not sure what you are asking. My thesis is that PCT implies the
alerting notion, not that PCT can't handle something without it. The
only way I can see to construct an example in which PCT could work without
alerting is a simulated hierarchy that has exactly the same number of
perceptual degrees of freedom at each level as output degrees of freedom.
Real animals aren't like that. Maybe you could characterize what the kind
of example you are asking for would look like?

Martin