Update (Re: Reading versus imagination (was Uncertainty, Information and control part I: U, I, and M))

[Martin Taylor 2013.04.02.10.30]

Since I have been silent on the question of perceptual uncertainty for a couple of weeks, I thought I should explain why. It has nothing to do with lack of interest, as Rick surmised; in fact, the reason is the exact opposite. I found I was devoting so much time to the question of identifying the controlled variable in my little example of coding in the "Processing" language that I was not leaving time to: find the paperwork for this year's taxes; prepare an abstract for a workshop on PCT in computer interfaces (I don't think I will be going, since it is just one day in the UK); deal with stuff for my NATO group; actually program the demos I want to use in part 2 of my tutorial; program the kind of tests for the controlled variable that I had suggested were required in the earlier part of this thread; and finally, program the simulation that I had agreed with Bill was required to demonstrate the effective equivalence between the width of a perceptual tolerance zone and decreased apparent loop gain (this last is the reason for the tests I had suggested in testing for the controlled variable).

Most of the above reasons for delay still hold, but I am writing this now because I will be away for the next week or so, and I didn't want you to think I had forgotten you.

Now to specifics:

[From Rick Marken (2013.03.18.2130)]

Rick Marken (2013.03.17.1510)--

RM: Here are two figures that summarize the results of my analysis of some
data from Martin's pursuit tracking task where the main independent
variable was the horizontal separation, s, between cursor and target...
I can think of two possibilities; one is that system noise is amplified as s
increases; the other is that the same level of existing noise in the system
"shows through" in performance more when the loop gain goes down
(as it does as s increases). This will be the next thing I try to test with the models.

RM: Well, not much interest in this apparently. But I find it
fascinating. I did try adding a fixed level of noise to the simulation
(a random number between -.4 and .4 that was added to the input
variable of the v and a model). As expected the noise degraded
performance equally for all separations for the v control model; but
the noise degraded performance more as separation increased for the a
control model.

There are two problems with this approach:

1. If you think of this as a model of what is happening within the human, you have a perceptual value that jitters all over the lot, but that's not what one actually perceives. What one perceives is that small changes in the relative positions of the target and cursor make no difference in the perception. If the orientation of the two is held constant, the perception does seem to change over time, but only slowly, not in a jittery manner. The fluctuations may well represent noise, but it is noise that is averaged over some long time period. Changing the perceptual relation to the sensory input randomly frame-by-frame does not model the perceptual experience very well.

2. The range of variation (of the relative position of the cursor and target over which one cannot see that they are not horizontally aligned) varies linearly with the horizontal separation, according to the measurements included with the original data. Even if the addition of frame-by-frame noise variation works as a model, the noise amplitude should range from about 1 pixel when the cursor and target ellipses touch to about 4.5 pixels at the widest separation.

I had earlier suggested using a tolerance zone for the perceptual signal, using the expression

p = sign(qi)*max(abs(qi) - tol, 0)

where qi is the environmental input variable (typically qo +- d, the sign depending on whether the tracking is compensatory or pursuit), and tol is the half-width of the tolerance zone. A tolerance zone does reflect the perceptual experience, and is applicable to cases in which the perceptual uncertainty is due to missing information rather than to noise-induced variation in a value (e.g. the bus-arrival time in my example: if you know that buses come every 15 minutes, you know that the next bus will come at some time between zero and 15 minutes from now, but if you don't know when the last bus came, that's all you do know; and if you are controlling a perception of arriving at the bus stop just before the bus arrives, that 15-minute period is the tolerance zone of the controlled perception).
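
In Processing, the expression can be coded directly; here is a minimal sketch of the idea (the function name and the test values are mine):

    // Tolerance-zone perceptual function: p = sign(qi) * max(abs(qi) - tol, 0).
    // Inside the zone (|qi| <= tol) the perception is exactly zero; outside
    // it, the perception grows linearly from the edge of the zone.
    float toleranceZone(float qi, float tol) {
      float mag = max(abs(qi) - tol, 0);
      return (qi < 0) ? -mag : mag;
    }

    void setup() {
      // With tol = 2, inputs in [-2, 2] map to 0; outside, linearly.
      for (float qi : new float[] { -5, -2, -1, 0, 1, 2, 5 }) {
        println("qi = " + qi + " -> p = " + toleranceZone(qi, 2));
      }
      exit();
    }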

[Aside: A real-life example of the opposite situation happened in Denmark. We were visiting a friend in a suburb of Copenhagen many years ago, and were going to have dinner with a friend who lived further out on the nearby rail line. The train was scheduled for time X, and we were a few blocks from the station. About 10 minutes before train time, I was getting nervous about missing the train, but our host said there was no problem, because it takes 7 minutes to reach the station, and we would leave in 3 minutes. We did, and we climbed the station stairs as the train pulled in. It opened its doors as we arrived at the platform, and we boarded without breaking step.]
I thought I had presented a sufficient analysis of why changing the width of the tolerance zone emulates changing the loop gain, but Bill considered that a simulation was needed to demonstrate it, and I agree. I'm trying to include that in my programming using the Processing language.

----------------

I don't want to burden you with problems, but in case you want to pursue the Processing language for further work in PCT demos and experiments, I thought it might be of some use to describe my experience as a novice in the language, though not a novice in low-level programming (I started before there were even such things as assemblers, let alone compilers).

Firstly, if you get things right, Processing makes it easy to do a lot, even in 3D, and there are many very useful libraries. The output seems fast and effectively platform-independent, though I know no benchmarks to quantify that subjective impression. I'm using a GUI library called ControlP5, which gives you a lot of the standard controls and allows you to roll your own if you don't like any of the ones provided.

Secondly, the IDE is not very helpful in debugging. I grant that I may have missed possibilities that actually exist, but so far, the only way I have found to debug is by printing out messages to the embedded console, and by programming a "wait(milliseconds)" command to give you time to look at the result if the problem could be in a fast loop. The messages at compile-time (which is effectively the same as run-time) are mostly useful, though one or two have left me puzzled for a few minutes. The most difficult aspect of debugging is that I have found no equivalent of a "break-point and continue" that would allow one to query current variable values, for example. This lack has meant that I have spent many hours tracking down such simple errors as a minus where I should have had a plus, a scaling factor applied twice, or a multiplication where a division was appropriate. Such errors should be fixable in minutes by stepping through breakpoints, rather than in the hours or days it has taken me to find some of them using my novice-level understanding of the language.
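
The wait command itself is easy to build on Processing's millis() clock; a minimal sketch of the kind of thing I mean (the name is mine, and my actual version may differ):

    // Crude blocking wait for debugging: spin until ms milliseconds elapse.
    // It freezes a fast loop long enough to read the console messages.
    void waitMillis(int ms) {
      int start = millis();
      while (millis() - start < ms) {
        ; // busy-wait
      }
    }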

----

When I do have a working version of the program I'm trying to build, I want to remake it in LiveCode with the Allez-OOP Object-Oriented extension programmed by Allan Randall. It would be a good test of the relative good and bad points of the two systems.

For now, I'm going to keep working on my program and not contribute more than passing comments on CSGnet. There will probably not be many of those over the next week or ten days.

Martin

[From Rick Marken (2013.04.02.2005)]

Martin Taylor (2013.04.02.10.30)--

RM: Nice to have you back, Martin.

RM: Well, not much interest in this apparently. But I find it
fascinating. I did try adding a fixed level of noise to the simulation
(a random number between -.4 and .4 that was added to the input
variable of the v and a model). As expected the noise degraded
performance equally for all separations for the v control model; but
the noise degraded performance more as separation increased for the a
control model.

MT: There are two problems with this approach:

1. If you think of this as a model of what is happening within the human,
you have a perceptual value that jitters all over the lot, but that's not
what one actually perceives. What one perceives is that small changes in the
relative positions of the target and cursor make no difference in the
perception. If the orientation of the two is held constant, the perception
does seem to change over time, but only slowly, not in a jittery manner. The
fluctuations may well represent noise, but it is noise that is averaged over
some long time period. Changing the perceptual relation to the sensory input
randomly frame-by-frame does not model the perceptual experience very well.

RM: It works just as well if the noise is added to the output rather
than the perception. Does that help? Indeed, the noise can be added
anywhere in the loop and you get the same result with the angle
control model: the amount of random variation in the output increases
with separation even though the same amount of noise is always added
to the loop. This results from the fact that the loop gain decreases
due to the change in input gain (k.i in the linear control model
equations) as separation increases.
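
To illustrate the point, here is a toy sketch (not the actual tracking model: the gain, time step, and run length are assumed values, and there is no disturbance, so all output variation comes from the injected noise):

    // Angle control with the same fixed-amplitude noise at two separations.
    // The printed RMS of v should grow with s even though the noise added
    // to the loop never changes, because the input gain dp/dv ~ 1/s falls.
    void setup() {
      float dt = 1.0 / 60.0;
      float gain = 600.0;                  // output gain: an assumed value
      for (float s : new float[] { 100.0, 800.0 }) {
        float o = 0.0, sumSq = 0.0;
        int steps = 60000;
        for (int i = 0; i < steps; i++) {
          float v = o;                     // cursor height; no disturbance
          float n = random(-0.4, 0.4);     // same noise amplitude at every s
          float p = atan2(v, s) + n;       // perceived angle, noise in the loop
          float e = 0.0 - p;               // reference angle is zero
          o += gain * e * dt;              // integrating output function
          sumSq += v * v;
        }
        println("s = " + s + "  RMS(v) = " + sqrt(sumSq / steps));
      }
      exit();
    }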

MT: 2. The range of variation (of the relative position of the cursor and target
over which one cannot see that they are not horizontally aligned) varies
linearly with the horizontal separation, according to the measurements
included with the original data. Even if the addition of frame-by-frame
noise variation works as a model, the noise amplitude should range from
about 1 pixel when the cursor and target ellipses touch to about 4.5 pixels
at the widest separation.

RM: That's basically what happens when angle is the controlled
variable and a small, constant level of noise is added to the loop.

MT: I had earlier suggested using a tolerance zone for the perceptual signal,
using the expression

p = sign(qi)*max(abs(qi) - tol, 0)

RM: I could easily change the model so that p is computed in this way.
But first could you explain the expression "max(N,0)", where N is
abs(qi)-tol. Also, in the angle model, qi includes both v and s (where
v is the vertical distance between cursor and target and s is the
horizontal separation between them) and p = arctan(v/s) while in the
difference control model qi includes only v and p = v. When I
calculate p using your "tolerance zone" equation what should I use for
qi? I think it should be arctan(v/s) for the angle control model and v
for the difference control model. Do you agree or should I do
something different?

MT: I thought I had presented a sufficient analysis of why changing the
width of the tolerance zone emulates changing the loop gain,
but Bill considered that a simulation was needed to demonstrate it, and I
agree. I'm trying to include that in my programming using the Processing
language.

RM: I would like to do this for you. My models are running and ready
to go, so if you could just explain how to implement your tolerance
zone calculation in the model I could tell you what happens very
quickly.

MT: I don't want to burden you with problems, but in case you want to pursue the
Processing language for further work in PCT demos and experiments, I thought
it might be of some use to describe my experience as a novice in the
language, though not in low-level programming (I started before there were
even such things as assemblers, let alone compilers).

RM: Thanks for this. I'm sold on Processing but I do have a lot to
learn. Adam did translate two of my Java demos into Processing. One is
the basic control task, which I have up on the net at:

http://www.mindreadings.com/BasicTrack/

Adam also translated my "Mindreading" program into Processing. I had
to change it a bit so that the avatars just move horizontally because
the 3D version is hard to use on a tablet (your finger gets in the way
of your view). My 2D version is available at:

http://www.mindreadings.com/Mindread/

You control the avatars by moving your finger back and forth
horizontally at the bottom of the display, so that you can see the avatars
above it.

The cool thing about Processing is that you can run these things on a
tablet, which is very exciting to me because it looks like tablets are
going to be the main internet devices used in education. So if I can
get my act together (I have other things to do too) I can write my
demos in Processing so that they can be used on any internet client:
laptop, tablet or smartphone.

Best regards

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2013.04.03.0900)]

MT: I had earlier suggested using a tolerance zone for the perceptual signal,
using the expression

p = sign(qi)*max(abs(qi) - tol, 0)

RM: I could easily change the model so that p is computed in this way.
But first could you explain the expression "max(N,0)", where N is
abs(qi)-tol.

RM: Never mind, I figured it out. What a maroon I am. But I still need
an answer to this:

RM: When I calculate p using your "tolerance zone" equation what
should I use for qi? I think it should be arctan(v/s) for the angle control
model and v for the difference control model. Do you agree or should I do
something different?

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.04.04.10.21]

[From Rick Marken (2013.04.02.2005)]

Martin Taylor (2013.04.02.10.30)--

RM: Nice to have you back, Martin.

Thanks.

RM: ...I did try adding a fixed level of noise to the simulation
(a random number between -.4 and .4 that was added to the input
variable of the v and a model). As expected the noise degraded
performance equally for all separations for the v control model; but
the noise degraded performance more as separation increased for the a
control model.

MT: There are two problems with this approach:

1. If you think of this as a model of what is happening within the human,
you have a perceptual value that jitters all over the lot, but that's not
what one actually perceives. What one perceives is that small changes in the
relative positions of the target and cursor make no difference in the
perception. If the orientation of the two is held constant, the perception
does seem to change over time, but only slowly, not in a jittery manner. The
fluctuations may well represent noise, but it is noise that is averaged over
some long time period. Changing the perceptual relation to the sensory input
randomly frame-by-frame does not model the perceptual experience very well.

RM: It works just as well if the noise is added to the output rather
than the perception. Does that help? Indeed, the noise can be added
anywhere in the loop and you get the same result with the angle
control model:

As you probably would with almost any model, including the height-difference model. I say "almost" because I can think of at least one exception. The exception is the case in which noise serves to round off the sharp corner of a tolerance zone. The effect might depend on where in the loop the noise is introduced.

My problem with using noise that is simulated by adding a random value to the perceptual result of a fixed qi at each frame of the simulation "movie" is that the perceptual effect is of the perceptual value skittering about all over the place, whereas the actual perceptual effect is that the perception seems solid and equal to its reference value when the actual display values are different -- the ellipses are not aligned. Now we could, and probably should, deal with this by treating the perceptual function not as a simple function of the current qi value, but as a function of present and past inputs together. But that would simply be punting the issue down the road.

Anyway, adding the noise to the output doesn't address the issue at hand, which is the effect of perceptual uncertainty on the quality of control. Introducing noise is only one possible reason for uncertainty, and introducing a fixed amount of noise independent of separation does indeed lead one to expect that separation would have no effect on control of "v". I consider doing that to be an obvious mistake, given the data at hand that allow you to estimate a variable that is at least proportional to perceptual uncertainty.

Consider this noiseless scenario. I am controlling a variable using a lever that allows me to switch between two states. In one state the value of the variable increases at a fixed rate, while in the other it decreases at the same fixed rate. I perceive the value of the variable by way of an associate who reads a display that resets every 20 seconds and calls it out to me. My reference value for the variable is 1752.

The associate calls out "One". Do I set the lever to increase or decrease the value? I don't know, because my range of perceptual uncertainty is 1000, and it encompasses the possibility that the variable is at its reference value.
The associate calls out "Six". My perceptual tolerance zone is now down to 100, and its new range of uncertainty does not include the possibility that the variable is at its reference value. Now I know to set the lever to "increase".
The associate calls out "Four" and then "Seven", but those numbers do not alter my output (the setting of the lever).

After 20 seconds, the display reading changes.

The associate calls out "One". What do I do with the lever? Nothing. The range of perceptual uncertainty includes the reference value.
The associate calls out "Seven". What do I do with the lever? Nothing. The range of perceptual uncertainty includes the reference value.
The associate calls out "Five". What do I do with the lever? Nothing. The range of perceptual uncertainty includes the reference value.
The associate calls out "Two". What do I do with the lever? Not nothing, even though the range of perceptual uncertainty includes the reference value, because I know that the lever is set to "increase", and there will be some small lag before any change I might make takes effect. Nor do I switch to "decrease" and leave it there while I wait for the next call-out of the display, because I know that by then it could have dropped as far as 1647, which was the previous value read out by the associate. Probably I will oscillate the lever between "increase" and "decrease" until the next call-out, since doing that will reduce the average rate of increase or decrease of the variable value.
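
The decision rule in this scenario can be put in code form. Here is a toy rendering (mine; it covers the interval logic, but not the final oscillation tactic):

    // Act only when the whole range of perceptual uncertainty [lo, hi]
    // lies on one side of the reference value.
    String decide(int lo, int hi, int ref) {
      if (hi < ref) return "increase";
      if (lo > ref) return "decrease";
      return "no change";  // the range still includes the reference
    }

    // The associate calls out one digit at a time; each digit narrows the
    // range of uncertainty by a factor of ten.
    void callOuts(int value, int ref) {
      int place = 1000, base = 0;
      int[] digits = { value / 1000, (value / 100) % 10, (value / 10) % 10, value % 10 };
      for (int d : digits) {
        base += d * place;
        println("digit " + d + ": range [" + base + "," + (base + place - 1)
                + "] -> " + decide(base, base + place - 1, ref));
        place /= 10;
      }
    }

    void setup() {
      callOuts(1647, 1752);  // first 20-second reading
      callOuts(1752, 1752);  // second reading: the value equals the reference
      exit();
    }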

Does this help in distinguishing between "perceptual uncertainty" and "noise", noise being only one possible cause for perceptual uncertainty?

MT: 2. The range of variation (of the relative position of the cursor and target
over which one cannot see that they are not horizontally aligned) varies
linearly with the horizontal separation, according to the measurements
included with the original data. Even if the addition of frame-by-frame
noise variation works as a model, the noise amplitude should range from
about 1 pixel when the cursor and target ellipses touch to about 4.5 pixels
at the widest separation.

RM: That's basically what happens when angle is the controlled
variable and a small, constant level of noise is added to the loop.

My point is the same as it has been. I do not think we have assuredly found any place where the two possibilities differ. You use a noise added to v that is fixed independent of separation, and then use that fixed noise value in treating both angle and v as candidate controlled variables. I do not think that is reasonable in either case. It might be reasonable for v if the subject were controlling the absolute height of the ellipse, but the subject isn't asked to do that. He is asked to control the relative height of two ellipses. The uncertainty of whether the two are at the same height is not the same as estimating the absolute height of one and then of the other and comparing those two heights.

It's the same problem as that of relative and absolute pitch in music. Apart from some individuals who do seem to have absolute pitch, most people can tell whether one note is a semitone higher or lower than the note just played, but not whether it is a semitone higher or lower than one played a minute, an hour, or a day ago. Someone with absolute pitch could do it by remembering that the first note had been G whereas this one is G#. Almost everybody could probably do it on an "absolute" basis if the separation were large enough, such as a couple of octaves. Similarly, someone with "absolute height perception" could do the "v" task by seeing that the target is at 476 pixels and the cursor at 474, independent of separation. But most of us could not do that, and separation makes a difference, though we could if the absolute difference were large enough, say a few hundred pixels.

For the angle task, most of us have a pretty good idea of horizontal, which is the "absolute" reference value. Maybe adding a constant noise to the angle is less obviously wrong, but I think it is wrong, nevertheless, because we are rather more accurate in judging non-horizontality for long lines than for short. It would be interesting to compare the models if the task were to keep the cursor ellipse a constant 50 pixels above the target, or to keep the angle a constant 15 degrees from horizontal. I'm not sure what to expect from those tasks.

RM: When I
calculate p using your "tolerance zone" equation what should I use for
qi? I think it should be arctan(v/s) for the angle control model and v
for the difference control model. Do you agree or should I do
something different?

I agree. But you have to use an appropriate value for the noise in each model, and I don't know how you would estimate it for the angle model independently of using the same data as you would use for the "v" model. Based on my rough data, for the "v" model your noise value should be something like N = k*(1 + 0.0035*s), where s is the lateral separation and k is a scaling constant with a value on the order of unity (perhaps something between 0.5 and 2; try 1.0 as a first cut). For the angle model, it should be arctan(N/s), unless people (me) are using an absolute judgment of horizontality.
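
In code form (a sketch; the separations in the loop are just example values, and k = 1.0 is the first-cut scaling):

    // Separation-dependent noise amplitude for the "v" model, in pixels:
    // N = k * (1 + 0.0035 * s), about 1 pixel at small s and about
    // 4.5 pixels at s = 1000, matching my rough measurements.
    float noiseAmplitudeV(float s, float k) {
      return k * (1 + 0.0035 * s);
    }

    // Equivalent amplitude in angle units for the angle model, unless the
    // judgment of horizontality is absolute rather than relative.
    float noiseAmplitudeAngle(float s, float k) {
      return atan2(noiseAmplitudeV(s, k), s);
    }

    void setup() {
      for (float s : new float[] { 100, 400, 800 }) {
        println("s = " + s + "  N_v = " + noiseAmplitudeV(s, 1.0)
                + "  N_angle = " + noiseAmplitudeAngle(s, 1.0));
      }
      exit();
    }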

MT: I thought I had presented a sufficient analysis of why changing the
width of the tolerance zone emulates changing the loop gain,
but Bill considered that a simulation was needed to demonstrate it, and I
agree. I'm trying to include that in my programming using the Processing
language.

RM: I would like to do this for you. My models are running and ready
to go, so if you could just explain how to implement your tolerance
zone calculation in the model I could tell you what happens very
quickly.

Thanks for the offer. I'm trying to do it myself, but it would be much better to have two independent versions. What I would do is a series of trials. Each trial is like the following:

Take a particular disturbance waveform (different for each trial) and optimize a series of control models for best control. The disturbance should be slow enough that control is pretty good, but visibly not perfect. The control model is the standard one except that the perceptual signal relation to qi has a tolerance zone of half-width W. There has to be some loop transport lag, because if transport lag is zero you can get essentially perfect control by setting the output gain rate to infinity. You can fix the transport lag for all the runs of all the trials. The magnitude of the lag interacts with the speed of the disturbance in determining the optimum gain. For this particular lag and disturbance waveform, first set W to zero and find the optimum gain value. Then increase W in a series of small steps and for each step find the optimum gain. If I am right, as W increases, so should the optimum gain value, though not necessarily proportionately or linearly.
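
In outline, a trial might look like this in Processing (a skeleton only: the disturbance waveform, the lag, and the scan ranges are placeholder choices, not tested values):

    int LAG = 8;                      // transport lag in frames (assumed)
    int STEPS = 6000;
    float DT = 1.0 / 60.0;

    // A slow, smooth disturbance: a sum of low-frequency sinusoids.
    float disturbance(int t) {
      return 50 * sin(0.002 * t) + 30 * sin(0.0033 * t + 1.0);
    }

    // Run one model with tolerance half-width w and output gain g;
    // return the RMS deviation of qi from the (zero) reference.
    float runModel(float w, float g) {
      float[] pHist = new float[LAG];  // circular buffer for the lag
      float o = 0, sumSq = 0;
      for (int t = 0; t < STEPS; t++) {
        float qi = o + disturbance(t);
        float mag = max(abs(qi) - w, 0);   // tolerance-zone perception
        float p = (qi < 0) ? -mag : mag;
        float pDelayed = pHist[t % LAG];   // perception LAG frames old
        pHist[t % LAG] = p;
        o += g * (0 - pDelayed) * DT;      // integrating output
        sumSq += qi * qi;
      }
      return sqrt(sumSq / STEPS);
    }

    // Coarse scan for the gain that gives the best control at this w.
    float optimumGain(float w) {
      float best = 0, bestErr = Float.MAX_VALUE;
      for (float g = 20; g <= 2000; g += 20) {
        float err = runModel(w, g);
        if (err < bestErr) { bestErr = err; best = g; }
      }
      return best;
    }

    void setup() {
      for (float w = 0; w <= 10; w += 2) {
        println("W = " + w + "  optimum gain = " + optimumGain(w));
      }
      exit();
    }

If the analysis is right, the printed optimum gain should rise as W increases, though not necessarily proportionately or linearly.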

If the disturbance waveform consists of a series of step-wise changes rather than being slow and smooth, W should have much less effect on the optimum gain, but my intuition suggests that with non-zero W, the best-controlling model's cursor waveform should show the small over- and under-shoots with slower final near-exponential correction that are shown by human tracks for such disturbances. The "standard model" does not show them, and it would be interesting if the standard model augmented by a perceptual tolerance zone does.

The reasoning behind my intuition is that if W reduces the effective loop gain greatly when the error is small but not much when the error is large, then at the moment after the step the control loop could have a loop gain large enough to make the loop unstable, but when the perceptual value comes close to the reference value, the reduced gain restabilizes the loop, allowing the usual final exponential-like approach toward the reference value.

If and when I get my general control simulation package working, that is one of the experiments I plan to do with it. It would be really nice if you were to do the same independently with a special-purpose program in Processing.

By the way, have you figured out how to save data when your Processing programs run on a tablet? Adam said that there is a way to do it using php, but for me that would be another new language to learn.
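
(If I understand Adam's idea correctly, the Processing side would be plain Java: POST the data to a small PHP script on the server that writes it to a file. Something like the following, where the URL and the form field are made-up placeholders and the PHP script itself is the part I would have to learn. I have not tried this.)

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.io.OutputStream;

    // Hypothetical sketch: send a block of tracking data to the server.
    void saveToServer(String data) {
      try {
        URL url = new URL("http://example.org/save_data.php");  // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(("trackdata=" + data).getBytes("UTF-8"));     // placeholder field
        out.close();
        println("server response: " + conn.getResponseCode());
        conn.disconnect();
      } catch (Exception e) {
        println("upload failed: " + e);
      }
    }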

Martin

[From Rick Marken (2013.04.05.1040)]

Martin Taylor (2013.04.04.10.21)--

MT: My problem with using noise that is simulated by adding a random value to
the perceptual result of a fixed qi at each frame of the simulation "movie"
is that the perceptual effect is of the perceptual value skittering about
all over the place, whereas the actual perceptual effect is that the
perception seems solid and equal to its reference value when the actual
display values are different --

RM: I agree that adding random noise that is just an equally probable value on each frame is not the best way to go; I have now added a low pass filter so that when I add "noise" it varies pretty smoothly. When I do this the model output contains little wobbles that look very much like those in the human data.
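
The filter is just a one-pole low-pass applied to the frame-by-frame random values; a sketch of the idea (the smoothing constant 0.05 is an assumed value, not necessarily the one in my model):

    float nSmooth = 0;  // filter state, persists across frames

    // Each frame, move a small fraction of the way from the current noise
    // value toward a fresh random target. Note that the filter also reduces
    // the RMS amplitude, so the raw +-0.4 range may need rescaling.
    float smoothedNoise() {
      float target = random(-0.4, 0.4);
      nSmooth += 0.05 * (target - nSmooth);
      return nSmooth;
    }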

MT: Anyway, adding the noise to the output doesn’t address the issue at hand,
which is the effect of perceptual uncertainty on the quality of control.

RM: Yes, I think I addressed that by implementing your "tolerance zone" model, as I mentioned in an earlier post. As I said, without any noise added, the tolerance zone model can account for the decrease in performance with increasing horizontal target-cursor separation (s) by increasing the width of the tolerance zone. The problems with this model are that 1) it does worse than the angle control model at accounting for the actual time variations in output produced by the human, and 2) there is no explanation of the mechanism that would lead to an increase in the width of the tolerance zone with increasing s.

MT: Does this help in distinguishing between “perceptual uncertainty” and
“noise”, noise being only one possible cause for perceptual uncertainty?

RM: I think so. And the modeling suggests that the noise explanation (where the decrease in performance results from an increase in the effect of a constant level of noise due to the decrease in loop gain as s increases when the controlled variable is arctan(v/s)) is better than the uncertainty explanation.

I'm attaching a graph showing the behavior of the tolerance zone model (I call it the Threshold Distance, TD, model), the Angle (arctan(v/s)) control model, and the human subject during a segment of a trial when the cursor and target were horizontally separated by 800 pixels (could you give me the sampling rate for your program?).

Note that the output of the tolerance zone (TD) model nearly perfectly mimics the human output (the correlation between model and human is .99) but it doesn't lie on top of the trace for the human as does the output of the Angle control model. This graph shows why the performance of the TD model is poor for large s and why it fails to give as good a fit to the data as does the Angle control model with constant noise. The TD model, which has no noise added, performs more poorly as s increases because the increase in the tolerance zone apparently prevents the model from keeping the cursor perfectly aligned with the target; there is a constant deviation of cursor from target that results in increased RMS error and, thus, a decreased performance measure in bits, -log2[rms(c-t)/rms(d)].
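
In code, that performance measure is (a sketch, assuming cursor, target, and disturbance records of equal length):

    // Performance in bits: -log2(rms(c - t) / rms(d)). Perfect tracking
    // gives a large value; no control at all gives roughly zero.
    float performanceBits(float[] c, float[] t, float[] d) {
      float errSq = 0, distSq = 0;
      for (int i = 0; i < c.length; i++) {
        errSq += sq(c[i] - t[i]);
        distSq += sq(d[i]);
      }
      float rmsErr = sqrt(errSq / c.length);
      float rmsDist = sqrt(distSq / d.length);
      return -log(rmsErr / rmsDist) / log(2);  // log() is natural log
    }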

The Angle control model, on the other hand, falls nearly exactly on the human trace and the higher frequency random wobbles in the Angle control trace seem very similar to (and of about the same magnitude as) those in the human trace. These wobbles are apparently the reason for the poorer performance of both the human and Angle model with increasing horizontal separation between cursor and target (s). In both cases, because the controlled variable is Angle, the decrease in loop gain with increasing s allows more of the very low level of noise that is always present in the loop to pass.

MT: My point is the same as it has been. I do not think we have assuredly found
any place where the two possibilities differ.

RM: I think the two possibilities are the TD model (which controls the vertical separation between cursor and target, v, via an uncertainty or tolerance zone mechanism) and the Angle control model. I think these two models make clearly different predictions and the data suggests that the predictions made by the Angle model are more correct.

RM: I would like to do this for you. My models are running and ready
to go, so if you could just explain how to implement your tolerance
zone calculation in the model I could tell you what happens very
quickly.

MT: Thanks for the offer. I’m trying to do it myself, but it would be much
better to have two independent versions

RM: That’s fine. I’ve got a student who is interested in doing some research. If you could send me your program maybe I could get her to collect some data from other people.

Best

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2013.04.10.1130)]

Rick Marken (2013.04.05.1040)--

RM: I'm attaching a graph showing the behavior of the tolerance zone model (I call it the Threshold Distance, TD, model), the Angle (arctan(v/s)) control model, and the human subject during a segment of a trial when the cursor and target were horizontally separated by 800 pixels (could you give me the sampling rate for your program?).

RM: I guess this didn’t pluck anyone’s magic twanger (that reference should date me pretty well). Or perhaps CSGNet is out of date and everyone is communicating via Facebook or Twitter?

MT: Thanks for the offer. I’m trying to do it myself, but it would be much
better to have two independent versions

RM: That’s fine. I’ve got a student who is interested in doing some research. If you could send me your program maybe I could get her to collect some data from other people.

RM: What do you say, Martin? Can I use your program or at least use the data you sent in a paper?

Best

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com