[From Bruce Abbott (991128.1745 EST)]
Tim Carey (991129.0600) --
> > From: Bill Powers <powers_w@FRONTIER.NET>
> >
> > Fuzzing out the image mathematically means taking the brightness of a
> > perfect point source and calculating what the array of brightnesses would
> > be if the light were spread out the way a telescope really spreads it out
> > into an image.
>
> This bit I got swamped in. I'm not sure what you mean by calculating what
> the array of brightnesses would be. Calculating what they would be where?
> Are you calculating what they would be if we were close enough to see that
> degree of detail without the aid of a telescope? I guess the same applies
> for "if the light were spread out" (perhaps they are part of the same
> confusion). If the light is spread out where? Is there a distinction
> between what we perceive through the telescope and the actual image on
> the telescope?
Tim, imagine that a theoretically perfect telescope is forming an image on a
rectangular array of microscopic light-intensity sensors. Photons coming
from a point source -- say a star -- would be focused onto a single sensor,
except that the rays interfere with one another and, as a result, the light
ends up spread out into a series of concentric rings (light and dark
interference bands) around the location of that single sensor. These rings
register on the surrounding sensors.
Now imagine that the telescope is pointed at the moon. Light from one
point-location on the moon "should" produce a single point of illumination
on the sensor array but, as with the star, it is instead spread out into
concentric rings around that point. Another point on the moon, close to the
first point, should illuminate the next sensor of the array, but again the
light from this source is spread out into concentric rings. And so on for
all the places on the moon. So the brightness detected by each sensor of
the array is really the sum of all the illumination from all these
overlapping rings that hit that sensor. That brightness can be represented
by a number (ranging, say, from 0 to 255) to give 256 steps of brightness
for each sensor position in the rectangular array.
Because of the interference bands, the image will appear fuzzy and the
contrast (range of intensities present) will be less than would be the case
if there were no interference. The problem is to recreate the ideal image
(what would be present if there were no interference) from the pattern of
brightnesses of the pixels surrounding each individual pixel (sensor value)
in the current image. Because the brightness of each pixel is represented
by a number in the array, mathematical operations can be performed on the
numbers to accomplish this. In Bill's procedure, a "mask" is created: an
array of numbers representing what proportion of the light that "should"
have struck a single sensor actually appears at that sensor and at the
sensors immediately surrounding it. This mask can be centered on a given
pixel and used to mathematically estimate what the brightness of the center
pixel would have been without the scatter.
Bill presented a simple example of a mask that would convert the brightness
of a single point of light into the brightnesses of that point and of the
surrounding points as a result of the interference:
Brightness     fuzzing             actual image     perfect image
of point       function            brightness       brightness
               ("mask")

               0.05 0.07 0.05       5  7  5           0   0   0
   100         0.07 0.52 0.07       7 52  7           0 100   0
               0.05 0.07 0.05       5  7  5           0   0   0
(I've fixed a typo: the center pixel under "actual image brightness"
originally read "42.") I've added a new array to this: the "perfect image
array,"
which is how the array would read if there were no interference. If the
effect of the "fuzzing function" (interference) could be eliminated from the
image, this is how the brightness array would look. The "fuzzing function"
shows that 7% of the light from the original point source would appear at
each pixel directly north, south, east, or west of the point and 5% would
appear at the four corner pixels, leaving only 52% of the original
brightness appearing at the center.
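If you want to check the arithmetic, here is a small sketch in Python (my
own illustration, not part of Bill's program) that multiplies the mask by
the 100-unit point source and also confirms that the mask's entries sum to
1.0, so all of the point's light is accounted for:

```python
# The 3x3 "fuzzing function" from the table above.
MASK = [[0.05, 0.07, 0.05],
        [0.07, 0.52, 0.07],
        [0.05, 0.07, 0.05]]

# Spread a single 100-unit point source through the mask; this gives
# the "actual image brightness" array from the table.
actual = [[round(100 * m) for m in row] for row in MASK]
print(actual)  # [[5, 7, 5], [7, 52, 7], [5, 7, 5]]

# The mask entries sum to 1.0: every bit of the point's light lands
# somewhere in the 3x3 neighborhood; none is created or destroyed.
total = sum(m for row in MASK for m in row)
print(total)
```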
Now imagine that there were _two_ point sources, separated horizontally by
two pixels. The perfect image looks like this:
0 0 0 0 0
0 100 0 100 0
0 0 0 0 0
The "fuzzed out" image would look like this:
5 7 10 7 5
7 52 14 52 7
5 7 10 7 5
Note that the brightnesses of the center column are the sums of the
brightnesses produced by the spread-out light from each point source.
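That superposition can be checked mechanically. The sketch below (my own
Python illustration, not Bill's code) spreads each pixel of the perfect
image through the mask and sums the overlapping contributions, reproducing
the fuzzed-out array above:

```python
MASK = [[0.05, 0.07, 0.05],
        [0.07, 0.52, 0.07],
        [0.05, 0.07, 0.05]]

def fuzz(image):
    """Spread each pixel's brightness into its 3x3 neighborhood according
    to the mask, summing the overlapping contributions. Light that would
    fall outside the array is simply lost."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[rr][cc] += image[r][c] * MASK[dr + 1][dc + 1]
    return [[round(v) for v in row] for row in out]

# The two-point "perfect image" from above.
perfect = [[0,   0, 0,   0, 0],
           [0, 100, 0, 100, 0],
           [0,   0, 0,   0, 0]]

print(fuzz(perfect))
# [[5, 7, 10, 7, 5], [7, 52, 14, 52, 7], [5, 7, 10, 7, 5]]
```

Note the 14 in the center: 7 units contributed by each of the two sources,
exactly as described above.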
Bill's procedure tries to deduce the pattern of brightnesses that would be
present on the array if interference did not occur by (a) determining the
"fuzzing function" (theoretical interference pattern around each pixel) and
(b) finding a pattern of pixel brightnesses which, when the fuzzing function
is applied to each pixel, reproduces the array of brightnesses actually
recorded by the sensors.
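One simple way to carry out step (b) is successive approximation: take the
recorded image as the first guess, fuzz the guess, and add the discrepancy
between the recorded image and the refuzzed guess back into the estimate.
The sketch below is my own Python illustration of that idea (not
necessarily Bill's exact procedure); it recovers the two-point perfect
image from the fuzzed array:

```python
MASK = [[0.05, 0.07, 0.05],
        [0.07, 0.52, 0.07],
        [0.05, 0.07, 0.05]]

def fuzz(image):
    """Spread each pixel through the mask (same operation as before)."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[rr][cc] += image[r][c] * MASK[dr + 1][dc + 1]
    return out

def defuzz(recorded, iterations=200):
    """Successive approximation: nudge the estimate by the discrepancy
    between the recorded image and the refuzzed estimate."""
    est = [row[:] for row in recorded]   # first guess: the fuzzed image
    for _ in range(iterations):
        refuzzed = fuzz(est)
        for r in range(len(est)):
            for c in range(len(est[0])):
                est[r][c] += recorded[r][c] - refuzzed[r][c]
    return [[round(v) for v in row] for row in est]

recorded = [[5,  7, 10,  7, 5],
            [7, 52, 14, 52, 7],
            [5,  7, 10,  7, 5]]

print(defuzz(recorded))
# [[0, 0, 0, 0, 0], [0, 100, 0, 100, 0], [0, 0, 0, 0, 0]]
```

The iteration is guaranteed to settle on an answer here because the mask's
center weight (0.52) exceeds the total of the surrounding weights (0.48),
so each correction shrinks the remaining error.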
Regards,
Bruce A.