Image sharpening

[From Bill Powers (991122.0308 MDT)]

Attachment: csgsampl.bmp

For those interested in the image sharpening project, the attached bitmap
image of a section of the Moon at first quarter (read it with the Microsoft
Paint program) is the current state of the art here. The right-hand image
is the original picture of an area just to the east of Tycho, the width
being about 160 seconds of arc and one pixel equaling about 1/2 second of
arc. North is up. Exposure was 0.4 sec on a night of excellent seeing.

The straight white line running across the pictures shows where a
photometric trace of the image was made, and also shows the zero of the
intensity plot superimposed on the picture. In the lower left corner is a
slice of the two-dimensional instrument profile, in this case the
theoretical shape of the diffraction pattern of a point source for the 28
mm aperture, computed for 10 wavelengths and weighted by the quantum
efficiency of the CCD for each wavelength before summing. It is also
possible to use the image of a star for this profile. The time is seconds
per iteration. The sharper picture on the left is the result after 5
iterations of the algorithm. It's a bit hard to believe that this image
came from a telescope with an aperture of 1-1/8 inches.
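
(For concreteness, here is a minimal Python sketch of the kind of profile
computation described above -- not Bill's actual program. It sums the
theoretical Airy diffraction pattern of a point source over 10 wavelengths,
weighted by an assumed CCD response; the wavelength range and the flat
quantum-efficiency weights are placeholder assumptions.)

    import numpy as np
    from scipy.special import j1

    def airy_intensity(theta, wavelength, aperture):
        # Airy pattern of a circular aperture: I = (2*J1(x)/x)^2,
        # with x = pi * D * theta / lambda (theta in radians).
        x = np.pi * aperture * theta / wavelength
        x = np.where(x == 0.0, 1e-12, x)   # avoid 0/0 at the exact center
        return (2.0 * j1(x) / x) ** 2

    aperture = 0.028                                   # 28 mm, as in the post
    theta = np.linspace(0.0, 5e-5, 500)                # angular radius, radians
    wavelengths = np.linspace(400e-9, 800e-9, 10)      # 10 sample wavelengths
    qe = np.ones_like(wavelengths) / len(wavelengths)  # placeholder QE weights

    profile = sum(w * airy_intensity(theta, lam, aperture)
                  for w, lam in zip(qe, wavelengths))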

Best,

Bill P.


[From Bruce Nevin (991122.1043 EST)]

Bill Powers (991122.0308 MDT)--

Bill, these are stunning results!

There is an apparent darkening of the image, in part due to more detail,
i.e., small shadows amid the originally blurry bright areas, and I guess in
part due to the remaining bright areas not being made brighter to
compensate?


At 03:31 AM 11/22/1999 -0700, Bill Powers wrote:

It's a bit hard to believe that this image
came from a telescope with an aperture of 1-1/8 inches.

Indeed! And I'd bet no one in the image-enhancement business would believe
that you accomplished this in less than a minute (58.5 sec?) on a PC.

I'm guessing this could be useful in input functions and maybe output functions?

  Bruce Nevin

[From Bill Powers (991122.1053 MDT)]

Bruce Nevin (991122.1043 EST)--

There is an apparent darkening of the image, in part due to more detail,
i.e., small shadows amid the originally blurry bright areas, and I guess in
part due to the remaining bright areas not being made brighter to
compensate?

The image as displayed is scaled so the brightest area has a brightness of
63 and the dimmest a brightness of zero (to fit the finest gray scale I
know how to produce). As spread-out light energy is redistributed to the
highlights where it belongs, those highlights become much brighter. The
scaling then makes the dark areas darker. This actually helps us see the
sharpening; where the photometric trace crosses a shadow, the slope gets
steeper with sharpening.
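
(A minimal sketch of that linear rescaling in Python, assuming the 64-level
gray scale described above; illustrative only:)

    import numpy as np

    def rescale(image, levels=64):
        # Stretch so the brightest pixel maps to levels-1 (here 63)
        # and the dimmest to zero.
        lo, hi = image.min(), image.max()
        return np.rint((image - lo) / (hi - lo) * (levels - 1)).astype(int)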

I'm guessing this could be useful in input functions and maybe output functions?

In the optic nerve, 25% of the fibers carry _outgoing_ signals. That would
be required to implement this algorithm in the retina. Who knows if that
means anything?

Best,

Bill P.

[From Tim Carey (991123.0525)]

[From Bill Powers (991122.0308 MDT)]

Thanks for sending this. They are really wonderful results. Being the
complete dunderhead in mathematics that I am, however, could you explain your
use of the algorithm a bit more, please? Did you get these results using a
standard negative feedback control system? If so, what was the reference?
etc. (have I just demonstrated my dunderheadedness :-))

Is this an example of how a control system at perhaps the sensory or
configuration levels might function or am I completely off base?

Cheers,

Tim

[From Bill Powers (991123.1450 MDT)]

Tim Carey (991123.0525)--

Thanks for sending this. They are really wonderful results. Being the
complete dunderhead in mathematics that I am, however, could you explain your
use of the algorithm a bit more, please? Did you get these results using a
standard negative feedback control system?

The basic method works like this:

Suppose you have an image of a star obtained with a telescope on a perfect
night (no atmospheric wavering). This image, if examined at high
magnification, would be seen to consist of a central bright "hump"
surrounded by rings of diminishing brightness, the diffraction rings.

Now suppose we could artificially generate another such image, by starting
with a perfect point-image and mathematically fuzzing it out using the star
image (or a theoretical curve) as a "mask." If we had the right mask, the
artificially fuzzed image would exactly match the original telescopic image
of the star (central bright area, diffraction rings, and all), given only
that the artificial point image had the right brightness.

So let us subtract the artificially fuzzed image from the telescopic image,
and feed back the difference to adjust the brightness of the point image
until the error between the two fuzzy images is zero. This will give us a
point image with a certain brightness, located at the center of the
original telescope star image. We have a little control system which makes
the artificially fuzzed image match the naturally fuzzed image by adjusting
the brightness of a point image.
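
(Here is that one little control system as a minimal Python sketch --
illustrative only, not Bill's code. 'star' holds the telescope's fuzzy star
image and 'mask' the artificially fuzzed image of a unit-brightness point,
both as flattened arrays; the integral-style update rule is an assumption:)

    import numpy as np

    def fit_point_brightness(star, mask, iterations=20, gain=1.0):
        b = 0.0                                  # brightness of the point image
        for _ in range(iterations):
            error = star - b * mask              # observed minus artificial
            b += gain * error.sum() / mask.sum() # feed back the error
        return b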

Now imagine that we have a telescopic image of an "extended object" -- that
is, an image covering an area. Let's imagine that our CCD camera measures
the brightnesses of 320 x 240 or 76,800 pixels. In a data matrix, we create
a new blank image of the same size that will eventually contain the sharp
image. For every point in this image we take the brightness of each pixel,
fuzz it out with the mask function, and superimpose all the resulting
diffraction patterns in a new image, the artificially fuzzed image. Then we
compare the artificially fuzzed image with the naturally fuzzed image over
all 76,800 pixels, and feed back the difference to adjust the brightness of
the pixels in the "sharp" image at the input of the fuzzing mask.
After repeating this process a few times, the artificially fuzzed image
nearly matches the original telescopic image, and the "sharp" image
contains the sharpened version of the original image.

So yes, it uses a standard control system (76,800 of them acting at the
same time).
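
(For the programmers on the list, here is a minimal sketch of such a loop in
Python -- an illustration of the principle, not Bill's program. The additive
error-feedback form described here is essentially what the deconvolution
literature calls Van Cittert iteration.)

    import numpy as np
    from scipy.signal import fftconvolve

    def sharpen(observed, mask, iterations=5, gain=1.0):
        # 'mask' is the instrument profile (point-spread function),
        # normalized so its weights sum to 1.
        mask = mask / mask.sum()
        sharp = observed.astype(float)       # initial guess: the fuzzy image
        for _ in range(iterations):
            fuzzed = fftconvolve(sharp, mask, mode="same")  # artificial fuzzing
            error = observed - fuzzed        # compare over all pixels
            sharp += gain * error            # feed the error back
        return sharp

Each pixel is, in effect, one of the 76,800 little control systems: the
artificially fuzzed brightness is its controlled perception, the telescope's
measurement is its reference, and its output adjusts the "sharp" image.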

Best,

Bill P.

[From Tim Carey (991126.1345)]

[From Bill Powers (991123.1450 MDT)]

Thanks, Bill, this is making more sense. I think I understand the logic behind
it now but ...

with a perfect point-image and mathematically fuzzing it out using the

what's a perfect point-image and what's fuzzing it out?

Cheers,

Tim

Hi, Tim --

with a perfect point-image and mathematically fuzzing it out using the

what's a perfect point-image and what's fuzzing it out?

A point image is the image of a perfect point with a certain brightness.
Like the . at the end of this sentence, only far smaller. It's like the
image of a distant star. A perfect image would also be a geometrical point.
The real image, of course, is a blur with rings around it. See the
following web page for more info.

http://www.meade.com/support/telewrk.html

Fuzzing out the image mathematically means taking the brightness of a
perfect point source and calculating what the array of brightnesses would
be if the light were spread out the way a telescope really spreads it out
into an image. For example:

Brightness     fuzzing              image
of point       function             brightness
               ("mask")

               0.05 0.07 0.05        5  7  5
   100         0.07 0.52 0.07        7 42  7
               0.05 0.07 0.05        5  7  5

The "mask" function is a way of expressing how the telescope would convert
a point-image of any brightness into a fuzzed-out image.

If you imagine that an "extended source" like the moon is made of a lot of
perfect points of different brightnesses, you can see how the final image
(on the right above) would consist of a lot of overlapping fuzzed-out
images. The trick of my method, then, is to find each original
point-brightness such that if each point were fuzzed out and the
overlapping images (on the right) were added up, the result would exactly
match the photograph made by the telescope.
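
(In Python, fuzzing a single 100-unit point with this mask is just a
convolution. Note that the center works out to 100 x 0.52 = 52; as Bruce
Abbott points out below, the "42" in the table above is a typo.)

    import numpy as np
    from scipy.signal import convolve2d

    mask = np.array([[0.05, 0.07, 0.05],
                     [0.07, 0.52, 0.07],
                     [0.05, 0.07, 0.05]])

    point = np.zeros((3, 3))
    point[1, 1] = 100.0                 # one perfect point, brightness 100

    print(convolve2d(point, mask, mode="same"))
    # [[ 5.  7.  5.]
    #  [ 7. 52.  7.]
    #  [ 5.  7.  5.]]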

Bill

[From Tim Carey (991129.0600)]

The real image, of course, is a blur with rings around it.

OK, I think I've got this bit. Thanks for the web reference. By real image
I'm assuming you mean the image perceived by the person looking through the
telescope.

Fuzzing out the image mathematically means taking the brightness of a
perfect point source and calculating what the array of brightnesses would
be if the light were spread out the way a telescope really spreads it out
into an image.

This bit I got swamped in. I'm not sure what you mean by calculating what
the array of brightnesses would be. Calculating what they would be where? Are
you calculating what they would be if we were close enough to see that
degree of detail without the aid of a telescope? I guess the same applies
for "if the light were spread out" (perhaps they are part of the same
confusion). If the light is spread out where? Is there a distinction between
what we perceive through the telescope and the actual image in the
telescope?

I didn't get some of the detail of the rest of the post but I think that
stems from my confusion here. So just getting this bit will be great at the
moment.

Cheers,

Tim


[From Bruce Abbott (991128.1745 EST)]

Tim Carey (991129.0600) --

From: Bill Powers <powers_w@FRONTIER.NET>

Fuzzing out the image mathematically means taking the brightness of a
perfect point source and calculating what the array of brightnesses would
be if the light were spread out the way a telescope really spreads it out
into an image.

This bit I got swamped in. I'm not sure what you mean by calculating what
the array of brightnesses would be. Calculating what they would be where? Are
you calculating what they would be if we were close enough to see that
degree of detail without the aid of a telescope? I guess the same applies
for "if the light were spread out" (perhaps they are part of the same
confusion). If the light is spread out where? Is there a distinction between
what we perceive through the telescope and the actual image in the
telescope?

Tim, imagine that a theoretically perfect telescope is forming an image on a
rectangular array of microscopic-sized light-intensity sensors. Photons
coming from a point source -- say a star -- would be focused onto a single
sensor, except that the rays interfere with each other and as a result, they
end up being spread out into a series of concentric rings (light and dark
interference bands) around the location of the single sensor. These rings
register on the surrounding sensors.

Now imagine that the telescope is pointed at the moon. Light from one
point-location on the moon "should" produce a single point of illumination
on the sensor array, but, as with the star, it is instead spread out into
concentric rings around that point. Another point on the moon, close to the
first point, should illuminate the next sensor of the array, but again the
light from this source is spread out into concentric rings. And so on for
all the places on the moon. So the brightness detected by each sensor of
the array is really the sum of all the illumination from all these
overlapping rings that hit that sensor. That brightness can be represented
by a number (ranging, say, from 0 to 255) to give 256 steps of brightness
for each sensor position in the rectangular array.

Because of the interference bands, the image will appear fuzzy and the
contrast (range of intensities present) will be less than would be the case
if there were no interference. The problem is to recreate the ideal image
(what would be present if there were no interference) from the pattern of
brightnesses of the pixels surrounding each individual pixel (sensor value)
in the current image. Because the brightness of each pixel is represented
by a number in the array, mathematical operations can be performed on the
numbers to accomplish this. In Bill's procedure, a "mask" is created, an
array of numbers representing what proportion of the light that "should"
have struck a single sensor appears at that sensor and at sensors
immediately surrounding that sensor. This mask can be centered on a given
pixel and used to mathematically estimate what the brightness of the center
pixel would have been without the scatter.

Bill presented a simple example of a mask that would convert the brightness
of a single point of light into the brightnesses of that point and of the
surrounding points as a result of the interference:

Brightness     fuzzing              actual image      perfect image
of point       function             brightness        brightness
               ("mask")

               0.05 0.07 0.05        5  7  5            0   0   0
   100         0.07 0.52 0.07        7 52  7            0 100   0
               0.05 0.07 0.05        5  7  5            0   0   0

(I've fixed the typo: the center pixel under "image brightness" originally
read "42.") I've added a new array to this: the "perfect image array,"
which is how the array would read if there were no interference. If the
effect of the "fuzzing function" (interference) could be eliminated from the
image, this is how the brightness array would look. The "fuzzing function"
shows that 7% of the light from the original point source would appear at
each pixel directly north, south, east, or west of the point and 5% would
appear at the four corner pixels, leaving only 52% of the original
brightness appearing at the center.

Now imagine that there were _two_ point sources, separated horizontally by
two pixels. The perfect image looks like this:

     0   0   0   0   0
     0 100   0 100   0
     0   0   0   0   0

The "fuzzed out" image would look like this:

     5   7  10   7   5
     7  52  14  52   7
     5   7  10   7   5

Note that the brightnesses of the center column are the sums of the
brightnesses produced by the spread-out light from each point source.
Bill's procedure tries to deduce the pattern of brightnesses that would be
present on the array if interference did not occur by (a) determining the
"fuzzing function" (theoretical interference pattern around each pixel) and
(b) finding a pattern of pixel brightnesses which, when the fuzzing function
is applied to each pixel, results in the array of brightnesses actually
recorded on the array.
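
(The same convolution reproduces the two-point example above, and running the
additive feedback loop on its output recovers the perfect image -- a tiny
end-to-end check of the procedure. Illustrative code, not Bill's program:)

    import numpy as np
    from scipy.signal import convolve2d

    mask = np.array([[0.05, 0.07, 0.05],
                     [0.07, 0.52, 0.07],
                     [0.05, 0.07, 0.05]])

    perfect = np.zeros((3, 5))
    perfect[1, 1] = perfect[1, 3] = 100.0   # two points, two pixels apart

    fuzzed = convolve2d(perfect, mask, mode="same")
    print(fuzzed)                           # matches the fuzzed array above

    # Feedback deconvolution: adjust an estimate until its fuzzed
    # version matches the recorded image.
    sharp = fuzzed.copy()
    for _ in range(100):
        sharp += fuzzed - convolve2d(sharp, mask, mode="same")
    print(np.round(sharp))                  # ~ the perfect two-point image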

Regards,

Bruce A.

[From Tim Carey (991129.1710)]

[From Bruce Abbott (991128.1745 EST)]

Great post, Bruce. Thanks for the "clarity" :-)

Cheers,

Tim

[From Bruce Abbott (991129.0720 EST)]

Tim Carey (991129.1710) --

[From Bruce Abbott (991128.1745 EST)]

Great post, Bruce. Thanks for the "clarity" :-)

You know, it occurs to me that if Bill could develop a program that works
the same way on CSGnet posts, he'd really have something! Think of it -- in
goes one of my fuzzier messages, and out comes what I really _meant_ to
convey, in startling clarity!

Regards,

Bruce

[From Bruce Nevin (991129.0953 EST)]

At first I thought this might be a possible mechanism in perceptual input
functions or output functions. Then it seemed that this was not feasible,
since it seems to require a perception of attributes of the means of
perception (the mask corresponds to optical imperfection). But now it
occurs to me that something like this could evolve by the usual trial and
error (as in _Without Miracles_). Therefore mechanisms like this could
after all be involved in "clearing up" perceptions. Nothing more than a
hunch, but interesting.

  Bruce Nevin

Hi, Tim --

This bit I got swamped in. I'm not sure what you mean by calculating what
the array of brightnesses would be. Calculating what they would be where?

On a piece of film at the focus of the telescope, or on your retina, or on
the sensitive surface of the electronic camera. Normally the image of a
star recorded on film would just look like a point. But if you used a
highly magnifying telescope, the image on the film, even with the telescope
perfectly focused, would consist of a bright central disk surrounded by
faint rings.

Are
you calculating what they would be if we were close enough to see that
degree of detail without the aid of a telescope?

No -- these details don't exist around the real star. They are created by
diffraction effects in the telescope. If you could get close enough to the
real star, you'd see a round ball like our sun.

Best,

Bill P.

[From Bill Powers (991129.0854 MDT)]

In my never-ending quest for something or other, I kept looking on the Web
for prior art concerning the image sharpening method, and finally found it.
There is a method called the Richardson-Lucy method, first published by
Richardson in 1972 in the Journal of the Optical Society of America Vol 62
p. 55. It is now the primary method for restoring images from the Hubble
Space Telescope. It is not exactly the same as mine, and I believe it was
published after I got mine into print, but it uses the same feedback
principle. An artificially created "sharp" image is fuzzed out using a mask
or instrument profile, either theoretical or obtained from a star image.
Then the resulting fuzzed image is compared, point by point, with the
original telescopic image. In Richardson's case, the subsequent correction
to the sharpened image is made by _multiplying_ each point in the
artificial image by the _ratio_ of the original image to the fuzzed image,
while in my approach the _difference_ between those two images is _added_
to the artificial image. I added the RL method to my program so I could
compare it with my method, and the results are essentially identical.

So, no great breakthrough. My method does have one advantage, which is that
it does not involve dividing by zero as Richardson's does in places where
the brightness of the fuzzed image goes to zero. I dealt with that when
testing the RL method by simply not making a correction in those places.
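
(Side by side, the two update rules look like this in Python -- a sketch based
on the description above, not either author's actual code. Note that the full
Richardson-Lucy update also correlates the ratio image with the instrument
profile before multiplying, a detail omitted here for brevity.)

    import numpy as np
    from scipy.signal import fftconvolve

    def step_additive(sharp, observed, mask):
        # Bill's rule: add the pointwise difference back into the estimate.
        fuzzed = fftconvolve(sharp, mask, mode="same")
        return sharp + (observed - fuzzed)

    def step_ratio(sharp, observed, mask):
        # Ratio rule: multiply by observed/fuzzed, skipping the correction
        # wherever the fuzzed brightness is zero (the divide-by-zero fix).
        fuzzed = fftconvolve(sharp, mask, mode="same")
        ratio = np.ones_like(fuzzed)
        nz = fuzzed != 0
        ratio[nz] = observed[nz] / fuzzed[nz]
        return sharp * ratio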

If anybody has access to that journal, I would be curious as to whether
Richardson cited me in his article.

Best,

Bill P.

[From Tim Carey (991130.0540)]

[From Bruce Abbott (991129.0720 EST)]

You know, it occurs to me that if Bill could develop a program that works
the same way on CSGnet posts, he'd really have something! Think of it -- in
goes one of my fuzzier messages, and out comes what I really _meant_ to
convey, in startling clarity!

But doesn't that mean that there would have to be a "perfect point"
somewhere in all the CSGnet posts ;-)

Cheers,

Tim

[From Tim Carey (991130.0715)]

Thanks


[From Bruce Abbott (991129.1710 EST)]

Tim Carey (991130.0540) --

Bruce Abbott (991129.0720 EST)

You know, it occurs to me that if Bill could develop a program that works
the same way on CSGnet posts, he'd really have something! Think of it -- in
goes one of my fuzzier messages, and out comes what I really _meant_ to
convey, in startling clarity!

But doesn't that mean that there would have to be a "perfect point"
somewhere in all the CSGnet posts ;-)

Well, ... er, yes! On second thought, maybe it wouldn't work so well after all.

Regards,

Bruce

[From Dag Forssell (991207 2100)]

This was sent just before Christine and I went away for a week, but I used
another computer and managed to use an apparently obsolete address. Right
now, 62 messages are downloading and we will sit down to read snail-mail.

[From Dag Forssell (991201 0520)]

[From Bill Powers (991122.0308 MDT)]

Attachment: csgsampl.bmp

For those interested in the image sharpening project, the attached bitmap....

Since I get the CSGnet digest, this attachment appears as uuencoded text in
the body of the digest. At long last, I cut it out as follows:

--=====================_943291909==_
Content-Type: application/octet-stream; name="csgsampl.bmp";
x-mac-type="424D5070"; x-mac-creator="4A565752"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="csgsampl.bmp"

Qk02DAIAAAAAADYEAAAoAAAAfQIAANAAAAABAAgAAAAAAAAIAgDEDgAAxA4AAAAAAAAAAAAAAAAA
AICAgAAAAIAAAICAAACAAACAgAAAgAAAAIAAgABAgIAAQEAAAP+AAACAQAAA/wBAAABAgAD///8A
[snip]
Dg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4O
Dg4ODg4ODg4ODg4ODg4ODg4AAAA=
--=====================_943291909==_
Content-Type: text/plain; charset="us-ascii"

and saved it as a text file. Any name goes. 180 KB.

I then drag this text file to the uudecode program, which is widely
available on the Internet, or as I prefer, Aladdin Expander from Aladdin
Systems. (It opens many formats.)

The file appears in the same directory as csgsampl.bmp, 132 KB.
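
(Nowadays the decoding step can be done in a few lines of Python: the
attachment body is base64, per the Content-Transfer-Encoding header above.
This sketch assumes the encoded lines have been saved, with the MIME headers
and boundary lines trimmed off, to a file named csgsampl.b64 -- a made-up
name:)

    import base64

    with open("csgsampl.b64") as f:
        # b64decode ignores the newlines between the encoded lines.
        data = base64.b64decode(f.read())

    with open("csgsampl.bmp", "wb") as out:
        out.write(data)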

Now I see just how beautiful Bill's sharpening is! Stunning indeed.

Best, Dag

Dag Forssell
dag@forssell.com, www.forssell.com
23903 Via Flamenco, Valencia CA 91355-2808 USA
Tel: +1 661 254 1195 Fax: +1 661 254 7956