Wavelets and Image Processing

Dr. J. Derado

Fall 2004

KSU

**Introduction**

Since the advent of
high-performance computing, digital imaging technologies, and high-level
languages, it has become not just feasible but commonplace to manipulate,
analyze, and transform what we see or can visualize. During the sixteen weeks
that comprise this semester, we have delved into image processing and analysis,
arming ourselves with wavelet theory, image manipulation tools, and our own
programming capabilities.

While
learning about wavelet theory and imaging technology, we set forth to
empirically test these concepts using reference images such as the venerable
Lena image.

Note
that we have used Visual Studio .NET C# as our development platform and Adobe
Photoshop 7.0 as our image manipulation tool. Due to the experimental nature of
this project, the source code is far from optimized and will at times feel
cumbersome.

**1. The Haar
Transform**

We began by implementing the simplest kind of wavelet
transform in our project, the Haar transform. For an image *P* of size N x N we
perform a two-pass filter. On our first pass, we process the image such that on
a row *k*, for all pixels x = 0, ..., N/2 - 1 we store the averages

P'(x, k) = (P(2x, k) + P(2x+1, k)) / √2

and, offset by N/2, the differences

P'(x + N/2, k) = (P(2x, k) - P(2x+1, k)) / √2.

On our second pass, we perform the transpose of this
operation; that is, we process the image column by column and store the
averages and the differences of the result of the above process.

**Sample code 1.1: First pass Haar transform**

private double[,] firstPass(double[,] inputMatrix, int level, UpdateForm updateForm)
{
    double[,] resultMatrix = new double[inputMatrix.GetLength(0), inputMatrix.GetLength(1)];
    int offset = inputMatrix.GetLength(0) / 2;
    int currentPixel = 0;
    double size = inputMatrix.GetLength(0) * inputMatrix.GetLength(1);

    updateForm.updateProgress(0.0);
    updateForm.LabelProgress = "Processing Data";
    for (int y = 0; y < inputMatrix.GetLength(1) / level; y++)
    {
        for (int x = 0; x < inputMatrix.GetLength(0) / 2 / level; x++)
        {
            currentPixel++;
            // Average into the left half; SQRT2 is a class constant for Math.Sqrt(2.0).
            resultMatrix[x, y] = (inputMatrix[2*x, y] + inputMatrix[2*x+1, y]) / SQRT2;
            // Difference into the right half; the +127 shifts it toward mid-gray for display.
            resultMatrix[x + offset, y] = (inputMatrix[2*x, y] - inputMatrix[2*x+1, y] + 127) / SQRT2;

            // Progress bar updating
            if (currentPixel % 10000 == 0)
            {
                updateForm.updateProgress(((double)currentPixel * 2 / size) * 100.0);
            }
        }
    }
    return resultMatrix;
}

**Illustrations 1.1 and 1.2: Haar Transform Row-by-Row and Column-by-Column**

**Illustration 1.3: Haar
Transform End Result**

The
purpose of this transformation is to compress the energy into the first
quadrant of the picture, and subsequent tests indicate that over 70% of the
energy is contained in this first quadrant. The top-right quadrant stores the
horizontal differences, the bottom-left the vertical ones, and the bottom-right
the diagonal ones. Note that this last picture shows a small amount of detail
that remains unexplained; it appears to be a problem with loss of precision and
correct color rescaling.
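This energy concentration is easy to check numerically. The following is a small Python sketch (illustrative only, separate from our C# project code) of the two-pass transform and the first-quadrant energy fraction:

```python
import math

def haar_pass_row(row):
    """One Haar pass over a row: averages in the front half, differences after."""
    n = len(row) // 2
    avg = [(row[2*i] + row[2*i+1]) / math.sqrt(2) for i in range(n)]
    diff = [(row[2*i] - row[2*i+1]) / math.sqrt(2) for i in range(n)]
    return avg + diff

def haar2(image):
    """Row pass followed by the same pass on the columns (the transpose)."""
    rows = [haar_pass_row(r) for r in image]
    cols = [haar_pass_row(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def energy_fraction_top_left(image):
    """Fraction of the total energy (sum of squares) in the first quadrant."""
    t = haar2(image)
    h, w = len(t), len(t[0])
    total = sum(v * v for r in t for v in r)
    tl = sum(t[y][x] ** 2 for y in range(h // 2) for x in range(w // 2))
    return tl / total
```

Because the /√2 normalization makes the transform orthonormal, the total energy is conserved exactly, and for a smooth test image the top-left quadrant captures well over 70% of it.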

We reworked our program to perform the Haar transform in a single
step, and the results are much better. In this one-pass transform, we handle
four pixels at a time, calculating the averages and the details simultaneously.
Also note that since we divide by the square root of two, as
opposed to the number of pixels (as for the statistical mean), we have had to
linearly scale the color range appropriately to avoid clipping, loss
of precision, or loss of energy.

**Illustration 1.4: One-pass Haar
Transform**

Note that in order to preserve the sign of the
differences, we have been translating the difference results to the middle
brightness value (128). This way, darker values correspond to negative
differences and lighter values to positive differences.

We also decided to perform some experiments with
thresholding the differences, producing both binary edge maps and scaled
absolute-value images.

**Sample code 1.2: Binary/Threshold Haar transform and Absolute Value**

private double[,] singlePassThreshold(double[,] inputMatrix, UpdateForm updateForm, double threshold, int level, bool binary)
{
    level = (int)Math.Pow(2.0, level - 1);
    double[,] resultMatrix = new double[inputMatrix.GetLength(0), inputMatrix.GetLength(1)];
    int xOffset = inputMatrix.GetLength(0) / 2 / level;
    int yOffset = inputMatrix.GetLength(1) / 2 / level;
    int currentPixel = 0;
    double size = inputMatrix.GetLength(0) * inputMatrix.GetLength(1);
    double multiplier = double.Parse(textMultiplier.Text);

    updateForm.updateProgress(0.0);
    updateForm.LabelProgress = "Processing Data";
    for (int y = 0; y < inputMatrix.GetLength(1); y++)
    {
        for (int x = 0; x < inputMatrix.GetLength(0); x++)
        {
            if ((y < inputMatrix.GetLength(1) / 2 / level) && (x < inputMatrix.GetLength(0) / 2 / level))
            {
                currentPixel++;
                // Four-pixel average, rescaled into the displayable range.
                resultMatrix[x, y] = normalizeColors((inputMatrix[2*x, 2*y] +
                    inputMatrix[2*x+1, 2*y] + inputMatrix[2*x, 2*y+1] + inputMatrix[2*x+1, 2*y+1]) / SQRT2, 0, 721);
                double vertDiff = (-inputMatrix[2*x, 2*y] - inputMatrix[2*x+1, 2*y] +
                    inputMatrix[2*x, 2*y+1] + inputMatrix[2*x+1, 2*y+1]);
                double horzDiff = (inputMatrix[2*x, 2*y] - inputMatrix[2*x+1, 2*y] +
                    inputMatrix[2*x, 2*y+1] - inputMatrix[2*x+1, 2*y+1]);
                double diagDiff = (-inputMatrix[2*x, 2*y] + inputMatrix[2*x+1, 2*y] -
                    inputMatrix[2*x, 2*y+1] + inputMatrix[2*x+1, 2*y+1]);
                if (binary)
                {
                    // Mark only the differences that exceed the threshold.
                    if (Math.Abs(vertDiff) > threshold) { resultMatrix[x + xOffset, y] = 255; }
                    if (Math.Abs(horzDiff) > threshold) { resultMatrix[x, y + yOffset] = 255; }
                    if (Math.Abs(diagDiff) > threshold) { resultMatrix[x + xOffset, y + yOffset] = 255; }
                }
                else
                {
                    // Otherwise store scaled absolute values of the differences.
                    resultMatrix[x + xOffset, y] = multiplier * Math.Abs(vertDiff);
                    resultMatrix[x, y + yOffset] = multiplier * Math.Abs(horzDiff);
                    resultMatrix[x + xOffset, y + yOffset] = multiplier * Math.Abs(diagDiff);
                }
                if (currentPixel % 10000 == 0)
                {
                    updateForm.updateProgress(((double)currentPixel * 2 / size) * 100.0);
                }
            }
            else
            {
                if ((x >= inputMatrix.GetLength(0) / level) || (y >= inputMatrix.GetLength(1) / level))
                {
                    resultMatrix[x, y] = inputMatrix[x, y];
                }
            }
        }
    }
    return resultMatrix;
}

**Illustration 1.5: Binary Difference With Threshold 1.0**

**Illustration 1.6: Binary Difference, Threshold 5.0**

**Illustration 1.7: Binary Difference, Threshold 10.0**

**Illustration 1.8: Binary Difference, Threshold 20.0**

**Illustration 1.9: Absolute Value of Difference**

This absolute value pass shows very little energy, so we
process it in Photoshop to compress the brightness range and see more detail.
The typical histogram of the absolute value differences for an image looks like
the following graph and we proceed to compress this:

**Illustration 1.10: Compressed Brightness Range in Absolute Value**
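The brightness-range compression we perform in Photoshop amounts to a linear contrast stretch; a small Python sketch of the idea (illustrative only):

```python
def stretch_contrast(pixels, lo, hi):
    """Linearly map the band [lo, hi] onto the full 0..255 range.

    Values outside the band are clipped; faint detail concentrated in a
    narrow brightness band becomes visible.
    """
    out = []
    for p in pixels:
        v = (p - lo) * 255.0 / (hi - lo)
        out.append(max(0.0, min(255.0, v)))
    return out
```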

Finally, we implemented multiple-level Haar
transforms. In fact, our Sample Code 1.1 already includes an
argument *level* that specifies which
level of the Haar transform it will perform.
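Each additional level re-applies the transform to the averages only. A one-dimensional Python sketch of this recursion (illustrative, mirroring the role of the *level* argument):

```python
import math

def haar_step(signal):
    """One level: averages in the front half, details in the back half."""
    n = len(signal) // 2
    avg = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(n)]
    det = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(n)]
    return avg, det

def haar_multilevel(signal, levels):
    """Re-apply the step to the shrinking average block only, as the
    image code does for the first quadrant at each level."""
    out = list(signal)
    length = len(signal)
    for _ in range(levels):
        avg, det = haar_step(out[:length])
        out[:length] = avg + det
        length //= 2
    return out
```

On a constant signal, three levels drive all the energy into the very first coefficient, with every detail coefficient equal to zero.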

**Illustration 1.11: 3rd Level Haar Transform**

**Processing Hurricanes**

At this point we turned our attention to using the tools
that we had developed on geostationary satellite images.

**Illustration 1.12: Hurricane Ivan over the Coast of Florida**

**Illustration 1.13: Hurricane Ivan, Haar 1st Level Transform**

As we can see there is no apparent detail of the
hurricane, but if we compress the color range of the differences in Photoshop,
we get the following dramatic results:

**Illustration 1.14: Ivan, Haar
Transform, Exaggerated**

This picture does indeed show substantial detail in the
hurricane, particularly in the region of the eye.

*Geostationary satellite image processing:*

Now we will take a look at Ivan as it makes its way
through the Gulf of Mexico.

**Illustration 1.15: Geostationary Progression**

As we can
see, detail is consistent across the pictures, with both the eye and arms being
particularly detectable.

**2. Daubechies
Transform**

The Daubechies coefficients that we used for this transform are the standard D4 scaling coefficients

a1 = (1 + √3)/(4√2),  a2 = (3 + √3)/(4√2),  a3 = (3 - √3)/(4√2),  a4 = (1 - √3)/(4√2)

and the corresponding wavelet (difference) coefficients:

b1 = a4,  b2 = -a3,  b3 = a2,  b4 = -a1.

This transform has the peculiarity that it works on four
pixels at the same time, yet every iteration increments by two. Because of this
“overlap”, it is necessary to “wrap around” such that when we reach the end of
the picture and we are processing the last two pixels, we also use the first
two pixels to make the total of four.
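The same wrap-around can also be written with modular indexing instead of a special case for the last pair. A Python sketch, assuming the standard D4 coefficient values above (illustrative only, not our C# code):

```python
import math

# Standard Daubechies-4 scaling coefficients (the a1..a4 above).
S3 = math.sqrt(3)
A = [(1 + S3) / (4 * math.sqrt(2)), (3 + S3) / (4 * math.sqrt(2)),
     (3 - S3) / (4 * math.sqrt(2)), (1 - S3) / (4 * math.sqrt(2))]
# Matching wavelet (detail) coefficients b1..b4.
B = [A[3], -A[2], A[1], -A[0]]

def daub_pass(row):
    """One D4 pass over a row; the % n index wraps the last window
    around to the first two pixels."""
    n = len(row)
    avg = [sum(A[k] * row[(2*i + k) % n] for k in range(4)) for i in range(n // 2)]
    det = [sum(B[k] * row[(2*i + k) % n] for k in range(4)) for i in range(n // 2)]
    return avg + det
```

With this periodic boundary the transform stays orthonormal: a constant row produces zero details, and the total energy (sum of squares) is preserved.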

**Sample Code 2.1: Daubechies First Pass**

private double[,] daubFirstPass(double[,] inputMatrix, UpdateForm updateForm, int level, bool ignoreAverage)
{
    double[,] resultMatrix = new double[inputMatrix.GetLength(0), inputMatrix.GetLength(1)];
    int offset = inputMatrix.GetLength(0) / 2;
    int currentPixel = 0;
    for (int y = 0; y < inputMatrix.GetLength(1); y++)
    {
        for (int x = 0; x < inputMatrix.GetLength(0) / 2; x++)
        {
            currentPixel++;
            if (x < (inputMatrix.GetLength(0) / 2) - 1)
            {
                resultMatrix[x, y] = a1*inputMatrix[2*x, y] + a2*inputMatrix[2*x+1, y] +
                    a3*inputMatrix[2*x+2, y] + a4*inputMatrix[2*x+3, y];
                resultMatrix[x + offset, y] = Double.Parse(textMultiplier.Text) *
                    (b1*inputMatrix[2*x, y] + b2*inputMatrix[2*x+1, y] +
                     b3*inputMatrix[2*x+2, y] + b4*inputMatrix[2*x+3, y]);
            }
            else
            {
                // Wrap around: the last pair is combined with the first two pixels.
                resultMatrix[x, y] = a3*inputMatrix[0, y] + a4*inputMatrix[1, y] +
                    a1*inputMatrix[2*x, y] + a2*inputMatrix[2*x+1, y];
                resultMatrix[x + offset, y] = Double.Parse(textMultiplier.Text) *
                    (b3*inputMatrix[0, y] + b4*inputMatrix[1, y] +
                     b1*inputMatrix[2*x, y] + b2*inputMatrix[2*x+1, y]);
            }
        }
    }
    return resultMatrix;
}

*Note that in the above code there seem to be unused parameters; these are used
in the final project.*

The Daubechies transform produced such a dark picture that it was only useful
after Photoshop processing.

**Illustration 2.1: Daubechies Transform on Lena**

The results on the hurricane imagery, however, were less encouraging.

**Illustration 2.2 : Hurricane Ivan, Daubechies Transform**

As we can see, there is very little eye or arm detail
picked up in this case, even though the original photograph clearly shows both.
However, when we processed a more cooperative hurricane, Frances, the transform
fared noticeably better:

**Illustration 2.3: Hurricane Frances, Daubechies Transform**

**3. 3 x 3 Transform**

We finally arrive at the 3 x 3 transform. In this transform,
we process three pixels at a time, with no overlap, using a 3 x 3 matrix of
coefficients:

    c1  c2  c3
    c4  c5  c6
    c7  c8  c9

Once again we handle the image row by row, and then column by column. For all pixels

x = 0, ..., N/3 - 1 we define

P'(x) = c1·P(3x) + c2·P(3x+1) + c3·P(3x+2),

which are the energy-conserving averages. For the pixels in the second third we define

P'(x + N/3) = c4·P(3x) + c5·P(3x+1) + c6·P(3x+2),

and finally, for pixels in the last third we have

P'(x + 2N/3) = c7·P(3x) + c8·P(3x+1) + c9·P(3x+2).
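Since the nine coefficients are entered by the user, any orthonormal 3 x 3 matrix works here. The following Python sketch uses one common orthonormal choice (an assumption for illustration, not necessarily the exact values we used) and demonstrates the energy-conserving property:

```python
import math

# One common orthonormal 3x3 choice (assumed for illustration only).
ROWS = [
    [1 / math.sqrt(3)] * 3,                              # energy-conserving average
    [1 / math.sqrt(2), 0.0, -1 / math.sqrt(2)],          # first difference
    [1 / math.sqrt(6), -2 / math.sqrt(6), 1 / math.sqrt(6)],  # second difference
]

def three_by_three_pass(row):
    """Apply the 3x3 matrix to disjoint triples: averages land in the
    first third of the output, the two kinds of differences after it."""
    n = len(row) // 3
    out = [0.0] * (3 * n)
    for i in range(n):
        triple = row[3*i:3*i+3]
        for j in range(3):
            out[i + j * n] = sum(ROWS[j][k] * triple[k] for k in range(3))
    return out
```

Because the rows are orthonormal, the sum of squares is preserved, and a constant row produces zero in both difference thirds.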

The code was much more straightforward than the Daubechies transform, which required the "wrap-around". I
also made it general, accepting user input for the nine coefficient values
c1 through c9, which appear in the next example as text fields.

**Sample Code 3.1: 3 x 3 Transform**

private double[,] threeByThreeSecondPass(double[,] inputMatrix, UpdateForm updateForm)
{
    double[,] resultMatrix = new double[inputMatrix.GetLength(0), inputMatrix.GetLength(1)];
    int offset = (int)Math.Floor((double)inputMatrix.GetLength(1) / 3.0);
    // The nine coefficients are read from text fields on the form.
    double c1_ = double.Parse(c1.Text);
    double c2_ = double.Parse(c2.Text);
    double c3_ = double.Parse(c3.Text);
    double c4_ = double.Parse(c4.Text);
    double c5_ = double.Parse(c5.Text);
    double c6_ = double.Parse(c6.Text);
    double c7_ = double.Parse(c7.Text);
    double c8_ = double.Parse(c8.Text);
    double c9_ = double.Parse(c9.Text);
    for (int x = 0; x < inputMatrix.GetLength(0); x++)
    {
        for (int y = 0; y < offset - 3; y++)
        {
            resultMatrix[x, y] = c1_*inputMatrix[x, 3*y] + c2_*inputMatrix[x, 3*y+1] + c3_*inputMatrix[x, 3*y+2];
            resultMatrix[x, y + offset] = c4_*inputMatrix[x, 3*y] + c5_*inputMatrix[x, 3*y+1] + c6_*inputMatrix[x, 3*y+2];
            resultMatrix[x, y + 2*offset] = c7_*inputMatrix[x, 3*y] + c8_*inputMatrix[x, 3*y+1] + c9_*inputMatrix[x, 3*y+2];
        }
    }
    return resultMatrix;
}

**Illustration 3.1: 3 x 3 Transform on Lena**

These
results are dramatic since they don't include Photoshop post-processing. Most
of the energy has been compressed into the averages, as expected, but most
detail can be found in the top-right and bottom-left areas,
leaving six areas with little to no information. After post-processing, it is
even more evident how much detail was preserved in these two areas.

**Illustration 3.2: Exaggerated 3 x 3 Transform**

We then
immediately proceeded to look at several hurricane photographs and compare the
amount of detail preserved by the 3 x 3 transform with that preserved by the
Daubechies transform.

**Illustration
3.3 : Hurricane Ivan, Daubechies vs. 3 x 3**

Even in
these smaller sizes, the difference is staggering. Where Daubechies
could barely represent the detail of the eye and arms of hurricane Ivan, the 3
x 3 transform has no problem. The advantage is just as clear in a more
cooperative picture, such as hurricane Frances:

**Illustration
3.3 : Hurricane Frances, Daubechies vs. 3 x 3**

We
proceeded to analyze five hurricanes, whose 3 x 3 transforms can be found in
the CD PowerPoint presentation that accompanies this document.

After
producing 3 x 3 transforms of these hurricanes, we focused on the two areas
that had most of the detail. For every hurricane, we select those two areas and
make a new image comprised of only those two. We then look at the histogram of
the resulting image.

**Illustration
3.4 : Histogram of Hurricane Frances Detail**

We now remove everything within +/-9 of the most common
coefficient value and get the following results.

**Illustration
3.5 : Histogram and Image of Hurricane Frances After +/-9 Threshold**

We then tweak our threshold to
+/-5 of the median and try again.
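The median-based pruning can be sketched in a few lines of Python (illustrative only):

```python
import statistics

def prune_near_median(values, band=5):
    """Zero out every coefficient within `band` of the median value,
    keeping only the pixels that stand out from the bulk of the histogram."""
    med = statistics.median(values)
    return [0 if abs(v - med) <= band else v for v in values]
```

Only the coefficients well away from the central hump of the histogram survive, which is exactly the detail we want to isolate.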

**Illustration
3.6 : Frances Detail After +/-5 Threshold**

Repeating this process for the
other hurricanes, we get these results.

**Illustration
3.7 : Isabel Detail After +/-5 Threshold**

**Illustration
3.8 : Ivan Detail After +/-5 Threshold**

**Illustration
3.9 : Charley Detail After +/-5 Threshold**

As we can see, some of these
images simply have too much information at the median +/-5 threshold level.

**4. Conclusions**

Our original intent for
the 3 x 3 transform was to find raw coefficients that would somehow be
unique in the description of the hurricanes. Since the pictures vary in angle,
location, and quality, this wasn't possible. The coefficient distribution seemed
to owe much more to the peculiarities of each image than to the intrinsic nature
of the hurricane. We also lacked the time to perform a thorough statistical
analysis of the resulting coefficients (the raw coefficients, not the ones that
have been scaled and normalized for the sake of visualization). However,
it is important to note that the final tweaking of the pixel histogram of
the hurricane detail does show that it is possible to describe the arms and
eye of a hurricane using very little information (energy).

It is also important to note
that in order to proceed with deeper research, it would be vital to develop
supporting image analysis tools in place of Photoshop to complement the
wavelet transform software that we have developed. There were many times where
Photoshop, powerful as it is as an image manipulation program, could not give
us the in-depth analysis that we needed. In the end, it is time that truncates
our efforts, but the results achieved with the 3 x 3 transform seem very
promising and invite us to continue studying it.