Tuesday, December 27, 2011

Wine & Cheese visited again

A while back we talked about the jpeg artifacts in an image given to me by my good friend Justin Lau. Apparently, my solution did not please him very much, so I am revisiting the poster. Here is the output I got with my new algorithm. I will revisit the algorithm again, since I already know Justin's objections in advance.

The code that generated the following output is up for auction. (Minimum bid is $2500)

Meanwhile, I will keep thinking about how to approach the quality of the poster obtained in pdf format.


Output of my new algorithm that corrects jpeg artifacts



Compare the above result to the original image below ...

The original image with severe jpeg artifacts.

Thursday, December 22, 2011

Automatic makeup!

Today, I will post results of my new algorithm, called 'automatic makeup'.
Below are the original image and the image produced by my automatic makeup algorithm.
Original image

Automatic make up!

Saturday, December 10, 2011

Nordstrom

Anisotropic diffusion may be awesome, but as we have seen, we don't know when to stop. Nordstrom, an electrical engineer, showed (1992) that adding $f-u$ to the right-hand side of the Perona-Malik equation makes its steady state non-trivial.
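In symbols (my paraphrase of the statement above, with the fidelity weight set to 1), this biased diffusion reads
$$\frac{\partial u}{\partial t}=\mbox{div}\,\big(g(\Vert\nabla u \Vert^2)\,\nabla u\big)+(f-u), \qquad u(0)=f.$$
Below is the steady state that was reached at t=1.05.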

Non-trivial steady state

Difference between the original and the steady state of the Nordstrom equation.

Sunday, December 4, 2011

Anisotropic diffusion is awesome

We know that the heat equation diffuses an image isotropically. This is not good, because it destroys the edges too. Somehow one must find a way to allow isotropic diffusion only in flat regions and to stop the diffusion near the edges. This is the motivation behind Pietro Perona and Jitendra Malik's original formulation of anisotropic diffusion.

The heat equation is $\frac{\partial u}{\partial t}=\Delta u$ i.e. $\frac{\partial u}{\partial t}=\mbox{div}\,(\nabla u)$ with $u(0)=f$.

The idea behind anisotropic diffusion is to multiply the gradient by a function $g$ that equals 1 when $\nabla u=0$, i.e. in flat regions, and goes to 0 as the gradient increases, i.e. near the edges. A typical function that works is $g(x)=\frac{1}{1+(x/b)^2}$ for some constant $b$.

That is $\frac{\partial u}{\partial t}= \mbox{div}\,(g(\Vert\nabla u \Vert^2) \nabla u)$.

Alternatively, this can also be viewed as the gradient descent for the minimization of the energy
$$E(u):=\frac{1}{2}\int_{\Omega} \phi(\Vert\nabla u\Vert^2)\,dx, \qquad \mbox{where } \phi'=g.$$
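For readers who want to experiment, here is a minimal sketch of the classical explicit Perona-Malik scheme with nearest-neighbour differences. This is my own illustration, not the code used for the figures below; scaling the image to [0, 1] (which seems consistent with b=0.01), the time step, and the periodic (circshift) boundary handling are all assumptions on my part.

% Minimal sketch of explicit Perona-Malik time stepping (illustration only).
% Assumptions: image scaled to [0, 1], dt=0.25, and circshift (periodic)
% boundaries instead of the reflecting boundaries used elsewhere on this blog.
f  = double(imread('eyes.jpg'))/255;
u  = f;  b = 0.01;  dt = 0.25;  T = 4;
g  = @(s2) 1./(1 + s2/b^2);          % edge-stopping function g(|grad u|^2)
for t = 0:dt:T
    dN = circshift(u, [-1 0]) - u;   % differences to the four neighbours
    dS = circshift(u, [ 1 0]) - u;
    dE = circshift(u, [0 -1]) - u;
    dW = circshift(u, [0  1]) - u;
    u  = u + dt*( g(dN.^2).*dN + g(dS.^2).*dS + g(dE.^2).*dE + g(dW.^2).*dW );
end
figure; imshow(u);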
Let us look at some experiments, now.


Here is an experiment with the Perona-Malik model, with b=0.01, at t=4.



Perona Malik at t=4.


There is hardly any loss of visual quality in the image; compare it to the original below. Notice, though, that there is some loss of detail in her hair.



The original




Let us see what happens as we continue to t=10 below. Ok, the image is a bit too smooth, rather artistic.



Perona Malik at t=10


Comparison with heat equation



Compare these results with the heat equation at t=4 below. Very smooth, edges are lost. Not good.



Heat equation at t=4.

Thursday, December 1, 2011

Holiday wine & cheese event is fun

Today, my dear friend Justin Lau showed me a jpeg poster that he complained was not very good. Of course! The jpeg compression produces strange artifacts. Below is the poster he showed me:


Original poster

Here is the detail of the artifact:

Details of the artifact





So I asked him if I could try to correct the image. This is what I got with the parameters 0.01, 6 and 8 in my new algorithm, which is also up for auction ($1000 minimum bid).


Details of the artifact removal algorithm




Complete poster corrected ... sort of. See this post also!




Boo, the sharpest dog in the world !

I do not like Matlab's unsharp filter. Aside from the fact that the name is stupid, this filter overshoots the edges.

Here is today's fun assignment: design a filter, using a linear combination of derivatives of the image, that does not overshoot.

Here is the output where I sharpen Boo, without overshooting him.

Boo, the sharpest dog in the world




Compare the above image with the original below. There is a subtle change in the image, which I personally believe makes it pleasing to look at.

Original Boo




Wednesday, November 30, 2011

A new fast denoising algorithm for auction

Here is a novel denoising algorithm. In Matlab, it denoises a 256x256 color image in 2.17 seconds and a grayscale image in 0.58 seconds (should that be plural or singular?).

I will give the code with explanation to the highest bidder. Minimum bid $1000 USD.

Noisy image

Denoised image

Details of the noisy image

Details of denoised image:
Notice the texture of the wig is almost perfectly maintained.

Nostalgia

It is snowing in Toronto and it is depressing and gloomy to look outside my window. This is a perfect time to think about the past. When I was a child we used to have a film camera. Tricky contraption, if you ask me. The light would be captured on a film, which we would call a 'negative'. This film was then developed into a photograph. For some reason, I was more fascinated by the negative. Somehow I would think that all my negativity was captured in that. So, on this gloomy day I found an old picture of mine and made its negative. Can you figure out a one-line code to do this?
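(If you give up, here is one possibility, assuming an 8-bit image; the file name is made up.)

>> f=imread('me.jpg'); figure; imshow(255-f);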




Me and my alter ego

Tuesday, November 29, 2011

A quick divergence to the cutest Laplacian in the world.

Let $\mathbf{u}=(u_1, u_2)$ be a vector valued function. Then the divergence operator $\mbox{div}\,\mathbf{u}:=\frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}$ is a very useful thing to have on our side. Again, I will leave this little code up to you. It is really a one-liner!
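For instance, with the central-difference derivative codes from the November 22 post, one possible one-liner is (my sketch, assuming the components u1 and u2 are stored as matrices):

>> div_u=Dfx_central(u1)+Dfy_central(u2);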

Note that if $\nabla u=(u_x, u_y)$, then $\mbox{div}(\nabla u)$ gives the Laplacian $\Delta u$ of the image $u$. This could be useful.
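For instance, d could be computed with the ifilter routine from the November 26 post and the 5-point Laplacian kernel (my sketch, not necessarily how it was done here):

>> u=double(imread('boo.jpg'));
>> d=ifilter(u, [0 1 0; 1 -4 1; 0 1 0]);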

Here is the Laplacian of Boo, displayed between -50 and 50,
i.e. using the following commands to display the Laplacian d:
>> m=-50; M=50; figure; imshow((d-m)/(M-m));

Laplacian of Boo (-50 to 50)


Here is the original.

Boo, the cutest dog in the world

Monday, November 28, 2011

The edge of darkness is upon us!

I assume that now we know how to find edges in a given image u. It's simple, just take the derivative!

Today, I want to find the smooth regions. Well, it's kind of silly, as now that we have the edges, the rest of the image is kind of edgeless. Right... but I would like to assign a number from 0 to 1 to each pixel, where 1 indicates a flat region and 0 indicates an edge; in other words, I want to make the edges appear dark.

There are many ways to do this. One of them is to look at the function $$g(x, y):=\frac{1}{\sqrt{1+|K*\nabla u(x, y)|}},$$ where $K$ is your favourite smoothing kernel. (I will leave it to you to code this as an exercise; it is a four-liner, as sketched below.)
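One possible four-liner for a grayscale image u (my sketch; the Gaussian kernel size, the use of gradient_central from the November 22 post, and smoothing the image before taking the gradient are my assumptions):

>> K=fspecial('gaussian', 5, 1);
>> us=conv2(double(u), K, 'same');
>> [ux, uy]=gradient_central(us);
>> g=1./sqrt(1+sqrt(ux.^2+uy.^2)); figure; imshow(g);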

Staring at this function is a refreshing activity, something that I love to do in my spare time.


Boo in the edge of darkness

Lenna in the edge of darkness

Barbara in the edge of darkness






Sunday, November 27, 2011

Boo, the smoothest dog in the world!

Yesterday night I was too sleepy to code, so today I wrote a code for filtering with a kernel larger than 3x3. By the way, I still like my kernels to be odd-sized and square.

Let's take the image of Boo, the cutest dog in the world, and make him the smoothest dog in the world by using a 5x5 averaging kernel.


>> h=ones(5)/25;
>> u=double(imread('boo.jpg'));
>> uf=ifilter(u, h);m=min(min(min(uf))); M=max(max(max(uf))); figure; imshow((uf-m)/(M-m));

Boo, the cutest dog in the world!

Boo, after smoothing with a 5x5 kernel becomes the smoothest dog in the world!

Saturday, November 26, 2011

Reflections on convolutions and animal testing

Convolution is a very important tool in image processing. There are built-in commands to perform convolution in Matlab. But while solving many problems in image processing we assume Neumann boundary conditions. For this reason we reflected the boundary with our cute little code expand, performed our algorithms in the interior of the image, and finally got rid of the extra image boundary with our shrink program. Many a time, filtering is realized by performing a convolution of some kernel K with the image u.

In a discrete case, the convolution K*u is essentially the "masking process" as described in my blog about the Sobel filtering.

For example, the Laplacian operator, or the Sobel operator we have discussed before could be coded elegantly using Matlab's inbuilt filter2 command (which uses conv2).

There are two things I don't like about using these two built-in commands for filtering.
  1. Matlab uses "zero padding" in these built-in commands, i.e. it adds zeros at the boundary of the image, whereas we need a reflecting boundary.
  2. These commands work only on matrices, not directly on RGB images.
Well, these are really trivial issues, and we can take care of them by writing our own little function, say ifilter.

This code performs a convolution of the image u with any 3x3 kernel h, with Neumann boundary conditions. That is, this code alone can find gradients, or perform Sobel, Prewitt, Roberts, or any other filtering with a 3x3 window, in a very nice, elegant way!

Of course you can write a more generalized code that is not restricted to a 3x3 window. I won't do it because it is 3:00 am and I can hardly stay awake.
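For concreteness, here is a minimal sketch of what such an ifilter could look like (a reconstruction for illustration, not the original code; it assumes the expand and shrink helpers from the November 23 post):

%==========================================================================

function uf=ifilter(u, h)

% Sketch of a 3x3 filtering routine with reflected (Neumann) boundaries.
% (A reconstruction for illustration; the original ifilter is not shown here.)

%==========================================================================

u=expand(double(u), 1);                 % reflect the boundary by one pixel
[M, N, P]=size(u);
uf=zeros(M, N, P);
for p=1:P                               % filter each channel separately
    uf(:,:,p)=filter2(h, u(:,:,p), 'same');
end
uf=shrink(uf, 1);                       % back to the original size

end % end of the function ifilter

%==========================================================================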

Let's see, which filter should I choose... Let's design a filter that picks up derivatives at a 45-degree angle.

          0     1     0
h =      -1     0     1
          0    -1     0

We can normalize the above filter by dividing it by 2.

Let us do some animal testing now. I want to use an image of a Boston terrier.

Boston terrier
Here is the result of the filter, after using the following commands:

>> u=imread('bostonterrier.png');
>> h=(1/2)*[0 -1 0; -1 0 1; 0 1 0];
>> uf=ifilter(u, h);m=min(min(min(uf))); M=max(max(max(uf))); imshow((uf-m)/(M-m));

Directional derivative

               
Laplacian dog

Last time we used the Laplacian; let's use ifilter to find the Laplacian of the Boston terrier.


>> h=[0 1 0; 1 -4 1; 0 1 0];
>> uf=ifilter(u, h);m=min(min(min(uf))); M=max(max(max(uf))); imshow((uf-m)/(M-m));



Laplacian dog


Prewitt operator

Here is the classical Prewitt operator implemented using ifilter


>> hx=(1/3)*[-1 -1 -1; 0 0 0; 1 1 1]


   -0.3333   -0.3333   -0.3333
         0         0         0
    0.3333    0.3333    0.3333


>> hy=(1/3)*[-1 0 1; -1 0 1; -1 0 1]


   -0.3333         0    0.3333
   -0.3333         0    0.3333
   -0.3333         0    0.3333


>> ux=ifilter(u, hx); uy=ifilter(u, hy); prewitt=sqrt(ux.^2+uy.^2);
>> m=min(min(min(prewitt))); M=max(max(max(prewitt))); imshow((prewitt-m)/(M-m));


Prewitt dog

Friday, November 25, 2011

Heat equation removes noise ... and many other things

The heat equation $\frac{\partial u}{\partial t}=\Delta u$ is extremely useful for smoothing images,
where $u(x, 0)=f(x)$, i.e. $u$ is initialized to the given original image $f$, the image to be smoothed; we also impose Neumann boundary conditions. Smoothing the image removes noise in $f$. The problem with the heat equation, though, is that it cannot distinguish between noise and useful features like edges. Plus, you never know when to stop! It just keeps on smoothing.

The boundary condition that we use is the Neumann boundary condition, i.e. the normal derivative at the boundary is zero. The way to maintain this is to reflect the boundary, run the heat equation in the interior of the image, shrink it back to the original size, and repeat.

The Laplacian operator $\Delta u=u_{xx}+u_{yy}$ is easy to discretize:
$\Delta u \equiv Lu=(u_{i, j-1}+u_{i-1, j}+u_{i, j+1}+u_{i+1, j}-4u_{i, j})/h^2.$

It's a five-point operator:

$$
Lu=\frac{1}{h^2}\left[\begin{array}{ccc}0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0\end{array}\right]
$$
Before we perform the heat equation on an image $f$, we should see how the image really looks.


>> f=imread('eyes.jpg');
>> figure; imshow(f);



This is what we get...


Image f

Let us try to view the image as the graph of a function. Since this is an RGB image, we have to change it to grayscale before viewing it this way.

>> figure; mesh(double(rgb2gray(f)));


This command gives the following output.

Image f viewed as a graph of a function


In the above image, the grayscale magnitude is shown with blue indicating small values and red indicating large values. This is a very "spiky" image. If we run the heat equation on this image, we should get a smoother surface.

I wrote the code heat_2fast for the heat equation. By now I assume you are convinced that for loops are bad, so the code is written without them.
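The original heat_2fast is not reproduced here, but a minimal sketch of a loop-free version could look like the following (a reconstruction for illustration; the meaning of the arguments, final time T, time step dt and grid spacing h, is my guess from the call below, and it uses the expand/shrink helpers from the November 23 post for the reflecting boundary):

%==========================================================================

function u=heat_2fast(f, T, dt, h)

% Sketch of explicit heat-equation time stepping without for loops over
% pixels, using circshift for the 5-point Laplacian and expand/shrink for
% the reflecting (Neumann) boundary. (A reconstruction for illustration;
% the argument meanings T, dt and h are assumptions.)

%==========================================================================

u=double(f);
for t=0:dt:T
    U=expand(u);                                 % reflect the boundary
    L=(circshift(U,[0 1])+circshift(U,[0 -1])...
      +circshift(U,[1 0])+circshift(U,[-1 0])-4*U)/h^2;
    u=u+dt*shrink(L);                            % update the interior only
end

end % end of the function heat_2fast

%==========================================================================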

If you write the following in the command window:


>> u=heat_2fast(f, 1, 0.25, 1); figure; imshow(u/255);

Output of the heat equation at t=1.

Let us again try to view the image as the graph of a function:



>> v=rgb2gray(u/255);
>> mesh(v);

Result of the heat equation: it makes the image smooth!


The reason why heat equation is called a diffusion equation


Simply put, the heat equation diffuses the function f. As a matter of fact, it diffuses f isotropically, i.e. equally in all directions. Let us see what happens if we run the heat equation until t=1000. This is what I got as a result:


The result of heat equation after a long time

This really flattened things out, huh? If we see this as a graph of a function we get the following.
Heat equation FTW!

This is what happens if you run the heat equation for t=1000. If you keep it running longer, eventually things become really flat, like Kansas.

Thursday, November 24, 2011

A code faster than the speed of light ...

Well, maybe it's an exaggeration, but now that the speed of light can (maybe) be exceeded, we might as well use that phrase.

So we had fun with the Sobel filter yesterday. But remember, we used nested for loops. If you are serious about faster computations, you should always avoid for loops. Why not use the magical circshift command instead when coding the Sobel filter? A sketch of such a loop-free version is given below.
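Here is a minimal sketch of what a loop-free Sobel code could look like (a reconstruction that mirrors sobel_for from yesterday's post; it is not necessarily identical to the sobel code timed below):

%==========================================================================

function [A, ux, uy]=sobel(u, Abs)

% Sketch of a loop-free Sobel filter using circshift (a reconstruction for
% illustration; see sobel_for in yesterday's post for the loop version).

%==========================================================================

if nargin==1
    Abs=0;
end

u=expand(double(u));                     % reflect the boundary by one pixel

ux=( circshift(u,[-1 1])-circshift(u,[1 1])...
    +2*circshift(u,[-1 0])-2*circshift(u,[1 0])...
    +circshift(u,[-1 -1])-circshift(u,[1 -1]) )/8;

uy=( circshift(u,[1 -1])-circshift(u,[1 1])...
    +2*circshift(u,[0 -1])-2*circshift(u,[0 1])...
    +circshift(u,[-1 -1])-circshift(u,[-1 1]) )/8;

ux=shrink(ux); uy=shrink(uy);            % back to the original size

if Abs~=0
    A=(abs(ux)+abs(uy))/2;
else
    A=sqrt(ux.*ux+uy.*uy);
end

end % end of the function sobel

%==========================================================================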

Here is a comparison of the performance of Sobel filtering with and without using the for loops.
>> u=imread('barbara.png'); t1=cputime; [A1 ux1, uy1]=sobel_for(u); t1=cputime-t1


t1 =


    5.0500


>> u=imread('barbara.png'); t=cputime; [A ux, uy]=sobel(u); t=cputime-t


t =


    0.0700


>> t1/t


ans =


   72.1429

This is very nice: the new code is 72 times faster than the previous code. Me likey!

Smart display of images

The imshow command likes (double) images to be scaled from 0 to 1, so we rescale as follows:
m=min(min(min(A))); M=max(max(max(A))); figure; imshow((A-m)/(M-m));
m=min(min(min(ux))); M=max(max(max(ux))); figure; imshow((ux-m)/(M-m));
m=min(min(min(uy))); M=max(max(max(uy))); figure; imshow((uy-m)/(M-m));
Here are the output images:
Output of the Sobel filter

The x-component of the Sobel filter

The y-component of the Sobel filter



Wednesday, November 23, 2011

Sobel filter is a nice filter

I <3 Sobel filter

One of the first edge detection filters one encounters in image processing is the Sobel filter. It is an adorable filter. It consists of the following two kernels:
$$
S_x=\left[\begin{array}{ccc}-1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1\end{array}\right] \mbox{ and } S_y=\left[\begin{array}{ccc}-1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1\end{array}\right]
$$
(Remember our convention that the origin is at the top-left corner, $x$-axis is going down and $y$-axis is going to the right.)

You align the central zero of these operators on top of each image pixel, multiply the 9 pixels of the image with the 9 entries of the operator, and add the results. In other words, convolve the image $u$ with $S_x$ and $S_y$. The results $u*S_x$ and $u*S_y$ give the locations of the horizontal and vertical edges.

Another way to look at it is to observe the following:
\begin{align*}
&(u*S_x)(i, j)\\
&= u(i+1, j-1)-u(i-1, j-1)+2u(i+1, j)-2u(i-1, j)+u(i+1, j+1)-u(i-1, j+1)\\
&= 2\Big(\frac{u(i+1, j-1)-u(i-1, j-1)}{2}\Big)\\
& \, +4 \Big(\frac{u(i+1, j)-u(i-1, j)}{2}\Big)\\
& \, +2\Big(\frac{u(i+1, j+1)-u(i-1, j+1)}{2}\Big).\\
\end{align*}
It looks like there are three central difference schemes here.
Indeed,

$(u*S_x)(i, j)=2 D_xu(i, j-1)+4 D_xu(i, j) + 2 D_xu(i, j+1)$

and,

$(u*S_y)(i, j)=2 D_yu(i-1, j)+4 D_yu(i, j) + 2 D_yu(i+1, j),$

where $D_x$ and $D_y$ denote central difference schemes in the $x$- and $y$-directions respectively. I like to normalize these operators by dividing by 8, which makes them weighted averages of three central difference schemes, i.e.
\begin{align*}
(u*S_x)(i, j)&\equiv \frac{D_xu(i, j-1)+2 D_xu(i, j) + D_xu(i, j+1)}{4}, \\
\\
(u*S_y)(i, j)&\equiv \frac{D_yu(i-1, j)+2 D_yu(i, j) + D_yu(i+1, j)}{4}.
\end{align*}
For coding the Sobel filter one should be careful at the image boundary. I like to reflect the boundary. This is achieved by expanding the image u by one pixel, computing the convolution in the interior, and then shrinking the image back by one pixel.

So let's write the codes for expanding and shrinking the image. It would be useful to write these codes so that we could use them for expanding and shrinking by n pixels. Trust me, these codes will come in handy later.

Shrinking an image


%==========================================================================

function u=shrink(U, n)

% Author: Prashant Athavale
% Date: 11/23/2011
% Please acknowledge my name, if you use this code.
% This program shrinks the image U by n pixels by removing its boundary n times.

%==========================================================================

if nargin==1
    n=1;
end

[M, N, P]=size(U);
if P==1
    u=U(n+1:M-n, n+1:N-n);
elseif P==3
    R=U(:,:,1); r=R(n+1:M-n, n+1:N-n);
    G=U(:,:,2); g=G(n+1:M-n, n+1:N-n);
    B=U(:,:,3); b=B(n+1:M-n, n+1:N-n);
    u=cat(3, r, g);
    u=cat(3, u, b);
end

end % end of the function shrink

%==========================================================================



Remarks

There are a few good things to note here. The line u=U(n+1:M-n, n+1:N-n); gets rid of the n-pixel boundaries of a grayscale image U.

Can you write a code for expanding the image? (One possible version is sketched after the next remark.)

If it is an RGB image, then we have to do this separately for each channel and then concatenate the channels. We can use the command cat for this. The number 3 in cat(3, r, g) specifies the dimension along which the matrices are joined together.
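Here is one possible expand, for those who want to check their answer (my sketch, not necessarily the original; it uses padarray from the Image Processing Toolbox to reflect the boundary n times):

%==========================================================================

function u=expand(U, n)

% Sketch of an expand function: pads the image U by reflecting its boundary
% n times. (A possible version for illustration, written to mirror shrink
% above; the original expand may differ.)

%==========================================================================

if nargin==1
    n=1;
end

u=padarray(double(U), [n n], 'symmetric');

end % end of the function expand

%==========================================================================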


Sobel filter


Lastly, we code the Sobel filter itself.




%==========================================================================

function [A, ux, uy]=sobel_for(u, Abs)

% Author: Prashant Athavale
% Date: 11/23/2011
% Please acknowledge my name, if you use this code.
% This function performs sobel filtering.

%==========================================================================

if nargin==1
    Abs=0;
end

u=double(u);

% Let us expand the image u by 1 unit
u=expand(u);
[M, N, P]=size(u);

% preallocation of memory for faster implementation
ux=zeros(size(u));
uy=zeros(size(u));

% implementation of the for loop
t=cputime;
for i=2:M-1
    for j=2:N-1
        ux(i, j)=(u(i+1, j-1)-u(i-1, j-1)...
            +2*u(i+1, j)-2*u(i-1, j)...
            +u(i+1, j+1)-u(i-1, j+1))/8;

        uy(i, j)=(u(i-1, j+1)-u(i-1, j-1)...
            +2*u(i, j+1)-2*u(i, j-1)...
            +u(i+1, j+1)-u(i+1, j-1))/8;
    end
end

% Let us shrink the images back to the original size
ux=shrink(ux);
uy=shrink(uy);

if Abs~=0
    if P==1
        A=(abs(ux)+abs(uy))/2;
    else

        A=zeros(size(ux));
        for i=1:P
            A(:,:,i)=(abs(ux(:,:,i))+abs(uy(:,:,i)))/2;
        end
    end
else
    A=sqrt(ux.*ux+uy.*uy);
end

end % end of the function sobel_for

%==========================================================================



Ok, so I named this code sobel_for as I am using for loops. Can you code this without them? Does that make the code faster?


Results

For this filtering I used the following image:

Aishwarya Rai 

For the following code ...

u=imread('eyes.jpg');
A=sobel_for(u, 1);
figure; imshow(A/92);


 I got the following result ... 


Result of Sobel filtering







Tuesday, November 22, 2011

Image gradient with central difference is fun

We have computed image derivatives with forward finite differences. Let us now take partial derivatives with the central finite difference method.

$$\frac{\partial u}{\partial x}:=\frac{u(x+h, y)-u(x-h, y)}{2h} \quad\mbox{and}\quad \frac{\partial u}{\partial y}:=\frac{u(x, y+h)-u(x, y-h)}{2h}$$
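These derivative codes are one-liners. For example, a possible Dfx_central, following the blog's convention that the x-axis goes down the rows, with h = 1, might look like this (my sketch; note that circshift wraps the boundary, which the original codes may handle differently):

%==========================================================================

function ux=Dfx_central(u)

% Sketch of the central-difference derivative in the x-direction with h = 1
% (a possible version for illustration; circshift wraps the boundary).

%==========================================================================

u=double(u);
ux=(circshift(u,[-1 0])-circshift(u,[1 0]))/2;

end % end of the function Dfx_central

%==========================================================================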


Having written these derivative codes, one can use them to compute the image gradient with the central difference scheme.


Image gradient with central difference 


%==========================================================================

function [Gx, Gy]=gradient_central(u)

% Author: Prashant Athavale
% Date: 11/22/2011
% Please acknowledge my name if you use this code, thank you.

% This function computes the gradient of the image u.
% It uses central difference approximation of the derivatives 
% and thus the name gradient_central

%==========================================================================

if strcmp(class(u),'uint8')
    u=double(u);
end
Gx=Dfx_central(u);
Gy=Dfy_central(u);

end % end of the function gradient_central

%==========================================================================


%==========================================================================

function Gm=grad_central_magnitude(u)

% Author: Prashant Athavale
% Date: 11/22/2011
% Please acknowledge my name if you use this code, thank you.

% This function takes the image stored in u and gives the magnitude of the
% gradient

%==========================================================================

if strcmp(class(u),'uint8')
    u=double(u);
end
[Gx, Gy]=gradient_central(u);
Gm=sqrt(Gx.*Gx+Gy.*Gy);

end % end of the function grad_central_magnitude

%==========================================================================



Image gradient results with central difference

Here are the results of the above codes with the following commands in the command window


u=imread('barbara.png');

Gm=grad_central_magnitude(u);

ucx=Dfx_central(u); figure; imshow(ucx/255+0.4);

ucy=Dfy_central(u); figure; imshow(ucy/255+0.4);
figure; imshow(Gm/255+0.4);


Partial derivative in x-direction with central difference


Partial derivative in y-direction with central difference




Magnitude of the gradient with central difference


Our eyes are most sensitive to gradients

Our eyes, and our senses in general, are most sensitive to gradients. Maybe that is the reason why, when we are asked to draw a picture from memory, we tend to sketch primarily the edges. It is also the reason why it is difficult for a person from California to get adjusted to the winter in Canada.

Anyway, we are ready to talk about the gradient $(u_x, u_y)$ of an image $u$. We already have the code Dfx_plus for computing the partial derivative in the x-direction (a possible sketch is given below for reference).
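For reference, a possible Dfx_plus with h = 1 might look like this (my sketch; circshift wraps the last row, which the original may handle differently):

%==========================================================================

function ux=Dfx_plus(u)

% Sketch of the forward-difference derivative in the x-direction with h = 1
% (a possible version for illustration; circshift wraps the boundary).

%==========================================================================

u=double(u);
ux=circshift(u,[-1 0])-u;

end % end of the function Dfx_plus

%==========================================================================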


Image gradient 

Now let's compute the gradient and store it in matrices Gx and Gy. Here is the trivial code to do that.

%==========================================================================

function [Gx, Gy]=gradient_plus(u)

% Author: Prashant Athavale
% Date: 11/22/2011
% Please acknowledge my name if you use this code, thank you.

% This function computes the gradient of the image u.
% It uses forward approximation of the derivatives and thus the name
% gradient_plus

%==========================================================================

if strcmp(class(u),'uint8')
    u=double(u);
end
Gx=Dfx_plus(u);
Gy=Dfy_plus(u);
end

%==========================================================================

Magnitude of the image gradient


Let us now write another code for computing the magnitude of the gradient.

%==========================================================================

function Gm=grad_plus_magnitude(u)

% Author: Prashant Athavale
% Date: 11/22/2011
% Please acknowledge my name if you use this code, thank you.

% This function takes the image stored in u and gives the magnitude of the
% gradient

%==========================================================================

if strcmp(class(u),'uint8')
    u=double(u);
end
[Gx, Gy]=gradient_plus(u);
Gm=sqrt(Gx.*Gx+Gy.*Gy);
end

%==========================================================================


Remark 1: Pointwise operation

Note that in the above code you see the expression Gx.*Gx. Here the .* gives pointwise multiplication of the entries of the matrix Gx. This is useful, as once more we avoid a for loop, making things faster.

Let's implement the following in the command window.

>> u=imread('barbara.png');
>> Gm=grad_plus_magnitude(u);
>> figure; imshow(Gm/255+0.5);
>> edges=(Gm>15);
>> figure; imshow(edges);

Remark 2: Logical assignments

Note the line edges=(Gm>15). Here is another instance where we avoided a for loop. It creates a matrix edges of class logical, with edges(i, j)=1 if Gm(i, j) > 15. Write a code using a nested for loop to do this and see which one is faster; a quick timing sketch is given below.
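A quick way to compare the two (my sketch; the actual timings will depend on your machine):

>> t=cputime; edges1=(Gm>15); t_logical=cputime-t;
>> t=cputime; edges2=false(size(Gm));
>> for i=1:size(Gm,1), for j=1:size(Gm,2), edges2(i,j)=(Gm(i,j)>15); end, end
>> t_loop=cputime-t;
>> [t_logical t_loop]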


Results

Following are the images I got for Gm and edges as a result of the above code. This is how a gradient looks. I don't know about you, but I was very happy when I saw the gradient of an image for the first time.

Magnitude of the gradient in the image Barbara

Edges, defined as the points where gradient is more than 15

Partial derivatives with backward difference scheme


The partial derivatives with the backward difference scheme are given by

$$u_x:=\frac{u(x, y)-u(x-h, y)}{h} \quad\mbox{and}\quad u_y:=\frac{u(x, y)-u(x, y-h)}{h}.$$

I recommend that you code these as practice.
I recommend that you code these as a practice.