# How Is Image Edge Detection Done? - FRIENDOTECH

Millions of photos are captured every day, and to make them look better we usually apply filters to our photos. There are many apps available on Google Play or iOS that give us photos just the way we like them.

Nowadays almost every app owner advertises their app with one of the hottest buzzwords, "Artificial Intelligence". These editing apps are only a small example of AI, but they do their job pretty well.

These apps always do something that makes us wonder about their algorithms. Irrespective of the image's colors, these apps can easily detect the edges in an image and process it according to your needs. Even portrait-mode photography is done by following some fairly complex algorithms.

Haven't you ever wanted to know how these apps detect edges? The explanation is a bit complex, but after reading it you'll at least know the kind of algorithms these photo-editing apps follow.

Those who are Mathematics students might understand it easily, as these algorithms use *calculus*.

Consider a black-and-white image.

Here's a picture zoomed in on a small region.

As you can see, it contains many pixels. (Even if you can't see them, let me be clear that images are made up of small pixels.)

It can be **represented as a 2D matrix** with the following constraints:

- In ${A}_{ij}$, i represents the x coordinate of the pixel and j the y coordinate
- The top-left point has coordinates (0,0)
- x (i.e. i) increases moving right, and y (i.e. j) increases moving downward
- The value of ${A}_{ij}$ ranges from 0 to 255: 0 means black, 255 means white

So the **matrix** for this **small region** will be:
Now consider *only one row of the matrix*, i.e. something like this, represented as:

If we plot it on a graph, it will look like this.

In the graphs below:

- X axis: x coordinate of the pixel
- Y axis: value of the pixel

Remember: a high value means more white, a low value means more black, ranging from 0 to 255.

Now comes the interesting part:

What if we differentiate this graph, considering it a function y = f(x)? Here y is the pixel value ${A}_{ij}$ and x is the index i of the matrix (note that this y is different from the y coordinate of the pixel).

So let us **plot its derivative.** As you can see, at the point where the picture changes from white to black, the magnitude of the derivative suddenly increases.

What if we differentiate further, i.e. take the **double derivative**? You can see a sudden bump in the region of change.

Let us mark that point in the image row:

Now if we apply this to **all the rows and mark the points where the double derivative is high, the marks will run along the edges in the image.** Similarly, do it for all the columns and you will get the complete edges of the image.

**Part 2: Mathematical Implementation** (it's even more amazing than the above)

How can this differentiation be applied to images by a computer?

Mathematicians found an operation called **Convolution**; let me explain that first. *Consider a large NxN matrix and a small 3x3 matrix:*

Here the **dot product** of the small matrix is taken with every 3x3-sized part of the big matrix. Dot product means each element is **multiplied by its respective element** and the products are added up, e.g. 131*(-1), 162*0, 232*1, and so on. The result is saved in another matrix.

This process is called **Convolution**; here the 3x3 matrix is the kernel. It can be even larger, but the most used size is 3x3.

The phenomenon is that when a large matrix is convolved with a kernel, **regions similar to the kernel get highlighted** (their values increase) in the resultant matrix, whereas **non-similar regions become dark.**
The kernels used to approximate this differentiation of the image are called **Sobel kernels.**
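Written out explicitly (these are the same matrices that appear as `data1` and `data2` in the code below), the two Sobel kernels are:

$$G_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \qquad G_y = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}$$

$G_x$ responds to vertical edges (intensity changing along x) and $G_y$ to horizontal edges.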

*Part 3: C++-based OpenCV implementation*

```cpp
#include <opencv2/opencv.hpp>

using namespace cv;

// Sobel kernel for vertical edges (x derivative)
float data1[3][3] = {{-1, 0, 1},
                     {-2, 0, 2},
                     {-1, 0, 1}};
// Sobel kernel for horizontal edges (y derivative)
float data2[3][3] = {{ 1,  2,  1},
                     { 0,  0,  0},
                     {-1, -2, -1}};

int main()
{
    Mat frame;
    VideoCapture cap(0);                    // open the default camera
    Mat kernel1(3, 3, CV_32FC1, data1);
    Mat kernel2(3, 3, CV_32FC1, data2);
    kernel1 = kernel1 * 2;                  // amplify the response
    kernel2 = kernel2 * 2;
    while (true)
    {
        cap >> frame;
        cvtColor(frame, frame, COLOR_BGR2GRAY);
        Mat out1, out2, out;
        filter2D(frame, out1, -1, kernel1); // convolve with each kernel
        filter2D(frame, out2, -1, kernel2);
        out = out1 / 2 + out2 / 2;          // average the two responses
        threshold(out, out, 80, 255, THRESH_BINARY);
        imshow("Input", frame);
        imshow("Output", out);
        waitKey(1);
    }
    return 0;
}
```

**Output:**

*Many improvements can be made by removing noise, using Gaussian derivatives, or applying Canny edge detection.*

Colors are usually represented as **RGB** values (in OpenCV the channel order is BGR): *B -> Blue, G -> Green, R -> Red.* Edge detection is usually done after converting the colored image to black-and-white.

But there is one more method: the image can be **converted to the HSV color space.** Here H stands for **Hue**, S for **Saturation**, and V for **Value**; i.e. H tells the color, S the intensity, and V the brightness.
Source: **HSL and HSV - Wikipedia**
So if we take only H, or Hue, **we can find edges without being affected by shadows or lighting.**

**Post Credit: Bhanu Dutta Parashar (Quoran)**


**Hope this helped you understand how image edge detection is done.**

