YOLO — You Only Look Once


All of the previous object detection algorithms use regions to localize the object within the image. The network does not look at the complete image; instead, it looks at parts of the image that have high probabilities of containing the object. YOLO, or You Only Look Once, is an object detection algorithm quite different from the region-based algorithms above: in YOLO, a single convolutional network predicts both the bounding boxes and the class probabilities for those boxes.

YOLO
YOLO works by taking an image and splitting it into an S×S grid; within each grid cell, the network predicts m bounding boxes. For each bounding box, the network outputs a class probability and offset values for the box. Bounding boxes with a class probability above a threshold value are selected and used to locate the object within the image.
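To make the decoding step concrete, here is a minimal sketch of how such a grid output can be thresholded into detections. The shapes (S=7, B=2 boxes per cell, C=20 classes) follow the original YOLO paper, but the tensor layout and the random stand-in prediction are purely illustrative assumptions, not the exact network output format.

import numpy as np

S, B, C = 7, 2, 20
# Stand-in for a network output: each cell holds B boxes of
# (x, y, w, h, confidence) followed by C class probabilities.
prediction = np.random.rand(S, S, B * 5 + C)

threshold = 0.5
detections = []
for row in range(S):
    for col in range(S):
        cell = prediction[row, col]
        class_probs = cell[B * 5:]
        for b in range(B):
            x, y, w, h, conf = cell[b * 5: b * 5 + 5]
            scores = conf * class_probs          # class-specific confidence
            best = int(np.argmax(scores))
            if scores[best] > threshold:
                detections.append((row, col, x, y, w, h, best, scores[best]))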
YOLO is orders of magnitude faster (45 frames per second) than other object detection algorithms. A limitation of the YOLO algorithm is that it struggles with small objects within the image; for example, it may have difficulty detecting a flock of birds. This is due to the spatial constraints of the algorithm.
Classification and object detection are the main parts of computer vision. Classification is finding what is in an image, while object detection and localisation is finding where that object is in the image. Detection is a more complex problem to solve, as we need to find the coordinates of the object in the image.
To solve this problem, R-CNN was introduced by Ross Girshick, Jeff Donahue, Trevor Darrell and Jitendra Malik in 2014. R-CNN stands for Regions with CNN. In R-CNN, instead of running classification on a huge number of regions, we pass the image through selective search, take the first 2000 region proposals from the result, and run classification on those. In this way, instead of classifying a huge number of regions, we only need to classify the first 2000. This makes the algorithm fast compared to previous object detection techniques. There are 4 steps in R-CNN:

  1. Pass the image through selective search and generate region proposals.

  2. Calculate the IOU (intersection over union) of each proposed region with the ground truth data and add labels to the proposed regions.

  3. Do transfer learning using the proposed regions with the labels.

  4. Pass the test image through selective search, then pass the first 2000 proposed regions to the trained model and predict the class of those regions.


R-CNN
I am going to implement the full R-CNN from scratch in Keras, using the airplane dataset from http://www.escience.cn/people/JunweiHan/NWPU-RESISC45.html. The annotated dataset can be downloaded from the link below; the code for the implemented R-CNN can also be found in the below-mentioned repository.
Once you have downloaded the dataset you can proceed with the steps written below.
import os
import cv2
import keras
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
The first step is to import all the libraries needed to implement R-CNN. We need cv2 to perform selective search on the images. To use selective search we need the opencv-contrib-python package; to install it from PyPI, just run pip install opencv-contrib-python in the terminal.
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
After installing opencv-contrib we need to initialise selective search, which is what the step above does.
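Before building the dataset, it can help to see what selective search actually returns. The snippet below is a quick, purely illustrative check (sample.jpg is a placeholder file name): it runs fast selective search on one image and draws the first 100 proposed rectangles.

img = cv2.imread("sample.jpg")            # placeholder path, replace with a real image
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()
rects = ss.process()                      # proposals as (x, y, w, h)
print(len(rects), "region proposals")
out = img.copy()
for (x, y, w, h) in rects[:100]:          # draw the first 100 proposals
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("proposals.jpg", out)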
def get_iou(bb1, bb2):
    # Boxes are dicts with keys x1, y1 (top-left) and x2, y2 (bottom-right)
    assert bb1['x1'] < bb1['x2']
    assert bb1['y1'] < bb1['y2']
    assert bb2['x1'] < bb2['x2']
    assert bb2['y1'] < bb2['y2']

    # Coordinates of the intersection rectangle
    x_left = max(bb1['x1'], bb2['x1'])
    y_top = max(bb1['y1'], bb2['y1'])
    x_right = min(bb1['x2'], bb2['x2'])
    y_bottom = min(bb1['y2'], bb2['y2'])

    if x_right < x_left or y_bottom < y_top:
        return 0.0  # the boxes do not overlap

    intersection_area = (x_right - x_left) * (y_bottom - y_top)
    bb1_area = (bb1['x2'] - bb1['x1']) * (bb1['y2'] - bb1['y1'])
    bb2_area = (bb2['x2'] - bb2['x1']) * (bb2['y2'] - bb2['y1'])

    # IOU = intersection / union
    iou = intersection_area / float(bb1_area + bb2_area - intersection_area)
    assert iou >= 0.0
    assert iou <= 1.0
    return iou
Now we define the function to calculate the IOU (Intersection Over Union) between a ground-truth box and a box computed by selective search. To understand more about calculating IOU you can refer to the link below.
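As a quick worked example with made-up coordinates: two 100×100 boxes offset by 50 pixels overlap in a 50×50 region, so the intersection is 2500, the union is 10000 + 10000 − 2500 = 17500, and the IOU is about 0.14.

box_a = {"x1": 0, "y1": 0, "x2": 100, "y2": 100}
box_b = {"x1": 50, "y1": 50, "x2": 150, "y2": 150}
print(get_iou(box_a, box_b))   # 2500 / 17500 ≈ 0.143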
train_images = []
train_labels = []

# 'annot' is the folder with the CSV annotation files and 'path' the folder
# with the corresponding .jpg images.
for e, i in enumerate(os.listdir(annot)):
    try:
        if i.startswith("airplane"):
            filename = i.split(".")[0] + ".jpg"
            print(e, filename)
            image = cv2.imread(os.path.join(path, filename))
            df = pd.read_csv(os.path.join(annot, i))
            # Collect the ground-truth boxes for this image
            gtvalues = []
            for row in df.iterrows():
                x1 = int(row[1][0].split(" ")[0])
                y1 = int(row[1][0].split(" ")[1])
                x2 = int(row[1][0].split(" ")[2])
                y2 = int(row[1][0].split(" ")[3])
                gtvalues.append({"x1": x1, "x2": x2, "y1": y1, "y2": y2})
            # Run fast selective search on the image
            ss.setBaseImage(image)
            ss.switchToSelectiveSearchFast()
            ssresults = ss.process()
            imout = image.copy()
            counter = 0        # positive samples collected so far
            falsecounter = 0   # negative samples collected so far
            flag = 0
            fflag = 0
            bflag = 0
            for e, result in enumerate(ssresults):
                if e < 2000 and flag == 0:
                    for gtval in gtvalues:
                        x, y, w, h = result
                        iou = get_iou(gtval, {"x1": x, "x2": x + w, "y1": y, "y2": y + h})
                        if counter < 30:
                            # High overlap with ground truth -> positive sample
                            if iou > 0.70:
                                timage = imout[y:y + h, x:x + w]
                                resized = cv2.resize(timage, (224, 224), interpolation=cv2.INTER_AREA)
                                train_images.append(resized)
                                train_labels.append(1)
                                counter += 1
                        else:
                            fflag = 1
                        if falsecounter < 30:
                            # Low overlap -> negative (background) sample
                            if iou < 0.3:
                                timage = imout[y:y + h, x:x + w]
                                resized = cv2.resize(timage, (224, 224), interpolation=cv2.INTER_AREA)
                                train_images.append(resized)
                                train_labels.append(0)
                                falsecounter += 1
                        else:
                            bflag = 1
                        if fflag == 1 and bflag == 1:
                            # Enough positives and negatives from this image
                            print("inside")
                            flag = 1
    except Exception as e:
        print(e)
        print("error in " + filename)
        continue

Running selective search on an image and getting proposed regions
The above code pre-processes the data and creates the dataset to pass to the model. In this case we have 2 classes: whether the proposed region is foreground (i.e. airplane) or background. So we set the label of foreground (i.e. airplane) to 1 and the label of background to 0. The following steps are performed in the above code block.

  1. Loop over the image folder and set each image one by one as the base for selective search using ss.setBaseImage(image).

  2. Initialise fast selective search and get the proposed regions using ss.switchToSelectiveSearchFast() and ssresults = ss.process().

  3. Iterate over the first 2000 results returned by selective search and calculate the IOU of each proposed region against the annotated region using the get_iou() function created above.

  4. Since one image can have many negative samples (i.e. background) and only a few positive samples (i.e. airplane), we need to make sure we have a good proportion of both positive and negative samples to train our model. Therefore we collect a maximum of 30 negative samples (i.e. background) and 30 positive samples (i.e. airplane) from one image.

After running the above code snippet our training data will be ready. The list train_images will contain all the image crops and train_labels will contain all the labels, marking airplane regions as 1 and non-airplane (i.e. background) regions as 0.
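As a quick illustrative check (not part of the original pipeline), you can inspect the class balance that the loop collected:

from collections import Counter

print(Counter(train_labels))                      # counts of background (0) vs airplane (1)
print(len(train_images), train_images[0].shape)   # N crops, each (224, 224, 3)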

Positive sample on right, Negative sample on left
X_new = np.array(train_images)
y_new = np.array(train_labels)
After completing the process of creating the dataset, we convert the lists to numpy arrays so that we can traverse them easily and pass the dataset to the model in an efficient way.
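A one-line sanity check on the resulting shapes (N is the number of collected crops):

print(X_new.shape, y_new.shape)   # e.g. (N, 224, 224, 3) and (N,)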
from keras.layers import Dense
from keras import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from keras.applications.vgg16 import VGG16
vggmodel = VGG16(weights='imagenet', include_top=True)
Now we will do transfer learning using the ImageNet weights. We import the VGG16 model and load the ImageNet weights into it. To learn more about transfer learning you can refer to the article at the link below.
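As a sketch of where this is heading: one common way to adapt VGG16 to our 2-class problem (airplane vs. background) is to freeze the early layers and replace the final 1000-class ImageNet layer with a 2-class softmax head. The freeze depth of 15 layers and the learning rate below are illustrative choices, not fixed requirements.

for layer in vggmodel.layers[:15]:        # freeze the early convolutional layers
    layer.trainable = False

x = vggmodel.layers[-2].output            # output of the last 4096-unit Dense layer
predictions = Dense(2, activation="softmax")(x)   # 2-class head: airplane vs. background
model_final = Model(inputs=vggmodel.input, outputs=predictions)
model_final.compile(
    loss="categorical_crossentropy",
    optimizer=Adam(learning_rate=0.0001),  # older Keras versions use lr= instead
    metrics=["accuracy"],
)
model_final.summary()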
