OpenCV Cascade Image Testing with Python

This post covers testing the cascade trained by OpenCV.  It assumes you have read all the previous posts of the clover project, so a lot is taken for granted here.  I am working on the Raspberry Pi that was set up in an earlier post, and I am using cascade files from the OpenCV training covered in a previous post.

The purpose of this process is to see how well the cascade recognizes objects it WAS trained with.  The same method can be used to see how well the cascade recognizes non-trained images, but for this post I am only concerned with the training images.

I set up a total of 10 images from the training collection.  One of the images has many samples placed into it, hence the name TestAll.jpg.  Below are the images I used for this post.  These are un-prepped images (the way they were before I cropped them for training).  Some are negative images.

The purpose of this project and this post is not to teach Python.  Python is a great scripting/interpreted language and serves our testing purpose well.  I have included the code I used below and will briefly discuss it.  Basically, I wanted to see the difference between HAAR- and LBP-trained cascades for the clover project.

import cv2
import numpy as np


clover_cascadeHAAR = cv2.CascadeClassifier('cascade5.xml')
clover_cascadeLBP = cv2.CascadeClassifier('cascadeNLBP16.xml')

scaleHAAR = 3
neighborsHAAR = 50

scaleLBP = 1.1
neighborsLBP = 5

file_names = ['test1.jpg']  # add the remaining test images here
for file_name in file_names:
    imgTest = cv2.imread(file_name, cv2.IMREAD_COLOR)
    img600x600 = cv2.resize(imgTest, (600, 600))

    imgHAAR = cv2.cvtColor(img600x600, cv2.COLOR_BGR2GRAY)
    imgLBP = cv2.cvtColor(img600x600, cv2.COLOR_BGR2GRAY)

    cloverHAAR = clover_cascadeHAAR.detectMultiScale(imgHAAR, scaleFactor=scaleHAAR, minNeighbors=neighborsHAAR)
    cloverLBP = clover_cascadeLBP.detectMultiScale(imgLBP, scaleFactor=scaleLBP, minNeighbors=neighborsLBP)

    # Draw a rectangle around each detection
    for (x, y, w, h) in cloverHAAR:
        cv2.rectangle(imgHAAR, (x, y), (x + w, y + h), (0, 255, 0), 2)

    for (x, y, w, h) in cloverLBP:
        cv2.rectangle(imgLBP, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Label each image with the cascade that produced it
    cv2.putText(imgHAAR, 'HAAR Cascade', (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.putText(imgLBP, 'LBP Cascade', (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)

    # Uncomment if you prefer to see the images instead of saving them
    #cv2.imshow('img HAAR', imgHAAR)
    #cv2.imshow('img LBP', imgLBP)
    #cv2.waitKey(0)

    cv2.imwrite('_HAAR_' + file_name, imgHAAR)
    cv2.imwrite('_LBP_' + file_name, imgLBP)


This program loads two cascade files, one HAAR and one LBP.  It applies each cascade's detect method to the test images, adds a label to each image stating which method was used, and lastly saves the new image.  The resulting images are displayed below:

You can tell from these sets that the LBP cascade has had much more training, so this is not a fair test between the two.  Take time to look over the image sets and learn: note what each cascade did or didn't detect.  There are false positives, and some four-leaf clovers were missed.
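
One way to explore the false-positive tradeoff is to sweep `minNeighbors` and watch the detection count fall as the setting gets stricter.  A sketch of that loop, under the assumption that `detect` is any callable with `detectMultiScale`'s keyword interface (e.g. `clover_cascadeLBP.detectMultiScale`); the stand-in detector below only illustrates the shape of the sweep:

```python
def sweep_min_neighbors(detect, image, values):
    """Run the same detector with several minNeighbors settings.
    Returns {setting: number of detections reported}."""
    return {n: len(detect(image, scaleFactor=1.1, minNeighbors=n))
            for n in values}

# Stand-in detector: pretends stricter settings reject more candidates.
def fake(image, scaleFactor, minNeighbors):
    return [(0, 0, 10, 10)] * max(0, 10 - minNeighbors)

counts = sweep_min_neighbors(fake, None, [5, 8, 12])
print(counts)  # {5: 5, 8: 2, 12: 0}
```

Running the real cascades through a sweep like this makes the false-positive behavior visible as numbers rather than just boxes on an image.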

The LBP cascade doesn't get much better than what is shown here.  It is up to you (and me), as the programmer/scientist, to optimize what we have.  The optimization will be written in C++ in a later post.  I feel it is important to see these images and the process.  Building generic object recognition (like recognizing any four-leaf clover) is messy.  Unlike a stop sign, which always looks the same, a clover does not: the four leaves may or may not be the same size, some are well rounded while others are more slender, and both the coloring and the lighting can vary.

Think about the challenges and how you would go about approaching them.  In a future post or possibly several future posts, I will discuss a couple of the ways I approached it and how well they worked or didn’t work.

As motivation, the next post includes a video from future work (future relative to where we are in the process at this point).  The video, though short, shows what it looks like when a methodology is applied.