Video of Clover Catch Filter Applied

I am jumping ahead a bit in our series, but this is to provide motivation as we apply a methodology and work toward creating your own algorithm.

The following video was taken with my cell phone, so I apologize for the bouncing.  The first half of the video shows the application using the default cascade file on the raw video feed (except that the video was converted to grayscale for quicker mapping, then mapped back to the live color).  In the second half, you will see a dramatic change as I apply my form of video prep along with my first attempt at a video algorithm.  If a picture is worth a thousand words, then a video must be worth a novel, as these results speak for themselves.

It is exciting, and I encourage you to experiment and research your options. Also, and very importantly, allow yourself to fail many times!

If this type of result doesn’t excite you, you probably shouldn’t be going through my series!  Honestly, folks, this is like getting that hole in one on a par 3.  It is what keeps you in the game after missing it all those other times.

OpenCV Cascade Image Testing with Python

This post will cover testing the cascade trained by OpenCV.  It assumes you have read all the previous posts of the clover project; a lot is taken for granted here.  I am working on the Raspberry Pi that was set up in an earlier post, and I am using cascade file(s) from the OpenCV training covered in a previous post.

The purpose of this process is to see how the cascade recognizes objects it WAS trained with.  The same method can be used to see how well the cascade recognizes non-trained images, but for this post, I am only concerned with the training.

I set up a total of 10 images from the training collection.  One of the images has many samples placed into it, and is thus called TestAll.jpg.  Below are the images I used for this post.  These are un-prepped images (the way they were before I cropped them for training).  Some are negative images.

The purpose of this project and this post is not to teach Python.  Python is a great scripting/interpreted language and serves our testing purpose well.  I have included the code I used below and will briefly discuss it. Basically, I wanted to see the difference between HAAR- and LBP-trained cascades for the clover project.


import cv2

print('started')

# Load the two trained cascades
clover_cascadeHAAR = cv2.CascadeClassifier('cascade5.xml')
clover_cascadeLBP = cv2.CascadeClassifier('cascadeNLBP16.xml')

scaleHAAR = 3
neighborsHAAR = 50

scaleLBP = 1.1
neighborsLBP = 5

file_names = ['test1.jpg', 'test2.jpg', 'test3.jpg', 'test4.jpg', 'test5.jpg',
              'test6.jpg', 'test7.jpg', 'test8.jpg', 'test9.jpg', 'TestAll.jpg']

for file_name in file_names:
    imgTest = cv2.imread(file_name, cv2.IMREAD_COLOR)
    img600x600 = cv2.resize(imgTest, (600, 600))

    # Work in grayscale; one copy per cascade so the drawn boxes stay separate
    imgHAAR = cv2.cvtColor(img600x600, cv2.COLOR_BGR2GRAY)
    imgLBP = cv2.cvtColor(img600x600, cv2.COLOR_BGR2GRAY)

    cloverHAAR = clover_cascadeHAAR.detectMultiScale(imgHAAR, scaleFactor=scaleHAAR, minNeighbors=neighborsHAAR)
    cloverLBP = clover_cascadeLBP.detectMultiScale(imgLBP, scaleFactor=scaleLBP, minNeighbors=neighborsLBP)

    # Draw a rectangle around each detection
    for (x, y, w, h) in cloverHAAR:
        cv2.rectangle(imgHAAR, (x, y), (x+w, y+h), (0, 255, 0), 2)

    for (x, y, w, h) in cloverLBP:
        cv2.rectangle(imgLBP, (x, y), (x+w, y+h), (0, 255, 0), 2)

    cv2.putText(imgHAAR, 'HAAR Cascade', (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.putText(imgLBP, 'LBP Cascade', (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)

    # Uncomment if you prefer to see them instead of save them
    #cv2.imshow('img HAAR', imgHAAR)
    #cv2.imshow('img LBP', imgLBP)
    #cv2.waitKey(0)

    cv2.imwrite('_HAAR_' + file_name, imgHAAR)
    cv2.imwrite('_LBP_' + file_name, imgLBP)
    print(file_name)

print('finished')

This program uses two cascade files, one for HAAR and one for LBP.  It then applies each cascade's detect method to the test images, adds a label to each image stating which method was used, and finally saves the new image.  The images are displayed below:

You can tell from these sets that the LBP cascade has had much more training, so this is not a fair test between the two.  Take time to look over the image sets and learn: see what it did or didn't detect.  There are false positives, and some four leaf clovers were missed.

The LBP doesn't get much better than what is shown here.  It is up to you (and me) as the programmer/scientist to optimize what we have.  The optimization will be written in C++ in a later post.  I feel it is important to see these images and the process.  Trying to build generic object recognition (like any four leaf clover) is messy.  Unlike a stop sign, which is always the same, a clover is not.  The four leaves could be the same size or not; some can be well rounded, others more slender.  The coloring can vary, and so can the lighting.

Think about the challenges and how you would go about approaching them.  In a future post or possibly several future posts, I will discuss a couple of the ways I approached it and how well they worked or didn’t work.

As motivation, the next post includes a video from future work (work that is ahead of where we are in the process at this point).  The video, though short, shows what it is like when a methodology is applied.

Setup OpenCV on Raspberry Pi 3 B

Raspberry Pi is a hobbyist's dream: a small, compact computer board for under $40.  Yes!  Under $40.  Below is a professional product image:

For the clover project, I set up four Raspberry Pis.  I am a DIY type of person and I love doing a lot with a little (money-wise).  Thus, my setup is pictured below.  Please note the wonderful, high-quality case work!

These are Raspberry Pi 3 B boards, which have built-in wireless connectivity and 100 Mbps Ethernet, super nice for under $40. IKR!

If you want to set up virtual Python and OpenCV on Raspbian Jessie, the best instructions I have seen are here.  Adrian Rosebrock has some excellent Python and OpenCV resources for the Raspberry Pi.  His site is worth bookmarking, especially if you are a beginner.  If you do use Adrian's instructions (and they are nice; he sets up a nice virtual development environment) and wish to run Python IDLE in the virtual environment, a handy command to open it is:

python -m idlelib

This command must be run in the virtual environment.  It opens Python IDLE with all the configuration of the virtual development environment.  If you wish to use OpenCV with Python on the Raspberry Pi, I suggest Adrian's post as your best direction.  My purpose was to use the Raspberry Pi for training and some testing, so my walk through below may not meet all your needs if you're a heavy Python user.

My approach is simple.  Ubuntu and Raspbian Jessie (the Raspberry Pi flavor of Linux) have much in common (both are Debian-based).  Therefore I used the same steps I used in setting up OpenCV on Ubuntu.  I will walk you through them again here with added emphasis and a few changes for the Raspberry Pi.

If you plan to use the Raspberry Pi for any cascade training, you will discover the 1 GB limit is, well… limiting, especially for cascade training.  I worked around it using zRam; the GitHub link for it is here, and I have added the information below.  It doesn't exactly add RAM to the Raspberry Pi, but it does give you extra compressed swap space at the cost of some CPU time.  The end result is roughly 2 GB of usable memory for a program instead of 1 GB, and this makes all the difference in the world for cascade training.  You will not be able to do HPC-style training, but you will be able to use a $40 piece of hardware to do training!  Who cares if it takes two to five times as long; this is development, research, and hobby, not Wall Street.

Script to enable zram for raspberry pi

Download the script and copy to /usr/bin/ folder

sudo wget -O /usr/bin/zram.sh https://raw.githubusercontent.com/novaspirit/rpi_zram/master/zram.sh

make file executable

sudo chmod +x /usr/bin/zram.sh

edit /etc/rc.local file to run script on boot

sudo nano /etc/rc.local

add line before exit 0

/usr/bin/zram.sh &

When you reboot your Raspberry Pi, you can use top or htop to see the newly available memory in swap.  Very nice for times when the 1 GB limitation would not allow a program to run.
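To confirm the extra swap is active after a reboot, a quick check from the terminal (assuming the zram.sh script ran from rc.local) might look like this:

```shell
# List active swap devices; with zram working you should see /dev/zram
# entries here alongside any default swap file.
cat /proc/swaps

# Show totals in megabytes; the Swap line reflects the added zram space.
free -m
```
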

Setting Up OpenCV on Raspberry PI

Login to your raspberry.  Open a terminal and run

sudo apt-get update

sudo apt-get upgrade

This will update your server to the latest state.  On the Raspberry Pi, these steps can take a bit longer; the total process took me about 2 hours, with the largest amount of time spent waiting for OpenCV to compile.  If you are using SSH or VNC, you can work through the process while doing other things (multi-task)… this concept makes your current or future employers very happy!

Next, create a directory called MyWorkspace.  I created mine in the /home/pi directory.

sudo mkdir MyWorkspace

Change directory into the MyWorkspace directory and run the following command. If you do not have git installed, you can install it by typing "sudo apt-get install git" and pressing return.  Don't forget to use sudo; Raspbian is a stickler about rights and permissions.  You will notice in the screen capture below that I left this command out of the walk through.

sudo git clone https://github.com/Itseez/opencv.git

My current screen shot:

 

Next, install the necessary libraries and compilers by running the following commands.

sudo apt-get install build-essential

sudo apt-get install cmake libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev

sudo apt-get install python-dev python-numpy libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev

sudo apt-get install libopencv-dev

That’s it… you have successfully installed OpenCV on your Raspberry PI!  If you want to verify the installation, type in opencv_createsamples at the command prompt.  You should see a list of options when using opencv_createsamples as shown below.

BUT, OpenCV has not been installed to use with Python yet.  That requires a little more effort and a decent amount of processing time.

In the MyWorkspace/opencv directory, make a directory named build and cd into it.   Then run the following command:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local ..

This will take a couple of minutes to run.  Note: there are other components one can install for OpenCV; I only needed the base for what I was doing, so I used the minimum.  Again, if you need a full installation, I suggest Adrian's instructions mentioned before.

Next, we build.  This will take a long time, probably over an hour.  Adrian's post gives information on how to use all 4 cores for the build.  This option has the possibility (though it never happened to me) of failing.  If it does, simply drop the 4-core option (-j4).  The make command:

sudo make -j4

After a long compile, you will need to run

sudo make install

to place the files in their proper location.  Then run

sudo ldconfig

Lastly, you will need to change to the following directory:

cd /usr/local/lib/python3.4/dist-packages/

and rename the cv2.cpython-34m.so file to cv2.so.  This is done with the following command:

sudo mv cv2.cpython-34m.so cv2.so

At this point, you can import cv2 into Python as shown in the screen capture below:

Lastly, let me re-emphasize, this was to meet my needs for this project.  If you need more bells and whistles with OpenCV for Python on Raspberry, I strongly suggest you follow Adrian’s post.

OpenCV Cascade Training Part 3

It is now time to train the cascade!  Negative images have been prepared, positive images created, and the vector file generated in preparation.  To train the cascade, the opencv_traincascade command is used.  A data folder should be created in the workspace to store the training data.  Each stage places a temporary XML file in this directory, and at the end of training, all the stage information is combined into a single cascade.xml file.  It is important to keep the stage XML files: if something happens while the training is running, they provide a "save point" in the process.  So if 5 stages were complete and the computer froze, you would be able to pick up at stage 5 (and this can save you many hours of training).

The command with arguments I used:

 opencv_traincascade -data data -vec all.vec -bg bg.txt -numPos 20000 -numNeg 10000 -numStages 21 -w 50 -h 50

This will take a LONG time with this many samples and stages.  On my computer (12 GB RAM, 5 dedicated CPUs) this took over 700 hours of processing time!  I suggest you start with a small number of positive samples.  You may want to reduce the sample size to 20×20 instead of 50×50.  If your computer doesn't have a lot of memory, you will have no choice but to use smaller image sizes.
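If memory is the constraint, opencv_traincascade also accepts the -precalcValBufSize and -precalcIdxBufSize flags (both in MB), which cap its two precalculation buffers.  Here is a hedged sketch of a smaller run; the sample counts, window size, and buffer values are illustrative, not the settings used above:

```shell
# Illustrative smaller run: 20x20 windows and capped buffers (values in MB).
# Guarded so the command only executes where the tool and data actually exist.
cmd="opencv_traincascade -data data -vec all.vec -bg bg.txt \
  -numPos 2000 -numNeg 1000 -numStages 10 -w 20 -h 20 \
  -precalcValBufSize 512 -precalcIdxBufSize 512"
echo "$cmd"
if command -v opencv_traincascade >/dev/null && [ -f all.vec ]; then
  eval "$cmd"
fi
```

Lowering the buffer sizes trades training speed for a smaller memory footprint, which matters on machines like the 1 GB Raspberry Pi.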

Once you have your training complete, in the data directory will be files like shown below.

Note:  I have several cascadeX.xml files instead of one cascade.xml file.  This is because I stopped the process several times and "checked" the accuracy of the run, so I could monitor the success/failure of the training and make any adjustments if necessary.  What adjustments?  Things like better training images, the max angles used to create samples, etc.  After 5 stages (about 70 hours per stage), the test recognition looked like the images below.  (We will discuss how to program and run these tests in a different post.)

The same cascade file used in a live video feed of the test is seen below:

As can be seen, the training is going well at only 5 stages in.  It should be noted all images in this set were used for training.  Thus far we have not tested the training against an untrained image or set.

The amount of time needed to train a cascade file seems extreme.  It was at this point (stage 5 of HAAR training) that I decided to try training the same set of positive images with another OpenCV option: LBP.  Use the same training call, but add the -featureType argument.

opencv_traincascade -data data -vec all.vec -bg bg.txt -featureType LBP -numPos 20000 -numNeg 10000 -numStages 21 -w 50 -h 50

This method is not necessarily as accurate as HAAR but is MUCH faster to train.  In fact, I ran 17 stages of training with LBP in under 72 hours!  The HAAR training took 72 hours for EACH stage.  A tremendous difference.  Below are the same images as above, but with the LBP option.

and the LBP video capture after 5 stages.

(My apologies for the low-quality image.)  You will note it is doing as well as or better than the HAAR.  The busy gray LBP photo is because the scan size is large enough to pick up the same positive area multiple times; thus there are two boxes around some positive hits.  But overwhelmingly, the LBP is working better for the clovers than the HAAR.  Below are 9 tests with the HAAR results on the left and LBP results on the right.

 

Test 1
Test 2
Test 3
Test 4
Test 5
Test 6
Test 7
Test 8
Test 9

In test 2, HAAR picked up the four leaf clover in the lower left corner and LBP did not.  However, it is difficult to say for sure whether HAAR truly picked up the four leaf, since it is not in the center of the box; nonetheless, it is marked.

In test 4, the LBP is much cleaner.

In test 9, LBP picked up the clover in the center and HAAR did not.

All HAAR tests were run with the same settings, and all LBP tests were run with the same settings.  Both HAAR and LBP were optimized to best meet a general scan.  The images in the test were untrained images.

Overall, my opinion is that LBP is a better option for the clovers.  It is a fast trainer with equal or better performance.  For the cascade xml file, the LBP will be used.  I suggest you run a test on your images and do the same type of comparison.  HAAR and LBP take a different approach (for detailed information, search for their published papers).  LBP brings a huge advantage in the speed of training.

OpenCV Cascade Training Part 2

Creating Positive Samples

When gathering and creating sample images, it is important (unless you desire to do a LOT of work) to crop your samples so they are the same size.  They also need to contain the largest possible sample within the frame: a 50×50 sample does not work properly if it only has a 20×20 clover in it.  If you choose not to do it this way, you have to mark the location with coordinates for every sample file… not happening in my world; I have better things to do.  So, I cropped my base set of 50 images to 50×50 and then created the samples using the method described below.

The positive sample images (the ones you manually created) should be placed in the MyWorkspace directory.  I named my positive images 1x.jpg, 2x.jpg, … 50x.jpg.   Once the directory structure is set up and all the positive samples are added, the following command sequence will need to be run for EACH of the sample images.  In my case, the command for creating the positive images for 1x.jpg is:

opencv_createsamples -img 1x.jpg -bg bg.txt -info info/info1.lst -pngoutput info -maxxangle 0.25 -maxyangle 0.25 -maxzangle 0.25 -num 2000

Note the 1x.jpg and the info1.lst; these need to be changed each time to match the number of the positive sample.  For example, the next command would be:

opencv_createsamples -img 2x.jpg -bg bg.txt -info info/info2.lst -pngoutput info -maxxangle 0.25 -maxyangle 0.25 -maxzangle 0.25 -num 2000

This will create files info1.lst through info50.lst in the info directory, along with all the sample images used for training.  I realize this could be placed into a batch script; please feel free to follow that route.  I simply used the up arrow and completed them in one sitting while doing other work.  But to keep the love going, I created a batch file, CreateImageBatch, to get you started.  Do remember to save it as an executable .sh file before running it.  To make it executable, use the command "sudo chmod +x filename.sh", where filename is the name you give your .sh file.
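The CreateImageBatch file itself isn't reproduced here, but a minimal sketch of the idea (assuming the 1x.jpg … 50x.jpg naming above) is a loop like this:

```shell
#!/bin/sh
# Run opencv_createsamples once per positive image (1x.jpg .. 50x.jpg),
# writing a matching info/infoN.lst each time. Echoes each command and
# only executes it where the OpenCV tools are actually installed.
for i in $(seq 1 50); do
  cmd="opencv_createsamples -img ${i}x.jpg -bg bg.txt -info info/info${i}.lst \
    -pngoutput info -maxxangle 0.25 -maxyangle 0.25 -maxzangle 0.25 -num 2000"
  echo "$cmd"
  command -v opencv_createsamples >/dev/null && eval "$cmd" || true
done
```
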

The images in the info folder are created automatically, and the location of the positive sample is mapped into the info file.  THANK YOU, CREATORS OF OpenCV!!  This makes life super easy.  A sampling of my images looks like this:

The info list files need to be concatenated into a single all.lst file.   Below is a quick screen shot of my folder contents.  (I moved the .lst files to MyWorkspace for simplicity; they do not have to be moved. If you do move them, the all.lst file must be copied back to the info directory before running the classifier.)

Once you have all the .lst files in the same directory, you can run the following command in the same directory and it will concatenate all the files for you into a single file named all.lst.

cat *.lst > all.lst

Next, we need to create the positive vector file.  This is the output file containing the positive samples for training.  I used a 50×50 size for training.  This is a rather large size, and depending on your system's configuration you could run out of memory.  For beta testing, 20×20 usually works well.  It is important to use the same size when training the cascade, so make a note of what size you use.  To create the vector file for my images, I used the following command:

opencv_createsamples -info info/all.lst -num 50000 -w 50 -h 50 -vec all.vec

At this point, we have 50,000 positive sample images created in the info directory, the bg.txt file containing information about the negative images, the all.lst file containing information about positive images, and the all.vec file used by the trainer.  We are now ready to train our application… in other words, build the much sought after cascade xml file for object detection!  That will be discussed in part 3.

OpenCV Cascade Training Part 1

This post assumes you have OpenCV installed on your computer as described in Installing OpenCV on Ubuntu.  If you have not, I highly recommend you go back and ensure you have all the proper settings.  The information below shows the cascade classifier training applied to a real life scenario.  The full OpenCV documentation for the classifier is here, should you need more information.

The Plan

You will need a lot of images.  Really… a lot of images, unless you only desire one particular item to be recognized.  As I researched the best way to train for a "generic" four leaf clover, I realized I needed in the neighborhood of 40 to 50 thousand images.  Yikes!  I don't have enough family and friends to hunt for 40 or 50 images, much less 50,000!  Secondly, each image would have to have the location of the four leaf clover within it added to a data file.  This is seemingly impossible.

This is where I decided to try OpenCV's positive image creation.   If you have a single positive image, it will create "test" positive images from it.  This method is great IF you only want to recognize a SPECIFIC clover: unless the next four leaf clover looked very close to the single positive image, it would not be recognized.  Consider the following images of a four leaf clover.

If the first image on the left were used to train the cascade, it is very possible the other images would not be recognized, at least not without also recognizing a bunch of unwanted clovers (you know, the unwanted 3 leaf type).  It is even worse when there are a LOT of 3 leaf clovers all mixed in together. In the picture below, only one of the marked areas is a four leaf clover.

This may seem trivial to the human eye, but not trivial to the program.  We only want it to pick one out of all the clutter, the correct one…

So how does one go about this?  My solution was to gather 50 to 100 four leaf clover images, crop each image down to contain only the four leaf clover, and resize them to 100×100.  From this set of positives, use OpenCV to create 2,000 samples from each image.  This gives a total of 100,000 to 200,000 positive images!  There are other ways to achieve this, but this was the most straightforward approach I could think of with the tools at my disposal.

A common method for working with video is to convert the stream to grayscale, perform all the object recognition in grayscale, and map the findings back to the color image.  It is much quicker to work with a single channel of 8-bit values than with three channels.  Grayscale is used throughout this training process for optimization purposes.

The Preparation

The MyWorkspace folder (where OpenCV was installed) needs to have the following folder layout.

The first step is to obtain a bunch (a large bunch) of negative images.  I didn't want just any images; I wanted images close to what I was training the cascade on.  Therefore I looked for fields of grass, forests, landscapes, etc., until I had roughly 40,000 images stockpiled.  A good place to start your hunt is image-net.org, where you can search for images of a specific type (trees, grass, etc.).  There is a bit of a trick: they provide you the URL to each image, so you must write code to pull them in from the internet.  Python is a great tool for such jobs, and as such, I have added a base framework code to do just that.

import cv2
import urllib.request
import os

pic_num = 1  # running counter used to name the saved images

# The link is the wnid number from image-net.org
def get_url_images(link, save_dir, w, h):
    global pic_num
    url_link = "http://image-net.org/api/text/imagenet.synset.geturls?wnid=" + link

    image_links = urllib.request.urlopen(url_link).read().decode()

    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    for i in image_links.split('\n'):
        try:
            urllib.request.urlretrieve(i, save_dir + "/" + str(pic_num) + ".jpg")
            # Re-read as grayscale (use cv2.IMREAD_COLOR to keep color) and resize
            img = cv2.imread(save_dir + "/" + str(pic_num) + ".jpg", cv2.IMREAD_GRAYSCALE)
            new_image = cv2.resize(img, (w, h))
            cv2.imwrite(save_dir + "/" + str(pic_num) + ".jpg", new_image)
            pic_num += 1

        except Exception as e:
            print(str(e))

def set_image_size(path, w, h):
    for img in os.listdir(path):
        try:
            str_path = path + '/' + str(img)
            # Reads as grayscale (use cv2.IMREAD_COLOR to keep color) and resizes
            imgGray = cv2.imread(str_path, cv2.IMREAD_GRAYSCALE)
            new_image = cv2.resize(imgGray, (w, h))
            cv2.imwrite(str_path, new_image)

        except Exception as e:
            print(str(e))

# Collect image sets from image-net.org
# Add links to the array to gather more images
myLinks = ["n11752937", "n12102133"]
for link in myLinks:
    get_url_images(link, "myDir", 200, 200)

# Resize the images in a directory called newSize
#set_image_size("newSize", 100, 100)

Once you have an adequate number of negative images, place them in the MyWorkspace/neg folder.  The next component of the classifier is the background description file, which contains a list of images to be used as backgrounds for randomly distorted versions of the object.  This file will be called bg.txt and will need to be created (Python is a good candidate for this chore as well).  The file must contain a list of all the files in the neg folder.  The first few lines of my file look like this:
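Since the post leaves this chore to Python, here is a minimal sketch under the folder layout above (a neg folder inside MyWorkspace).  It writes Unix line endings directly, which also sidesteps the line-ending issue described next:

```python
import os

def write_bg(neg_dir="neg", out_file="bg.txt"):
    # One relative image path per line; newline="\n" forces Unix line
    # endings even if this script is run on Windows.
    with open(out_file, "w", newline="\n") as f:
        for name in sorted(os.listdir(neg_dir)):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                f.write(neg_dir + "/" + name + "\n")

if os.path.isdir("neg"):
    write_bg()
```
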

It is important to note that if you create this file in Windows and then transfer it to a Linux machine, you will most likely get errors when you execute the classifier.  The reason is the difference between Windows' and Linux's end-of-line delimiters.  I suggest using an application called dos2unix; you can install it with apt-get.  The command to fix the bg.txt file is:

dos2unix bg.txt

This little command will save you a lot of troubleshooting, as the classifier simply fails without telling you why when bg.txt is not in Linux style.

Setup OpenCV on Ubuntu Server

This post assumes you have Ubuntu Server set up as walked through in the previous post.

Install OpenCV

Login to your Ubuntu server and run

sudo apt-get update

sudo apt-get upgrade

This will update your server to the latest state.

Next, create a directory called MyWorkspace.  I created mine in my /home/rauastin directory.  Change directory into the MyWorkspace directory and run the following commands. If you do not have git installed, you can install by typing in “sudo apt-get install git” and pressing return.

git clone https://github.com/Itseez/opencv.git

Next, install the necessary libraries and compilers.  Run the following commands:

sudo apt-get install build-essential

sudo apt-get install cmake libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev

sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev

If you are using Ubuntu 17.x you will need to add an older library reference.  Use

echo "deb http://us.archive.ubuntu.com/ubuntu/ yakkety universe" | sudo tee -a /etc/apt/sources.list

and then run sudo apt-get update, then rerun the failed statement from above.  Next:

sudo apt-get install libopencv-dev

That's it… you have successfully installed OpenCV on your Ubuntu Server!  If you want to verify the installation, type opencv_createsamples at the command prompt.  You should see a list of options, as shown below:

 

 

Setup A Virtual Linux Box (Ubuntu Server) With Oracle VM VirtualBox Part 2

In this post, the Ubuntu OS will be installed on the VM setup in part 1.   Your optical drives should look like the following:

Now, click on the server in the left panel so it is highlighted, then press start on the top toolbar.  The new screen may ask you which drive to boot from.  If it does, choose the one with the iso file in it.

The setup screen will start by asking for your language; choose the language of your choice.  I clicked English (for obvious reasons).  Note:  your screen may start up with a list of options (like create disk, install Ubuntu Server, etc.); if it does, simply select Install Ubuntu Server.

Next, select your location.  I then selected NO for configuring the keyboard layout and accepted the next two defaults concerning the keyboard.  The installation will then auto-detect hardware:

Next, it requests the hostname, I am calling this server UbuntuServer:

Next, enter the name of a user, I have entered the first letter of my first name and my last name.  Feel free to use whatever you desire, but do remember it for future use.

I entered the same value for my username.

The next screen is for your password.  Use an appropriately strong password, then continue.  I chose not to encrypt my home directory: this server is needed for speed, not security, and I don't wish it to be slowed down in any way.  Choose no for Encrypt your home directory.

Configure the clock next; I took the defaults, as it is not important for our task at hand.   Next, the remaining hardware is detected and you should come to a screen asking about partitioning disks.  I selected the default (Guided - use entire disk and set up LVM).  On the next screen I accepted the default (SCSI3).

On the next screen, select <yes> to write the changes to disk and configure LVM.

The next screen is the disk partition screen.  I chose the full size of the disk, and when prompted, I selected yes to write the changes to disk. The install continues and may take a few minutes.   There may be a request for information about a proxy; I do not have one set up, so I left it blank and continued.

The process will continue until you are asked whether you wish to have automatic updates.  I selected no automatic updates.  On the software selection screen, I selected standard system utilities and OpenSSH server as shown below:

After several minutes of installing, you will be asked about installing GRUB.  I selected yes, since this is a VM and I only have one drive set up.  Lastly, the screen below will appear:

Select continue, the system will finish installation, reboot, and greet you with a login screen.  You have successfully installed your Oracle VirtualBox Ubuntu Server!  Next, we install OpenCV on the server.

 

Setup A Virtual Linux Box (Ubuntu Server) With Oracle VM VirtualBox Part 1

Install Oracle’s VM VirtualBox

This is a walk through of setting up a Linux box (Ubuntu) on a Windows desktop with Oracle VM VirtualBox.  For this setup, a Windows 7 64-bit OS is used.

Start by going to Oracle’s VirtualBox website and downloading the appropriate install for your OS.

Oracle VirtualBox
Oracle’s VirtualBox Website

Once you have downloaded the file, install the application, accepting all the defaults.  A new icon should show up in your Windows program list that looks like this:

Open the application; you should be in the Oracle VM VirtualBox Manager app.  The top toolbar looks like this:

Setup a New VM

From the toolbar click on New.  The following screen will appear:

Name the new machine an appropriate name.  I have named mine UbuntuServer.  Next, choose Linux from the type drop-down.

I have chosen Ubuntu (32-bit) for my install.  Some of the applications I plan to use may work better under 32-bit.  This is an optional call, but this setup will assume 32-bit is chosen.

Next, click “Expert Mode” and arrive at this screen.  The configuration may look different depending on your computer’s setup.

I chose 8 GB for the memory size and opted to create a virtual hard disk now.  Click Create, and the next window is:

I chose VDI (VirtualBox Disk Image) with fixed size.   Click Create and wait for the new VM to be created.  Once it is finished, you should see a new server in the list on the left-hand side of the Oracle VM VirtualBox Manager window, as shown below:

I have two shown in my window, the first is from a previous Ubuntu setup.

Configure the New VM Memory and CPU

It is time now to configure your new VM.  If you have more than one core, you can take advantage of them by clicking on the System box:

Arriving at this dialog box:

From this screen, select the Enable I/O APIC option.  This is OPTIONAL and will not affect the setup if you choose not to use it; my setup benefits from it, so I chose to use it.  The VirtualBox documentation shows the following information:

Click the Processor tab and view the processor setup:

I chose 5 processors (cores) and left the execution cap at 100%.  Your configuration will most likely be different.  Be sure not to give all of your processors to the virtual machine.  If your system has more than one core but the slider only allows one, check your motherboard’s documentation; hyper-threading support may be turned off or unavailable, in which case one processor is all you will be able to configure.
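Since Python is used later in this series, a quick sketch of the rule of thumb above: query how many cores the host reports and keep at least one back for the host machine.  This is only an illustration of the guideline, not a VirtualBox API.

```python
import os

# Total logical cores visible to the host OS (hyper-threads count too)
host_cores = os.cpu_count() or 1

# Rule of thumb: never assign every core to the VM; leave at least one
# for the host, but always give the VM at least one
vm_cores = max(1, host_cores - 1)
print(f"Host reports {host_cores} cores; assign at most {vm_cores} to the VM")
```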

Configure the Storage and Optical Disk 

Now, move to the storage and optical disk setup.  Click the Storage tab on the left-hand side to see the following screen:

Under the optical drive (where it says Empty), click and point to your Ubuntu ISO file.  If you have not already downloaded the ISO, do that now and save it to a location on your computer.  Then point the optical drive to that file: click the disk icon beside Optical Drive under Attributes, select “Choose Virtual Optical Disk File” from the drop-down list, then locate and select your Ubuntu installation ISO.

Second, I suggest adding another optical drive by clicking the plus sign beside Controller: IDE (you must first click Controller: IDE; the icon to add an optical disk will then appear).  Next, point the newly created drive to the file VBoxGuestAdditions.iso.  It ships with VirtualBox and should be located in the Program Files\Oracle\VirtualBox directory.  In the end, your storage screen should look something like this:

Network Configuration

The last step in part 1 is to set up the network so you can connect to the VM.  I have chosen to use the wireless network on the host machine (the computer you are setting up the VM on).  Click the Network tab to see the following screen:

In the Attached to selection, pick Bridged Adapter; the Name selection should auto-fill with adapters available on your system.  Choose the appropriate wireless connection.  Click Advanced and change Promiscuous Mode to Allow All.  The screen should look similar to this:

At this point, the VM is configured.  In part 2 the OS will be installed.

The Clover Project

The Clover Project is an integration of public/free software to develop an application using machine learning (OpenCV), Python, C++, and Xamarin.  The project also utilizes a Raspberry Pi 3 B to prototype the application.

The goal of the project is to allow a user to video a patch of clover and have the AI detect all the four-leaf clovers within the specified patch.  I must admit I come into this endeavor as a programmer and scientist: before this project, I had never been exposed to OpenCV, Xamarin, or video capture and object recognition.  So for those of you feeling this is impossible, take heart.  If you are willing to spend some time learning, laughing at yourself, and learning from failures, this series will get you on the way to having novice-level fun with video capture and machine learning.  My hope for the application is shown in the prototype image below: a video (or image) is processed by the application to recognize and identify a four-leaf clover.

Four Leaf Clover Recognition

The internet is full of resources for this type of project.  However, few sites pull them all together to form a complete “life cycle,” showing the entire process from an idea to a product.  My hope with this series is to help the novice understand the steps, research, and integration of technology needed to take an app from idea to reality.

In starting this project I am a one-person shop, but I have utilized many of the resources on the net.  In the upcoming posts, I will make a point of crediting each resource I used; this project would not have moved beyond concept without these awesome resources.

The Breakout


In this section we break out a list of semi-sequential activities to accomplish our goal.  Each section will have its own detailed post with supporting examples and discussion.

Image Collection

Images for training are an important part of the process.  A good sample size is necessary for any training application (my plan is to use 40,000 images of four-leaf clovers).  Thankfully, there are methods to perturb a small set of images into a much larger one.  A second set of “untrained” positive images will be used to evaluate the training.  This will be discussed in detail in the post covering image collection and training preparation.
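To make the idea of perturbing a small image set concrete, here is a minimal pure-Python sketch.  Images are represented as 2-D lists of pixel values for illustration; real training code would use OpenCV or NumPy arrays and add rotations, brightness shifts, and so on, but the same slicing tricks apply.

```python
def perturb(image):
    """Generate simple variants of one image: a horizontal flip, a vertical
    flip, and a 180-degree rotation.  Each variant is a new training sample
    derived from the original."""
    h_flip = [row[::-1] for row in image]        # mirror left-right
    v_flip = image[::-1]                          # mirror top-bottom
    rot180 = [row[::-1] for row in image[::-1]]   # flip both axes
    return [h_flip, v_flip, rot180]

# One tiny 2x2 "image" becomes three extra samples
sample = [[1, 2],
          [3, 4]]
variants = perturb(sample)
```

With three cheap transforms, every collected image yields four training samples, which is how a modest collection can grow toward a large training set.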

Cascade Training with OpenCV

First, this step covers setting up OpenCV on a virtual Linux machine.  Second, I will show how to train (build a cascade XML file) with OpenCV for use with image detection.  This section will be covered over several blog posts and include how to set up OpenCV on a Raspberry Pi for use with Python (a great way to test a cascade file on the cheap).  At the end of this process, a working cascade file for four-leaf clovers will have been trained and created.
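Once training produces a cascade XML file, a short Python script can exercise it on the Raspberry Pi.  The sketch below uses OpenCV's standard CascadeClassifier API; the file names are hypothetical placeholders, and cv2 is imported inside the function so the rest of the script can be read even before OpenCV is installed.

```python
def detect_clovers(image_path, cascade_path="clover_cascade.xml"):
    """Run a trained cascade over one image and return a list of
    (x, y, w, h) rectangles for each detection.

    File names here are placeholders; substitute your own cascade
    file and test images.
    """
    import cv2  # imported here so the module loads without OpenCV installed

    cascade = cv2.CascadeClassifier(cascade_path)
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # cascades run on grayscale
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(hits)
```

Running this over the un-prepped training images is exactly the kind of sanity check the testing post walks through: the cascade should find the objects it was trained on before you worry about new images.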

OpenCV in Python, C++, and .NET

Several rudimentary programs will be written and briefly discussed using OpenCV with Python, C++, .NET, and Xamarin.  Depending on an application’s needs, each offers unique advantages and disadvantages.

The App for Android

The final outcome is an app for Android (and possibly iOS) systems.  The desire is to provide the reader a loose framework from one person’s journey; from this framework, you can launch your own dream app and encourage others.  All the source code, links, etc. will be shared.  The only item not shared will be the cascade file for the clover detection: that is something each reader needs the joy and pain of creating for themselves.  By the time this training is complete, it will have taken over one thousand hours of computing time with 5 cores, 12 GB of RAM, and an SSD.