TensorFlow Chicken Recognition in Google Colaboratory

I am currently working with a healthcare customer to apply machine learning for the benefit of subscribers' physical and financial health. The goal is to run an increasing number of models and to shorten the time to discovery and health intervention. This was the perfect push for me to get hands-on with machine learning again so that I could better understand the work our experts are delivering. I thought that sharing the steps I took to learn might be helpful to others.


Acknowledgements 


As I am very new to both Python and machine learning, I could not have been successful without the assistance of these people and resources:

  • Paige Bailey, Google AI Developer Advocate (@dynamicwebpaige). Paige is very talented and equally generous with her talent and time.
  • Lin JungHsuan, whose Medium article my chicken recognition project is based on.
  • Google Colaboratory. A working, hosted, free, Python/TensorFlow notebook environment with CPU, GPU, and TPU runtime options.

Google Colaboratory


Most examples of machine learning are written in Python. Python can sometimes be a bear to get running. My daily desktop is a Mac, which has Python 2 installed as part of the OS, and that copy cannot be removed. Python 3 is the version of choice for machine learning compatibility. Getting both versions to behave on a Mac is doable but proved very difficult for me. Google Colaboratory let me just get to work with Python and learn, which is exactly what Platform as a Service (PaaS) is meant for.

I read and played with every single Colaboratory tutorial in the Getting Started section on this page before beginning my own project. I'm glad I did. The section on using Google Drive for storage proved very useful in my project.


An Excellent Image Recognition Sample Project

Lin JungHsuan's article laid out all the steps for me. I'm not sure I could have done this without the guidance provided by that article. My project is a copy of Lin's project with some small changes.


Chickens


I needed something to recognize. In May 2018, we started raising six chicks who are now fully grown, quite social and laying eggs every day. I have plenty of photos of them to use as test images plus recognizing chickens just sounded fun. Fun definitely helps with learning.


Input

For TensorFlow image recognition to work, you need to feed the training program multiple folders of images to learn from. Each folder holds images of one class, and the folder name becomes the label for that class. I used Google Image Search to find and download images of chickens and huskies to create folders of training data. I used this Chrome Extension to make bulk image downloading easier.
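As a sanity check (my own addition, not part of the original article), a few lines of Python can confirm that the class folders and image counts look right before training. The path below is the Google Drive folder I point retrain.py at later in this post:

    # List each class folder and how many images it holds.
    # The folder names (e.g. chicken, husky) become the class labels.
    import os

    image_dir = '/content/gdrive/My Drive/Personal/TensorFlow/'
    for class_name in sorted(os.listdir(image_dir)):
        class_path = os.path.join(image_dir, class_name)
        if os.path.isdir(class_path):
            print(class_name, '-', len(os.listdir(class_path)), 'images')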

Output

Once the model has been trained, a new image can be run through it to get a classification and a confidence score. For example, the above photo of me with Zaja on my shoulder was classified as a photo of a chicken with 92.6% confidence.


Description of My Colaboratory Notebook

You should be able to access my notebook on Google Drive here. I will step through the notebook section by section.

TensorFlow and TPU Check


It turns out that a few of the TensorFlow (TF) 1.12 functions used in Lin JungHsuan's example have been deprecated and do not work in TF 2.0. I wanted to make sure I was using TF 1.12.

Also, Colaboratory generously lets you run code on a CPU, GPU, or TPU runtime. I chose the TPU runtime and verified in code that the TPU was available.
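For reference, a minimal sketch of the kind of check I mean, assuming the COLAB_TPU_ADDR environment variable that Colab's TPU runtime exposes:

    # Confirm the TensorFlow version and that a TPU runtime is attached.
    import os
    import tensorflow as tf

    print('TensorFlow version:', tf.__version__)  # expecting 1.12.x here, not 2.x

    tpu_address = os.environ.get('COLAB_TPU_ADDR')
    if tpu_address:
        print('TPU runtime found at grpc://' + tpu_address)
    else:
        print('No TPU detected - check Runtime > Change runtime type in Colab')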


Google Drive for Persistent Image Store



Google Colaboratory is free and awesome, but not persistent. Google will reset your runtime when you are not using it, wiping any files you have uploaded into it. Thankfully, Google Drive is supported as persistent storage. When your runtime gets reset, just reconnect your Google Drive. I am using Google Drive to hold my training images as well as my Python scripts.
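The mount itself is a one-liner using Colab's google.colab.drive helper; the mount point below matches the /content/gdrive paths used in the commands later in this post:

    # Mount Google Drive so training images and scripts survive runtime resets.
    from google.colab import drive
    drive.mount('/content/gdrive')

Colab will prompt you to authorize access the first time you run this in a fresh runtime.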


Get the retrain.py script


Unfortunately, the retrain.py referenced in Lin JungHsuan's Medium article caused issues for me. I am sure the issues are a result of my inexperience. I did find another version of retrain.py that did work though. The full wget command is in my notebook or you can copy it from here:

    !wget https://raw.githubusercontent.com/tensorflow/tensorflow/c565660e008cf666c582668cb0d0937ca86e71fb/tensorflow/examples/image_retraining/retrain.py

Use the retrain.py code to classify the images in your image folders


Point retrain.py at the parent directory that houses your multiple image folders. I have learned that most of the command line options are optional except --image_dir, but I used them according to the Medium article. Here is my full retrain.py syntax:

    !python ./retrain.py --bottleneck_dir=./bottlenecks --how_many_training_steps 500 --model_dir=./inception --output_graph=./retrained_graph.pb --output_labels=./retrained_labels.txt --image_dir /content/gdrive/My\ Drive/Personal/TensorFlow/

This Python script creates two files that the label_image.py script will use later to classify a new image: retrained_labels.txt and retrained_graph.pb. The retrained_labels.txt file is a very simple text file containing one line with the name of each of your image folders. According to the code, retrained_graph.pb is a "trained graph and labels with the weights stored as constants". I need to do a bit more reading to understand that completely.
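Given my two image folders, retrained_labels.txt ends up as just two lines, something like the sketch below (the exact order depends on how retrain.py walks the directories):

    chicken
    husky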

retrain.py runs for a little while, classifying the images and creating the graph. The major phases are:
  • The Inception-V3 model is downloaded.
  • "Bottlenecks" are created for each image.
  • The image recognition model is trained for the specified number of steps, with accuracy reported periodically.

Display the image to be classified



I thought it would be really useful to open and display the new image file before asking the model to classify it. This was a good way for me to verify my file path.
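A sketch of how I display the test image in the notebook, using IPython's display helpers and the same Drive path passed to label_image.py below:

    # Render the test image inline - if this fails, the file path is wrong.
    from IPython.display import Image, display
    display(Image(filename='/content/gdrive/My Drive/Personal/TensorFlow/d_chicken_2.jpg'))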

Classify the new image against the model just created

The moment we have been waiting for. Will the model classify the new image as containing a chicken? How confident will the model be that this is an image containing a chicken? Thankfully, the model was 92.6% confident that the new image contained a chicken. Yay! 


There is no wget link for the label_image.py script. You can copy the short list of commands from the Medium article or from my notebook. The full syntax that I used when running label_image.py is:

    !python /content/label_image.py /content/gdrive/My\ Drive/Personal/TensorFlow/d_chicken_2.jpg
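The label_image.py I use follows the common retrain-and-label pattern rather than anything exotic. The sketch below shows that pattern under a few assumptions: the retrained_graph.pb and retrained_labels.txt files produced above, and the default 'final_result' and 'DecodeJpeg/contents' tensor names in retrain.py's Inception-V3 graph. It is not a copy of the script from the Medium article.

    # label_image.py (sketch): classify one image with the retrained graph, using TF 1.x APIs.
    import sys
    import tensorflow as tf

    image_path = sys.argv[1]                          # e.g. .../d_chicken_2.jpg
    image_data = tf.gfile.FastGFile(image_path, 'rb').read()

    # One label per line, in the same order as the graph's output scores.
    labels = [line.rstrip() for line in tf.gfile.GFile('retrained_labels.txt')]

    # Load the retrained graph produced by retrain.py.
    with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})[0]
        # Print every class, highest confidence first.
        for node_id in predictions.argsort()[::-1]:
            print('%s (score = %.5f)' % (labels[node_id], predictions[node_id]))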

Thank You for Reading


Bravo to you for making it this far. Machine Learning is heavy stuff full of PhD-level concepts that I do not completely understand. It is nice to know that there are good examples and free, hosted environments out there to make experimentation easier. Oh, and the reason there are TensorFlow socks in the title image is that Paige Bailey was kind enough to provide me a discount on some stylin' TF socks for challenging myself and sharing the results. Thanks Paige.

As always, I truly welcome your feedback.
