Computer Vision Paris Talk


This is a condensed version of my talk about adding some image detection to the Raspberry Pi. Since the talk was only 20 minutes, I had to skim through quite a bit, but hopefully it worked.

I guess I am now an international speaker!


Original Blog Series: Here
Code: Main, MobileNetV2Base, and PiCameraManager
Presentation: ImageDetection/ComputerVisionParis.pptx

TensorFlow Lite on Android

Today, I am going to walk through my process of moving my Keras-trained model from my desktop onto my Android phone.


There are certain processes that I want to run against my model while I am away from my computer that I can't do from the web. These range from data collection to larger data processing tasks. For example, I don't want my website to scrape all the spreads every time someone visits it.

How I Got Here

After I created my deep learning model to predict NCAA basketball scores in Google Colab, I decided I needed to deploy it in a few more places. My first step was using TensorFlow.js to deploy it on the web. After that, I pulled the model down to my desktop and wrapped it in a larger .NET application. Today, I decided that I wanted to create an Android application that would give me my model on my local device instead of calling out to the web. It will also allow me to run a more robust set of commands that I couldn't run on the web.

TensorFlow Lite

Part of being a GDE allows me to get access to certain groups and people at Google. One of the groups that I am part of is the ML on Mobile group that works on TensorFlow Lite (link). We have had a few meetings and that was the final push I needed to carve out some time and do this project.

This library is fantastic. Since all of my work was done in TF v2 and Keras, it would be a simple conversion followed by learning the Java API calls.


First, I had to convert my Keras model to a TF Lite model. This was as easy as the following commands:

# Convert the model
converter = tf.lite.TFLiteConverter.from_keras_model(restored_model)
tflite_model = converter.convert()

# Save the TF Lite model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
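Before moving the file to the device, it can be worth sanity-checking the converted model with the Python TF Lite interpreter. Here is a minimal sketch (the `check_tflite` helper is mine, not from the original post) that runs one dummy inference and reports the output shape:

```python
import numpy as np
import tensorflow as tf

def check_tflite(model_bytes_or_path):
    """Run one dummy inference through a TF Lite model and return the output shape."""
    if isinstance(model_bytes_or_path, (bytes, bytearray)):
        interpreter = tf.lite.Interpreter(model_content=bytes(model_bytes_or_path))
    else:
        interpreter = tf.lite.Interpreter(model_path=model_bytes_or_path)
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Feed zeros matching the model's declared input shape and dtype.
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"]).shape
```

Calling `check_tflite('model.tflite')` tells you the input/output contract the Android side will see; for a model like mine that takes 6 decimals and returns one number, that should be a single-element tensor.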

Second, I had to add that model to my ‘Assets’ folder in Android Studio.
Third, within Android Studio, I had to add code to the Gradle file to ensure the model doesn’t get compressed and to add the TensorFlow Lite libraries.

android {
    aaptOptions {
        noCompress "tflite"
    }
}

dependencies {
    implementation 'org.tensorflow:tensorflow-lite:2.2.0'
}
Now, I needed to actually create the code that will interact with my model.

Android Calling Code

The key class here is the Interpreter class. This is the class that takes in your model and runs all of the predictions. In my case, I fought this like crazy. It started with trying to figure out how to turn the stream I got from the Assets folder into a File object (I had to write it to local storage, fyi). Then, I had to figure out what the input and output objects were going to be.

I have a simple model in that it takes in 6 decimals and outputs a single number. So, my input was a simple float[] and my output was float[][]. Here is the code:

File mdl = CreateFile(); // Creates the TF Lite model file if it doesn't exist
Interpreter intp = new Interpreter(mdl);
// Create a 6 element float array. NOTE: I needed to do some normalization.
float[] inputs = BuildInputArray(72.1f, 63.8f, -5.1f, 65.1f, 70.8f, -2.3f);
// Create the output array that receives the tensor
float[][] out = new float[1][1];
// Run the prediction
intp.run(inputs, out);
// Close the model
intp.close();
// Get the results
float results = out[0][0];


Well, those are my simple steps for getting a trained model onto a mobile device. In the future, I will be adding a bunch of features so that I can do all the work from my phone and not have to go to my desktop each day during the season to get my gambling picks.

Skynet w/ Raspberry Pi

This is the first in a multiple part series on adding some object detection to my Raspberry Pi.

Part 1: Introduction
Part 2: SD Card Setup
Part 3: Pi Install
Part 4: Software
Part 5: Raspberry Pi Camera
Part 6: Installing TensorFlow
Part 7: MobileNetV2
Part 8: Conclusion


I am going to recreate a really cool object detection project I found by Leigh Johnson (also a fellow ML GDE). Her project is called Portable Computer Vision: TensorFlow 2.0 on a Raspberry Pi. Now, she went WAY above and beyond what I am planning to do, but we will see how it all works out.


I am going to use a Raspberry Pi 3 and a camera that I had from a few years ago. I used them to create a baby monitor when my youngest was a baby. I wrote a Python script that would take an image every 60 seconds and post it to a network drive. That drive then fed an internal website I created.
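That old monitor script was simple enough that a rough sketch is easy to reconstruct. This is an approximation from memory, not the original code; the share path and filename pattern are assumptions:

```python
import time
from datetime import datetime

# Assumed mount point for the network share; adjust to your setup.
SHARE_PATH = "/mnt/babycam"

def snapshot_name(now):
    """Build a sortable, collision-free filename from a timestamp."""
    return now.strftime("capture_%Y%m%d_%H%M%S.jpg")

def run_monitor(interval_secs=60):
    # Import here so the module still loads on machines without the camera stack.
    from picamera import PiCamera
    camera = PiCamera()
    while True:
        camera.capture(f"{SHARE_PATH}/{snapshot_name(datetime.now())}")
        time.sleep(interval_secs)  # one frame per minute by default

if __name__ == "__main__":
    run_monitor()
```

The timestamped filenames sort chronologically, which made it trivial for the website to pick the newest image.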


I am going to skip the process of training my own model and use an existing model (MobileNetV2).

I will take the existing model, use "transfer learning", and retrain it with a custom classifier. I think I might even get frisky and have it do something special when it sees me!


As is standard practice with me, I will be using TensorFlow and Keras with the pretrained model. I will then convert the model to TensorFlow Lite.
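The transfer learning setup I have in mind looks roughly like this. It is a sketch, not the final code; the input size, head layers, and class count are placeholders I will pin down in the later posts:

```python
import tensorflow as tf

def build_classifier(num_classes, weights="imagenet"):
    """MobileNetV2 feature extractor with a small custom classifier head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights=weights)
    base.trainable = False  # freeze the pretrained features
    return tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# model = build_classifier(num_classes=2)
# model.compile(optimizer="adam",
#               loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
# model.fit(train_ds, epochs=5)   # trains only the new head
# tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```

Freezing the base means only the new Dense head trains, which is what makes this feasible on a small custom dataset and a Raspberry Pi budget.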

Next Steps

In my next post I will work through getting the Raspberry Pi set up.


TensorFlow Mentorship

As part of my effort to stay active in the community, I applied to be a mentor for Google Code-in for TensorFlow. The main idea is to get pre-college-age kids interested in open source projects. It just so happened that TensorFlow was approved as one of the participating projects (which makes sense, since it is backed by Google).

If you know anyone let me know and I can get them signed up.

I am excited to see what comes of this and am just hoping that I don't get the kid who is smarter than I am.