Last week I gave my RL talk to the attendees of Prairie.Code. It did not go well.
The talk really needs 90 minutes, and the reviews showed that attendees agreed. The material was too dense; I needed to slow down and cover it in more detail.
I don’t think I will keep giving the talk, since I have hit most of the local venues, but if I do decide to give it again I need to expand on the base Q-learner and make sure that part is understood.
My plan now is to create a talk about deploying models to the cloud. I want to get into the AI Engine and TensorFlow.js and how they fit into the ecosystem.
I would also like to dig into MineRL and maybe use that as the engine to speak more on RL.
Anyway, I went in feeling rushed, and I should have followed my gut and fixed it. Hopefully it doesn’t keep me from speaking again.
After completing my first lab, I decided to check out the language lab. It was much the same: mostly calling APIs and viewing the results. I need to dig further into the libraries and get more technical.
Here is my public profile: Evan H.
As stated in my previous post, I was given 1000 credits (~$1000) for QwikLabs. Today, I finished my first “quest”, titled “Intro to ML: Image Processing”.
I will restate this: I LOVE how QwikLabs is set up. They give you an entire temporary Google Cloud account, so you don’t have to mess with your own account and risk unwanted billing or other changes. Once the lab is done, the account is deleted and you go on your way.
This lab covered a few different aspects of Google Cloud. First, the console: if you are familiar with working in Linux, this is a simple transition. Second, we worked with the storage system, “buckets”, messing around with some pretty simple permissions as well as uploading files for processing.
The part that I liked most was using the AI Engine to host a trained model. It was a super simple model, but since I failed at this last time, it was cool to see it work as expected. Plus, they showed how to host the model and access it externally. This will definitely be something I do once they start supporting TFv2.
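The host-and-serve workflow from the lab can be sketched with the standard `gcloud ai-platform` commands. This is a minimal sketch, not the lab's exact steps; the bucket, model, and file names are placeholders I made up, and the runtime version reflects the pre-TFv2 support mentioned above.

```shell
# Copy an exported TensorFlow SavedModel to a Cloud Storage bucket
# (bucket and paths are placeholders, not from the lab).
gsutil cp -r ./saved_model gs://my-demo-bucket/models/demo/

# Create a model resource, then a version pointing at the SavedModel.
gcloud ai-platform models create demo_model --regions=us-central1
gcloud ai-platform versions create v1 \
  --model=demo_model \
  --origin=gs://my-demo-bucket/models/demo/ \
  --runtime-version=1.15 \
  --framework=tensorflow

# Get a prediction; instances.json holds one JSON input per line.
# Externally, the same endpoint is reachable over REST at
# https://ml.googleapis.com/v1/projects/PROJECT/models/demo_model/versions/v1:predict
gcloud ai-platform predict --model=demo_model --version=v1 \
  --json-instances=instances.json
```

The nice part, as the lab showed, is that the deployed version is just an authenticated HTTPS endpoint, so any client that can POST JSON can call it.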
The last few sections used the API to process images. The first was simple recognition. This stood out because you could change the request JSON and have it return internet articles that contained the same image. Second, we processed an image to detect people’s faces and their likely emotions, as well as landmarks. Finally, we processed a sign with some French text on it. We were able to translate it to English, as well as add some more processing that would give us information (links, etc.) about what was printed.
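All of those image steps go through the same Cloud Vision `images:annotate` endpoint; what changes is the list of features in the request JSON. Here is a minimal sketch that builds one request body covering the lab's steps (the image URI is a placeholder, and the helper function name is my own):

```python
import json

def build_annotate_request(image_uri):
    """Build the JSON body for POST https://vision.googleapis.com/v1/images:annotate."""
    features = [
        {"type": "LABEL_DETECTION"},     # simple recognition
        {"type": "WEB_DETECTION"},       # pages/articles containing the same image
        {"type": "FACE_DETECTION"},      # faces and likely emotions
        {"type": "LANDMARK_DETECTION"},  # famous landmarks
        {"type": "TEXT_DETECTION"},      # OCR, e.g. the French sign
    ]
    return {
        "requests": [
            {"image": {"source": {"imageUri": image_uri}}, "features": features}
        ]
    }

# Placeholder bucket/object; in the lab the image came from a bucket you uploaded to.
body = build_annotate_request("gs://my-demo-bucket/sign.jpg")
print(json.dumps(body, indent=2))
```

The OCR output from `TEXT_DETECTION` is what then gets handed to the Translation API to turn the French into English.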
Overall, a VERY COOL first lab. I will get started on my next round of cloud training soon.
One of the many perks of being a Google Developer Expert is that we get credits for many of Google’s products. The most recent was 1,000 credits for QwikLabs, a training site that Google bought recently. The main feature that stuck out to me is that they create temporary Google Cloud accounts, so you are working in a real environment without having to mess with your existing account or getting charged.
We also get Google Cloud credits, and I will say more about that in the future, once the AI Engine supports TFv2 models and I can get my NCAA basketball predictor running.