Twin Cities Code Camp

Overall, it was a great experience. TWCC has done a great job over its 20+ years, and this year was no different. I wasn’t able to stay the entire time, but from everything I saw it went well. The facilities were perfect for an event this size, everyone appeared to be getting along, and there were multiple groups of people having conversations about the topics that had just been presented.

The only downside was that I had to leave home at 5am to make the start, and I hit some ice on the way up. Can’t fight Mother Nature!

GitHub Repo: https://github.com/ehennis/ReinforcementLearning

Powerpoint: TWCC.pptx

DDQN TensorFlow v2 Upgrade

In my previous blog post I showed how I upgraded my Black-Scholes/Monte Carlo notebook to use TensorFlow v2. Today, I am going to show how EXTREMELY easy it was to convert my DDQN notebook to the pre-release of TensorFlow v2.

The notebook is located here: DDQN-TFv2.ipynb

Since I was mostly using Keras already, there were only a few library changes, and the code ran pretty much as is.
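The flavor of the change was swapping the standalone keras imports for the tf.keras versions. This is a minimal sketch of that kind of swap, not the notebook’s exact lines:

# Before (TensorFlow v1 with standalone Keras):
# from keras.models import Sequential
# from keras.layers import Dense
# from keras.optimizers import Adam

# After (TensorFlow v2 pre-release, Keras bundled as tf.keras):
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

print(tf.__version__)  # should report a 2.x version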

Double Deep Q Network

The fourth in my series on RL that I created in graduate school at Georgia Tech covers the Double Deep Q Network (DDQN) algorithm. I will use the algorithm to “solve” the OpenAI CartPole environment.

If you missed any of the previous blogs, here are the first, second, and third.

Please go to my GitHub repo, get the 06-DDQN Jupyter notebook, and follow along. It will make this a lot easier and will fill in any of the pieces that I leave out of this write-up. Also, I can’t put code into these posts without some plugins that are not allowed on my current tier.

In 2016, Google DeepMind (pdf) found another optimization for the algorithm. They took the idea from the double Q-learner and added a second neural network. In the double Q-learner, the two Q tables were picked at random for each update. In this algorithm, they are used as two separate entities: you use the “target” network to predict the value of the next step during experience replay, and then update your “source” network on those targets.
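As a rough illustration of that split, here is a hedged sketch of the DDQN target calculation during experience replay. The names (source_model, target_model, GAMMA) and the batch layout are my own illustration rather than the notebook’s exact code:

import numpy as np

GAMMA = 0.99  # discount factor (illustrative value)

def ddqn_targets(source_model, target_model, states, actions, rewards, next_states, dones):
    """Compute DDQN training targets for a sampled batch.

    The source (online) network chooses the best next action,
    while the target network evaluates that action's value.
    """
    q_values = source_model.predict(states)            # current Q estimates to update
    next_q_source = source_model.predict(next_states)  # used only to pick the next action
    next_q_target = target_model.predict(next_states)  # used to value that action

    best_actions = np.argmax(next_q_source, axis=1)
    batch_idx = np.arange(len(states))

    targets = rewards + GAMMA * next_q_target[batch_idx, best_actions] * (1.0 - dones)
    q_values[batch_idx, actions] = targets              # only the taken action is changed
    return q_values  # fit the source network on (states, q_values)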

Please, download the notebook and give it a try. I even challenge you at the end to beat my solution in fewer iterations.

Open in Google Colab: 06-DDQN.ipynb

Deep Q Network

The third in my series on RL that I created in graduate school at Georgia Tech covers the Deep Q Network (DQN) algorithm. I will use the algorithm to “solve” the OpenAI CartPole environment.

If you missed any of the previous blogs, here are the first and second.

Please go to my GitHub repo, get the 05-DQN Jupyter notebook, and follow along. It will make this a lot easier and will fill in any of the pieces that I leave out of this write-up. Also, I can’t put code into these posts without some plugins that are not allowed on my current tier.

I skipped over my neural network notebook, as it is mostly background knowledge without much code. If you are going through the series, do go back and look through it.

In 2015, Google DeepMind (link) published a paper in the journal Nature that combined a deep neural network with RL. They understood that using a neural network for function approximation would open the algorithm up to much larger environments. Using ONLY the raw pixels and the score as inputs, they were able to master quite a few Atari games.

Google DeepMind used convolutional layers to turn the raw pixels into network inputs, which I don’t do here. At some point, I might try to recreate some of their results.

There are a few key differences between the Q-learner and DQN. The first is that a Q-learner processes the current observation at every step. DQN instead uses what is called experience replay: the algorithm stores up the observations and then, at set times, grabs a batch of them to process. It fits that batch on the neural network and uses the built-in backpropagation to train the network.
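To make that concrete, here is a hedged sketch of what an experience replay buffer typically looks like. The capacity, batch size, and method names are my own illustration, not the notebook’s exact code:

import random
from collections import deque

import numpy as np

class ReplayBuffer:
    """Stores (state, action, reward, next_state, done) transitions
    and hands back random batches for training."""

    def __init__(self, capacity=10_000):
        self.memory = deque(maxlen=capacity)  # oldest transitions fall off the end

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        batch = random.sample(self.memory, batch_size)
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.memory)

# During training: act in the environment, call remember() on every step, and
# once the buffer holds at least a batch, sample a batch and fit the network on it.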

Take a look at the notebook, where I go through the algorithm against the same CartPole environment.

Please, download the notebook and give it a try. I even challenge you at the end to beat my solution in fewer iterations.

Open in Google Colab: 05-DQN.ipynb