Q-Learning with CartPole

This first post in the RL series I created in graduate school at Georgia Tech covers the Q-learning algorithm. I use the algorithm to “solve” two different OpenAI Gym environments: the first is a modified FrozenLake and the second is CartPole.

Note that I am skipping over the first notebook, as it is just an introduction to MDPs and to policy iteration/value iteration (PI/VI).

Please go to my GitHub repo, get the 02-QLearning Jupyter Notebook, and follow along. It will make this a lot easier and will fill in any pieces I leave out of this write-up. Also, embedding nicely formatted code in these posts requires plugins that are not allowed on my current tier, so the snippets below are only rough sketches; the notebook has the real code.

Quick Introduction: Q-learning is an RL technique. It was “discovered” in 1989 by Chris Watkins [web page], building on earlier work on temporal-difference learning by Sutton and Barto. Unlike the planning methods from the first notebook (PI/VI), which need a full model of the MDP’s transition dynamics, Q-learning is model-free: it learns directly from interaction with the environment.

The next few segments of the notebook explain the hyperparameters, walk through the methodology, and finish with some pen-and-paper examples of the update rule.
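For reference, the heart of the algorithm is a one-line update: Q(s, a) ← Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') − Q(s, a)). Here is a minimal sketch in Python; the variable names and the 16×4 table size (matching the 4×4 FrozenLake grid) are my illustrative choices, not the notebook’s exact code:

```python
import numpy as np

# Minimal tabular Q-learning setup, assuming 4x4 FrozenLake sizes.
# alpha (learning rate) and gamma (discount) are the hyperparameters
# the notebook walks through.
n_states, n_actions = 16, 4
alpha, gamma = 0.1, 0.99
Q = np.zeros((n_states, n_actions))

def q_update(Q, s, a, r, s_next, done):
    # On terminal transitions the target is just the reward; otherwise
    # we bootstrap from the best action value in the next state.
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
```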

Next, I cover solving FrozenLake by creating a custom version of the environment with the slipping removed; one way to do this is sketched below. I do this so that it is easy for readers to see the optimal solution emerge: with the default slippery transitions, actions don’t always do what you expect, and many more iterations are needed to converge.
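A common recipe with classic gym, and my guess at what the notebook does rather than its exact code, is to re-register the built-in FrozenLakeEnv with is_slippery=False; the id string and max_episode_steps below are illustrative choices:

```python
import gym
from gym.envs.registration import register

# Re-register the built-in FrozenLake with slipping turned off.
register(
    id='FrozenLakeNotSlippery-v0',
    entry_point='gym.envs.toy_text:FrozenLakeEnv',
    kwargs={'map_name': '4x4', 'is_slippery': False},
    max_episode_steps=100,
)

env = gym.make('FrozenLakeNotSlippery-v0')
```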

This is a pretty straightforward example for getting a grasp on the update rule as well as how the gym environments work.
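If you have not used gym before, the basic interaction loop looks like this (classic gym 0.x API, where reset() returns an observation and step() returns four values; the newer gymnasium API differs slightly):

```python
import gym

env = gym.make('FrozenLake-v0')
state = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy, just to show the API
    next_state, reward, done, info = env.step(action)
    state = next_state
env.close()
```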

Continuous Environments: This section is where I introduce an environment whose observation space is continuous, so a Q-table over raw states can’t be held in memory. This requires us to “discretize” the observations. I go through some steps that show the range of values for each observable variable, then chunk those ranges into buckets to trim the state space down to something manageable.
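Here is a sketch of the idea using np.digitize. The bin count and clipping ranges below are my own illustrative choices (the velocity terms are technically unbounded), not the values the notebook derives:

```python
import numpy as np

n_bins = 10  # buckets per observation dimension (illustrative)

# CartPole observations: cart position, cart velocity, pole angle,
# pole tip velocity. These ranges are assumptions, not the notebook's.
lows  = np.array([-2.4, -3.0, -0.21, -3.5])
highs = np.array([ 2.4,  3.0,  0.21,  3.5])
edges = [np.linspace(lows[i], highs[i], n_bins - 1) for i in range(4)]

def discretize(obs):
    # np.digitize maps each value to a bucket index 0..n_bins-1,
    # so the returned tuple can index directly into a Q-table.
    return tuple(int(np.digitize(obs[i], edges[i])) for i in range(4))
```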

Finally, I put everything together and code up the algorithm with fairly good results.
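As a rough picture of what “everything together” looks like, here is a condensed epsilon-greedy training loop. It reuses the discretize helper and n_bins from the sketch above, and the hyperparameter values and episode count are placeholders, not my actual solution:

```python
import gym
import numpy as np

env = gym.make('CartPole-v0')
# Q-table indexed by the discretized state tuple plus the action.
Q = np.zeros((n_bins,) * 4 + (env.action_space.n,))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(5000):
    state = discretize(env.reset())
    done = False
    while not done:
        # Epsilon-greedy: explore with probability epsilon, else act greedily.
        if np.random.random() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        obs, reward, done, _ = env.step(action)
        next_state = discretize(obs)
        target = reward if done else reward + gamma * np.max(Q[next_state])
        Q[state + (action,)] += alpha * (target - Q[state + (action,)])
        state = next_state
```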

Please download the notebook and give it a try. I even challenge you at the end to beat my solution in fewer iterations.

Open in Google Colab: 02-QLearning.ipynb
