Q-Learning Algorithm: From Explanation to Implementation

In today's post, I will show you how to implement the Q-Learning algorithm. But before that, I will explain the idea behind Q-Learning and its limitations. Please make sure you have some Reinforcement Learning (RL) basics; otherwise, check my previous post about the intuition and the key math behind RL.

Let's first recall some definitions and equations that we need to implement the Q-Learning algorithm.

In RL, we have an environment in which we want to learn to act. To do that, we build an agent that interacts with the environment through a trial-and-error process. At each time step t, the agent is in a certain state s_t and chooses an action a_t to perform. The environment runs the selected action and returns a reward to the agent: the higher the reward, the better the action. The environment also tells the agent whether the episode is done or not. So an episode can be represented as a sequence of states, actions and rewards:

$$S_0, A_0, R_1, S_1, A_1, R_2, \dots, S_{T-1}, A_{T-1}, R_T$$
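To make this loop concrete, here is a minimal sketch of the interaction using the gym library we will use later in this post; the agent here simply acts at random:

import gym

# Build the environment and run one episode with a random agent.
env = gym.make("FrozenLake-v0")
state = env.reset()
done = False
while not done:
    action = env.action_space.sample()                 # choose an action (at random here)
    next_state, reward, done, info = env.step(action)  # the environment responds
    state = next_state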

The goal of the agent is to maximize the total reward it gets from the environment. The function to maximize is called the expected discounted return, which we denote G:

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$$

where $\gamma \in [0, 1]$ is the discount factor.
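For example, with a discount factor $\gamma = 0.9$ and rewards $R_1 = 1$, $R_2 = 0$ and $R_3 = 2$, the return from $t = 0$ is $G_0 = 1 + 0.9 \cdot 0 + 0.9^2 \cdot 2 = 2.62$.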

To do so, the agent needs to find an optimal policy 𝜋, which maps each state to a probability distribution over actions.

Under the optimal policy, the Bellman Optimality Equation is satisfied:

$$q_*(s, a) = \mathbb{E}\left[ R_{t+1} + \gamma \max_{a'} q_*(S_{t+1}, a') \,\middle|\, S_t = s, A_t = a \right]$$

where $q_*$ is the optimal Action-Value function, also called the Q-Value function.

All these functions are explained in my previous post.

In the Q-Learning algorithm, the goal is to iteratively learn the optimal Q-Value function using the Bellman Optimality Equation. To do so, we store all the Q-values in a table that we update at each time step using the Q-Learning iteration:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \right]$$

where α is the learning rate, an important hyperparameter that we need to tune since it controls the convergence.
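In code, a single update step looks like the following minimal sketch; the concrete values below are toy assumptions for illustration:

import numpy as np

# Toy setup for illustration only (these values are assumptions, not from the post)
Q_table = np.zeros((16, 4))            # 16 states, 4 actions
state, action = 0, 1                   # current state and chosen action
reward, next_state = 0.0, 4            # what the environment returned
alpha, gamma = 0.1, 0.99               # learning rate and discount factor

# One Q-Learning update: move Q(s, a) towards the Bellman target
td_target = reward + gamma * np.max(Q_table[next_state, :])
Q_table[state, action] += alpha * (td_target - Q_table[state, action])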

Now we could start implementing the Q-Learning algorithm, but first we need to talk about the exploration-exploitation trade-off. Why? In the beginning, the agent has no idea about the environment: it should explore new things rather than exploit knowledge it does not yet have. As the time steps go by, the agent gathers more and more information about how the environment works, and it should then exploit its knowledge more and explore less. If we skip this important step, the Q-Value function may converge to a local optimum which, most of the time, is far from the optimal Q-Value function. To handle this, we use a threshold ε that decays every episode following an exponential decay formula. At every time step t, we sample a variable uniformly over [0,1]: if the sample is smaller than the threshold, the agent explores the environment; otherwise, it exploits its knowledge.

$$\varepsilon_t = N_0 \, e^{-\lambda t}$$

where $N_0$ is the initial value, $\lambda$ is a constant called the decay constant, and $t$ is the episode index.

Below is an example of the exponential decay:

(Figure: the exploration threshold decaying exponentially over episodes.)
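A minimal sketch that produces such a curve, assuming an initial value N_0 = 1 and a decay constant λ = 0.001 (illustrative choices):

import numpy as np
import matplotlib.pyplot as plt

N_0 = 1.0          # initial exploration threshold (assumed value)
decay = 0.001      # decay constant lambda (assumed value)
episodes = np.arange(10000)
threshold = N_0 * np.exp(-decay * episodes)

plt.plot(episodes, threshold)
plt.xlabel("Episode")
plt.ylabel("Exploration threshold")
plt.show()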

Alright, now we can start coding. Here, we will use the FrozenLake environment from the gym Python library, which provides many environments, including Atari games and CartPole.

The FrozenLake environment consists of a 4 by 4 grid representing a frozen lake. The agent always starts from state 0, the top-left cell [0,0] of the grid, and its goal is to reach state 15, the bottom-right cell [3,3]. On its way, it walks over frozen tiles but can fall into a hole; if it falls, the episode ends. The reward is 1 when the agent reaches the goal and 0 otherwise.

(Figure: the FrozenLake 4 × 4 grid.)

First, we import the needed libraries: NumPy for creating and updating the Q-table, and gym for the FrozenLake environment.

import numpy as np
import gym

Then, we instantiate our environment and get the sizes of its state and action spaces.

env = gym.make("FrozenLake-v0")
n_observations = env.observation_space.n
n_actions = env.action_space.n

We then create the Q-table and initialize all its values to 0.

#Initialize the Q-table to 0
Q_table = np.zeros((n_observations,n_actions))
print(Q_table)
The printed Q-table is a 16 × 4 array of zeros: one row per state and one column per action.

We define the different parameters and hyperparameters we talked about earlier in this post.
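The exact values below are illustrative and worth tuning:

# Training length
n_episodes = 10000        # number of training episodes
max_iter_episode = 100    # maximum steps per episode

# Exploration threshold and its exponential decay
exploration_proba = 1.0                # initial threshold N_0
exploration_decreasing_decay = 0.001   # decay constant lambda
min_exploration_proba = 0.01           # floor so the agent never stops exploring

# Q-Learning hyperparameters
gamma = 0.99   # discount factor
lr = 0.1       # learning rate alpha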

To evaluate the agent's training, we will store the total reward it gets from the environment in each episode in a list, which we will use after training is finished.
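A simple list is enough:

rewards_per_episode = []   # total reward collected in each episode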

Now let's get to the main loop, where the whole training process happens.

Please read all the comments to follow the algorithm.
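Here is a minimal version of the training loop, a sketch that reuses the Q-table and the hyperparameters defined above (the variable names are mine, not canonical):

for e in range(n_episodes):
    # Reset the environment at the start of each episode
    current_state = env.reset()
    done = False
    total_episode_reward = 0

    for i in range(max_iter_episode):
        # Exploration-exploitation trade-off: sample uniformly over [0,1]
        # and explore when the sample falls below the current threshold.
        if np.random.uniform(0, 1) < exploration_proba:
            action = env.action_space.sample()              # explore
        else:
            action = np.argmax(Q_table[current_state, :])   # exploit

        # The environment runs the action and returns the next state,
        # a reward, and whether the episode is done.
        next_state, reward, done, _ = env.step(action)

        # Q-Learning iteration: move Q(s, a) towards the Bellman target.
        td_target = reward + gamma * np.max(Q_table[next_state, :])
        Q_table[current_state, action] += lr * (td_target - Q_table[current_state, action])

        total_episode_reward += reward
        if done:
            break
        current_state = next_state

    # Decay the exploration threshold exponentially after each episode.
    exploration_proba = max(min_exploration_proba,
                            np.exp(-exploration_decreasing_decay * e))
    rewards_per_episode.append(total_episode_reward)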

Once our agent is trained, we will check its performance using the rewards-per-episode list, by evaluating the mean reward over every block of 1000 episodes.
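A simple way to do this, assuming n_episodes = 10000 as defined above:

# Mean reward over each block of 1000 episodes
print("Mean reward per thousand episodes")
for i in range(10):
    block = rewards_per_episode[1000 * i : 1000 * (i + 1)]
    print(f"{(i + 1) * 1000}: {np.mean(block):.3f}")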

(Output: the mean reward per thousand episodes over training.)

As we can see, the agent's performance is very poor at the beginning, but it improves steadily over the course of training.

The Q-Learning algorithm is a very efficient way for an agent to learn how an environment works. However, when the state space, the action space, or both are continuous, it becomes impossible to store all the Q-values in a table, because doing so would require a huge amount of memory. The agent would also need many more episodes to learn about the environment. As a solution, we can use a Deep Neural Network (DNN) to approximate the Q-Value function, since DNNs are known for their efficiency at approximating functions. This approach is called a Deep Q-Network (DQN), and it will be the topic of my next post.

I hope you understood the Q-Learning algorithm and enjoyed this post.

Thank you!
