In [1]:
%matplotlib inline

Reinforcement Learning (DQN) Tutorial

Based on the tutorial by:

Author: Adam Paszke https://github.com/apaszke

This tutorial shows how to use PyTorch to train a Deep Q-Learning (DQN) agent on the CartPole-v1 task from the OpenAI Gym https://gym.openai.com/

Task

You can find an official leaderboard with various algorithms and visualizations at the Gym website https://gym.openai.com/envs/CartPole-v0

The player has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright.

In this task, rewards are:

  • +1 for every incremental timestep
  • and the environment terminates if
    • the pole falls over too far
    • or the cart moves more than 2.4 units away from the center.

This means that better-performing runs last longer, accumulating a larger return.

Neural networks can solve the task purely by looking at the scene.

  • we'll use a patch of the screen centered on the cart as the observation of the current state
  • our actions are move left or move right

Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image.
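As a rough preview of that representation, here is a minimal sketch using two hypothetical screen-patch tensors (the real patches come from the get_screen helper defined later in the notebook):

import torch

# Two hypothetical consecutive screen patches in BCHW layout (sizes are only illustrative).
prev_patch = torch.zeros(1, 3, 40, 90)
curr_patch = torch.rand(1, 3, 40, 90)

# The state fed to the agent is the difference image, which encodes the motion between frames.
state = curr_patch - prev_patch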

Packages

First, let's import the needed packages. We need gym https://gym.openai.com/docs for the environment (install it with pip install gym). We'll also use the following from PyTorch:

  • neural networks (torch.nn)
  • optimization (torch.optim)
  • automatic differentiation (torch.autograd)
  • utilities for vision tasks (torchvision - a separate package https://github.com/pytorch/vision).
In [2]:
!pip3 install gym[classic_control]
Requirement already satisfied: gym[classic_control] in /opt/conda/lib/python3.7/site-packages (0.26.2)
Requirement already satisfied: numpy>=1.18.0 in /opt/conda/lib/python3.7/site-packages (from gym[classic_control]) (1.21.6)
Requirement already satisfied: importlib-metadata>=4.8.0 in /opt/conda/lib/python3.7/site-packages (from gym[classic_control]) (4.13.0)
Requirement already satisfied: gym-notices>=0.0.4 in /opt/conda/lib/python3.7/site-packages (from gym[classic_control]) (0.0.8)
Requirement already satisfied: cloudpickle>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from gym[classic_control]) (2.1.0)
Collecting pygame==2.1.0
  Downloading pygame-2.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.3/18.3 MB 25.4 MB/s eta 0:00:00
Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata>=4.8.0->gym[classic_control]) (3.8.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata>=4.8.0->gym[classic_control]) (4.1.1)
Installing collected packages: pygame
Successfully installed pygame-2.1.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
In [3]:
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple, deque
from itertools import count
from PIL import Image

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T

# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

To save an episode as a GIF and display it later

In [4]:
import imageio
import os
from IPython.display import HTML

def save_frames_as_gif(frames, path='./', filename='gym_animation.gif'):
    """Takes a list of frames (each frame can be generated with the `env.render()` function from OpenAI gym)
    and converts it into GIF, and saves it to the specified location.
    Code adapted from this gist: https://gist.github.com/botforge/64cbb71780e6208172bbf03cd9293553
    Args:
        frames (list): A list of frames generated with the env.render() function
        path (str, optional): The folder in which to save the generated GIF. Defaults to './'.
        filename (str, optional): The target filename. Defaults to 'gym_animation.gif'.
    """
    imageio.mimwrite(os.path.join(path, filename), frames, fps=15)
In [5]:
# setup the environment
env = gym.make('CartPole-v1', render_mode='rgb_array')
In [6]:
env.reset()
frame = env.render()
plt.imshow(frame)
plt.grid(False)
In [7]:
frames = []
env.reset()
total_reward = 0
for i in range(100):
    action = env.action_space.sample()
    next_state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
    frame = env.render()
    frames.append(frame)
    if done:
        break

print("Game terminated after", len(frames), " steps with reward ", total_reward)
save_frames_as_gif(frames, path='./', filename='random_agent.gif')
Game terminated after 12  steps with reward  12.0
In [8]:
HTML('<img src="./random_agent.gif">')
Out[8]:

Let's compute the average reward of the random agent

In [9]:
sum_reward=0
for j in range(100):
    env.reset()
    frames = []
    total_reward = 0
    for i in range(500):
        action = env.action_space.sample()
        pseudo_state, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        #reward = torch.tensor([reward], device=device)
        total_reward += reward

        frame = env.render()
        frames.append(frame)

        #print(i, action.item())
        if done:
            break

    print("Game ", j , " terminated after", len(frames), "steps with reward", total_reward)
    sum_reward += total_reward
print("Average reward", sum_reward/100)
Game  0  terminated after 29 steps with reward 29.0
Game  1  terminated after 12 steps with reward 12.0
Game  2  terminated after 11 steps with reward 11.0
Game  3  terminated after 18 steps with reward 18.0
Game  4  terminated after 9 steps with reward 9.0
Game  5  terminated after 24 steps with reward 24.0
Game  6  terminated after 29 steps with reward 29.0
Game  7  terminated after 33 steps with reward 33.0
Game  8  terminated after 13 steps with reward 13.0
Game  9  terminated after 16 steps with reward 16.0
Game  10  terminated after 14 steps with reward 14.0
Game  11  terminated after 16 steps with reward 16.0
Game  12  terminated after 34 steps with reward 34.0
Game  13  terminated after 19 steps with reward 19.0
Game  14  terminated after 10 steps with reward 10.0
Game  15  terminated after 20 steps with reward 20.0
Game  16  terminated after 31 steps with reward 31.0
Game  17  terminated after 27 steps with reward 27.0
Game  18  terminated after 12 steps with reward 12.0
Game  19  terminated after 28 steps with reward 28.0
Game  20  terminated after 21 steps with reward 21.0
Game  21  terminated after 26 steps with reward 26.0
Game  22  terminated after 22 steps with reward 22.0
Game  23  terminated after 20 steps with reward 20.0
Game  24  terminated after 24 steps with reward 24.0
Game  25  terminated after 9 steps with reward 9.0
Game  26  terminated after 14 steps with reward 14.0
Game  27  terminated after 38 steps with reward 38.0
Game  28  terminated after 27 steps with reward 27.0
Game  29  terminated after 16 steps with reward 16.0
Game  30  terminated after 22 steps with reward 22.0
Game  31  terminated after 16 steps with reward 16.0
Game  32  terminated after 25 steps with reward 25.0
Game  33  terminated after 24 steps with reward 24.0
Game  34  terminated after 11 steps with reward 11.0
Game  35  terminated after 11 steps with reward 11.0
Game  36  terminated after 68 steps with reward 68.0
Game  37  terminated after 18 steps with reward 18.0
Game  38  terminated after 10 steps with reward 10.0
Game  39  terminated after 33 steps with reward 33.0
Game  40  terminated after 15 steps with reward 15.0
Game  41  terminated after 26 steps with reward 26.0
Game  42  terminated after 52 steps with reward 52.0
Game  43  terminated after 41 steps with reward 41.0
Game  44  terminated after 14 steps with reward 14.0
Game  45  terminated after 15 steps with reward 15.0
Game  46  terminated after 17 steps with reward 17.0
Game  47  terminated after 30 steps with reward 30.0
Game  48  terminated after 13 steps with reward 13.0
Game  49  terminated after 16 steps with reward 16.0
Game  50  terminated after 17 steps with reward 17.0
Game  51  terminated after 11 steps with reward 11.0
Game  52  terminated after 11 steps with reward 11.0
Game  53  terminated after 59 steps with reward 59.0
Game  54  terminated after 13 steps with reward 13.0
Game  55  terminated after 18 steps with reward 18.0
Game  56  terminated after 12 steps with reward 12.0
Game  57  terminated after 61 steps with reward 61.0
Game  58  terminated after 32 steps with reward 32.0
Game  59  terminated after 22 steps with reward 22.0
Game  60  terminated after 57 steps with reward 57.0
Game  61  terminated after 18 steps with reward 18.0
Game  62  terminated after 18 steps with reward 18.0
Game  63  terminated after 19 steps with reward 19.0
Game  64  terminated after 39 steps with reward 39.0
Game  65  terminated after 19 steps with reward 19.0
Game  66  terminated after 13 steps with reward 13.0
Game  67  terminated after 14 steps with reward 14.0
Game  68  terminated after 13 steps with reward 13.0
Game  69  terminated after 15 steps with reward 15.0
Game  70  terminated after 9 steps with reward 9.0
Game  71  terminated after 16 steps with reward 16.0
Game  72  terminated after 17 steps with reward 17.0
Game  73  terminated after 37 steps with reward 37.0
Game  74  terminated after 25 steps with reward 25.0
Game  75  terminated after 12 steps with reward 12.0
Game  76  terminated after 17 steps with reward 17.0
Game  77  terminated after 20 steps with reward 20.0
Game  78  terminated after 17 steps with reward 17.0
Game  79  terminated after 24 steps with reward 24.0
Game  80  terminated after 17 steps with reward 17.0
Game  81  terminated after 24 steps with reward 24.0
Game  82  terminated after 17 steps with reward 17.0
Game  83  terminated after 29 steps with reward 29.0
Game  84  terminated after 39 steps with reward 39.0
Game  85  terminated after 22 steps with reward 22.0
Game  86  terminated after 13 steps with reward 13.0
Game  87  terminated after 41 steps with reward 41.0
Game  88  terminated after 30 steps with reward 30.0
Game  89  terminated after 15 steps with reward 15.0
Game  90  terminated after 94 steps with reward 94.0
Game  91  terminated after 11 steps with reward 11.0
Game  92  terminated after 14 steps with reward 14.0
Game  93  terminated after 12 steps with reward 12.0
Game  94  terminated after 27 steps with reward 27.0
Game  95  terminated after 35 steps with reward 35.0
Game  96  terminated after 19 steps with reward 19.0
Game  97  terminated after 32 steps with reward 32.0
Game  98  terminated after 29 steps with reward 29.0
Game  99  terminated after 17 steps with reward 17.0
Average reward 23.11

Replay Memory

We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.

For this, we're going to need two classes:

  • Transition - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the screen difference image as described later on.
  • ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.
In [10]:
# the structure of the transition that we store
Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))

# stores the Experience Replay buffer
class ReplayMemory(object):

    def __init__(self, capacity):
        self.cap = capacity
        self.memory = deque([],maxlen=capacity)

    def push(self, *args):
        self.memory.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
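A quick, hypothetical usage sketch of ReplayMemory, using placeholder tensors just to illustrate push and sample:

# Illustrative only: fill a small buffer with placeholder transitions and sample a batch.
demo_memory = ReplayMemory(1000)
for _ in range(5):
    s = torch.zeros(1, 3, 40, 90)               # placeholder state
    a = torch.tensor([[0]], dtype=torch.long)   # placeholder action
    s_next = torch.zeros(1, 3, 40, 90)          # placeholder next state
    r = torch.tensor([1.0])                     # placeholder reward
    demo_memory.push(s, a, s_next, r)

batch = demo_memory.sample(3)                   # a list of 3 random Transition tuples
print(len(demo_memory), batch[0].reward)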

Now, let's define our model. But first, let's quickly recap what a DQN is.

DQN algorithm

Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.

Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t-t_0} r_t$, where $R_{t_0}$ is also known as the return. The discount, $\gamma$, should be a constant between 0 and 1 that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about.
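For intuition, a tiny worked example: with a reward of +1 per step (as in CartPole), the return is a geometric sum, so longer episodes yield larger returns.

# Discounted return for a 100-step episode of +1 rewards (gamma matches the value used later).
gamma = 0.999
rewards = [1.0] * 100
R = sum(gamma ** t * r for t, r in enumerate(rewards))
print(R)   # roughly 95.2, versus 100 undiscounted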

The main idea behind Q-learning is that if we had a function $Q: State \times Action \to \mathbb{R}$ that could tell us what our return would be if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:

$$\pi(s) = \arg\max_a Q(s, a) \tag{1}$$

However, we don't know everything about the world, so we don't have access to Q. But, since neural networks are universal function approximators, we can simply create one and train it to resemble Q.
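In code, this greedy policy is just an argmax over the network's outputs; a minimal sketch with a hypothetical batch of Q-values for the two CartPole actions:

# Hypothetical Q-values for one state and the two actions (left, right).
q_values = torch.tensor([[0.3, 0.7]])
greedy_action = q_values.max(1)[1].view(1, 1)   # index of the largest Q-value, here 1 (right)
print(greedy_action)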

For our training update rule, we'll use the fact that every $Q$ function for some policy obeys the Bellman equation:

$$Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s')) \tag{2}$$

The difference between the two sides of the equality is known as the temporal difference error, δ:

$$\delta = Q(s, a) - \left(r + \gamma \max_a Q(s', a)\right) \tag{3}$$

To minimise this error, we will use the Smooth L1 loss, also known as the Huber loss https://en.wikipedia.org/wiki/Huber_loss. The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of $Q$ are very noisy. We calculate this over a batch of transitions, $B$, sampled from the replay memory:

$$\mathcal{L} = \frac{1}{|B|} \sum_{(s, a, s', r) \in B} \mathcal{L}(\delta) \tag{4}$$

$$\text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}\delta^2 & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases} \tag{5}$$
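A minimal sketch comparing this loss to the mean squared error on small and large errors, using torch.nn.SmoothL1Loss (which implements the piecewise form above with its default threshold of 1):

huber = nn.SmoothL1Loss()
mse = nn.MSELoss()

pred = torch.tensor([0.0, 0.0])
small = torch.tensor([0.5, -0.5])     # small errors: Huber ~ 0.5 * delta^2
large = torch.tensor([10.0, -10.0])   # large errors: Huber grows linearly, MSE quadratically

print(huber(pred, small).item(), mse(pred, small).item())   # 0.125 vs 0.25
print(huber(pred, large).item(), mse(pred, large).item())   # 9.5 vs 100.0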

Q-network

Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing $Q(s, \mathrm{left})$ and $Q(s, \mathrm{right})$ (where $s$ is the input to the network). In effect, the network is trying to predict the expected return of taking each action given the current input.

In [11]:
class DQN(nn.Module):

    def __init__(self, h, w, output_size):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
        self.bn3 = nn.BatchNorm2d(32)

        # Number of Linear input connections depends on output of conv2d layers
        # and therefore the input image size, so compute it.
        def conv2d_size_out(size, kernel_size = 5, stride = 2):
            return (size - (kernel_size - 1) - 1) // stride  + 1
        
        convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
        convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
        linear_input_size = convw * convh * 32
        
        self.lin1 = nn.Linear(linear_input_size, 50)
        self.lin2 = nn.Linear(50, output_size)

    # Called with either one element to determine next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
    def forward(self, x):
        x = x.to(device)
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        x = F.relu(self.lin1(x.view(x.size(0), -1)))
        return self.lin2(x)
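A quick shape sanity check, assuming an illustrative 40x90 input patch (close to what get_screen produces later):

# Build the network for a hypothetical 40x90 patch and run a dummy forward pass.
demo_net = DQN(h=40, w=90, output_size=2).to(device)
demo_net.eval()                                # eval mode so BatchNorm accepts a batch of one
with torch.no_grad():
    out = demo_net(torch.rand(1, 3, 40, 90))
print(out.shape)                               # torch.Size([1, 2]): one Q-value per action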

Preprocess the Input

The input image rendered by the environment is larger than necessary, and processing it directly would be more expensive, so we trim it down.

The code below provides utilities for extracting and processing rendered images from the environment. It uses the torchvision package, which makes it easy to compose image transforms. Once you run the cell, it will display an example of the patch it extracts.

In [12]:
resize = T.Compose([T.ToPILImage(),
                    T.Resize(40, interpolation=Image.CUBIC),
                    T.ToTensor()])


def get_image_center(screen_width): 
    world_width = env.x_threshold * 2
    scale = screen_width / world_width
    return int(env.state[0] * scale + screen_width / 2.0)  # MIDDLE OF CART

def get_screen():
    # Returned screen requested by gym is 400x600x3, but is sometimes larger
    # such as 800x1200x3. Transpose it into torch order (CHW).
    screen = env.render().transpose((2, 0, 1))
    # Cart is in the lower half, so strip off the top and bottom of the screen
    _, screen_height, screen_width = screen.shape
    screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
    view_width = int(screen_width * 0.6)
    cart_location = get_image_center(screen_width)
    if cart_location < view_width // 2:
        slice_range = slice(view_width)
    elif cart_location > (screen_width - view_width // 2):
        slice_range = slice(-view_width, None)
    else:
        slice_range = slice(cart_location - view_width // 2,
                            cart_location + view_width // 2)
    # Strip off the edges, so that we have a square image centered on a cart
    screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
    # (this doesn't require a copy)
    screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
    screen = torch.from_numpy(screen)
    # Resize, and add a batch dimension (BCHW)
    return resize(screen).unsqueeze(0)


env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none')
#plt.imshow(env.render(mode='rgb_array'))
plt.title('Example of extracted screen')
plt.show()
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: DeprecationWarning: CUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
  
/opt/conda/lib/python3.7/site-packages/torchvision/transforms/transforms.py:333: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  "Argument interpolation should be of type InterpolationMode instead of int. "

Training

Hyperparameters and utilities

This cell instantiates our model and its optimizer, and defines some utilities:

  • select_action - will select an action according to an epsilon-greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly. The probability of choosing a random action starts at EPS_START and decays exponentially towards EPS_END; EPS_DECAY controls the rate of the decay (see the decay sketch after this list).
  • plot_durations - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The cell below only initializes the episode_durations list that the training loop appends to; a minimal sketch of the plotting helper follows that cell, and this notebook prints progress rather than plotting live.
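To see how the exploration probability falls off, here is a small sketch evaluating the threshold formula at a few step counts, using the constants set later in the notebook (the numbers are only illustrative):

# Epsilon-greedy threshold as a function of steps_done (same formula as in select_action).
EPS_START, EPS_END, EPS_DECAY = 0.9, 0.05, 200
for steps in (0, 200, 500, 1000, 2000):
    eps = EPS_END + (EPS_START - EPS_END) * math.exp(-1.0 * steps / EPS_DECAY)
    print(steps, round(eps, 3))   # 0.9, 0.363, 0.12, 0.056, 0.05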
In [13]:
def select_action(state, policy=None, train=True):
    global steps_done
    sample = random.random()
    eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1

    with torch.no_grad():
        # t.max(1) will return largest column value of each row.
        # second column on max result is index of where max element was
        # found, so we pick action with the larger expected reward.
        action = policy(state).max(1)[1].view(1, 1)
    
    if train:
        if sample > eps_threshold:
            return action
        else:
            return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)
    else:
        return action


episode_durations = []
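The plot_durations helper mentioned above is not defined in this notebook (training progress is printed instead); a minimal sketch of one, assuming matplotlib and the episode_durations list populated by the training loop, might look like this:

def plot_durations():
    """Sketch: plot episode durations plus a 100-episode moving average."""
    durations_t = torch.tensor(episode_durations, dtype=torch.float)
    plt.figure()
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(durations_t.numpy())
    if len(durations_t) >= 100:
        # Running mean over the last 100 episodes, padded with zeros at the start.
        means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
        means = torch.cat((torch.zeros(99), means))
        plt.plot(means.numpy())
    plt.show()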

Training loop

Finally, the code for training our model.

Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes $Q(s_t, a_t)$ and $V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our loss. By definition we set $V(s) = 0$ if $s$ is a terminal state. We also use a target network to compute $V(s_{t+1})$ for added stability. The target network has its weights kept frozen most of the time, but is updated with the policy network's weights every so often. This is usually a set number of steps, but we shall use episodes for simplicity.

In [14]:
def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
    # detailed explanation). This converts batch-array of Transitions
    # to Transition of batch-arrays.
    batch = Transition(*zip(*transitions))

    # Compute a mask of non-final states and concatenate the batch elements
    # (a final state would've been the one after which simulation ended)
    non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
                                          batch.next_state)), device=device, dtype=torch.bool)
    non_final_next_states = torch.cat([s for s in batch.next_state
                                                if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)

    # Compute Q(s_t, a) - the model computes Q(s_t), then we select the
    # columns of actions taken. These are the actions which would've been taken
    # for each batch state according to policy_net
    state_action_values = policy_net(state_batch).gather(1, action_batch)

    # Compute V(s_{t+1}) for all next states.
    # Expected values of actions for non_final_next_states are computed based
    # on the "older" target_net; selecting their best reward with max(1)[0].
    # This is merged based on the mask, such that we'll have either the expected
    # state value or 0 in case the state was final.
    next_state_values = torch.zeros(BATCH_SIZE, device=device)
    next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
    # Compute the expected Q values
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch

    # Compute Huber loss
    criterion = nn.SmoothL1Loss()
    loss = criterion(state_action_values, expected_state_action_values.unsqueeze(1))

    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    for param in policy_net.parameters():
        param.grad.data.clamp_(-1, 1)
    optimizer.step()

Below, you can find the main training loop. At the beginning we reset the environment and initialize the state Tensor. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.

Below, num_episodes is set relatively small. You should download the notebook and run a lot more episodes, such as 300+, for meaningful duration improvements.

In [15]:
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10

# Get screen size so that we can initialize layers correctly based on shape
# returned from AI gym. Typical dimensions at this point are close to 3x40x90
# which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = get_screen()
_, _, screen_height, screen_width = init_screen.shape

# Get number of actions from gym action space
n_actions = env.action_space.n

policy_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()

optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)

steps_done = 0
num_episodes = 200

for i_episode in range(num_episodes+1):
    # Initialize the environment and state
    env.reset()
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen
    for t in count():
        # Select and perform an action
        action = select_action(state, policy_net)
        next_state, reward, terminated, truncated, info = env.step(action.item())
        done = terminated or truncated
        reward = torch.tensor([reward], device=device)

        # Observe new state
        last_screen = current_screen
        current_screen = get_screen()
        if not done:
            next_state = current_screen - last_screen
        else:
            next_state = None

        # Store the transition in memory
        memory.push(state, action, next_state, reward)

        # Move to the next state
        state = next_state

        # Perform one step of the optimization (on the policy network)
        optimize_model()
        if done:
            episode_durations.append(t + 1)
            break
    
    # Update the target network, copying all weights and biases in DQN
    if i_episode % TARGET_UPDATE == 0:
        print("Completed Episode", i_episode)
        target_net.load_state_dict(policy_net.state_dict())
    
    if i_episode % 50 == 0:
        print("Playing a test game after episode ", i_episode)
        frames = []
        env.reset()
        last_screen = get_screen()
        current_screen = get_screen()
        state = current_screen - last_screen
        total_reward = 0
        for i in range(1000):
            if i == 0:
                action = env.action_space.sample()
            action = select_action(state, policy_net, train=False)
            pseudo_state, reward, terminated, truncated, info = env.step(action.item())
            done = terminated or truncated
            #reward = torch.tensor([reward], device=device)
            total_reward += reward

            # Observe new state
            last_screen = current_screen
            current_screen = get_screen()
            if not done:
                state = current_screen - last_screen
            else:
                break

            frame = env.render()
            frames.append(frame)
            if done:
                break

        print("Game terminated after", len(frames), "steps with reward", total_reward)

print('Complete')
env.render()
env.close()
Completed Episode 0
Playing a test game after episode  0
Game terminated after 23 steps with reward 24.0
Completed Episode 10
Completed Episode 20
Completed Episode 30
Completed Episode 40
Completed Episode 50
Playing a test game after episode  50
Game terminated after 17 steps with reward 18.0
Completed Episode 60
Completed Episode 70
Completed Episode 80
Completed Episode 90
Completed Episode 100
Playing a test game after episode  100
Game terminated after 15 steps with reward 16.0
Completed Episode 110
Completed Episode 120
Completed Episode 130
Completed Episode 140
Completed Episode 150
Playing a test game after episode  150
Game terminated after 27 steps with reward 28.0
Completed Episode 160
Completed Episode 170
Completed Episode 180
Completed Episode 190
Completed Episode 200
Playing a test game after episode  200
Game terminated after 89 steps with reward 90.0
Complete

Play a game

In [16]:
sum_reward=0
for j in range(100):
    env.reset()
    frames = []
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen
    total_reward = 0
    for i in range(500):
        if i == 0:
            action = env.action_space.sample()
        action = select_action(state, policy_net, train=False)
        pseudo_state, reward, terminated, truncated, info = env.step(action.item())
        done = terminated or truncated
        #reward = torch.tensor([reward], device=device)
        total_reward += reward

        # Observe new state
        last_screen = current_screen
        current_screen = get_screen()
        if not done:
            state = current_screen - last_screen
        else:
            break

        frame = env.render()
        frames.append(frame)

        #print(i, action.item())
        if done:
            break

    print("Game ", j , " terminated after", len(frames), "steps with reward", total_reward)
    sum_reward += total_reward
print("Average reward", sum_reward/100)
Game  0  terminated after 88 steps with reward 89.0
Game  1  terminated after 80 steps with reward 81.0
Game  2  terminated after 84 steps with reward 85.0
Game  3  terminated after 87 steps with reward 88.0
Game  4  terminated after 93 steps with reward 94.0
Game  5  terminated after 107 steps with reward 108.0
Game  6  terminated after 88 steps with reward 89.0
Game  7  terminated after 83 steps with reward 84.0
Game  8  terminated after 97 steps with reward 98.0
Game  9  terminated after 76 steps with reward 77.0
Game  10  terminated after 85 steps with reward 86.0
Game  11  terminated after 91 steps with reward 92.0
Game  12  terminated after 102 steps with reward 103.0
Game  13  terminated after 86 steps with reward 87.0
Game  14  terminated after 103 steps with reward 104.0
Game  15  terminated after 92 steps with reward 93.0
Game  16  terminated after 93 steps with reward 94.0
Game  17  terminated after 116 steps with reward 117.0
Game  18  terminated after 97 steps with reward 98.0
Game  19  terminated after 100 steps with reward 101.0
Game  20  terminated after 101 steps with reward 102.0
Game  21  terminated after 88 steps with reward 89.0
Game  22  terminated after 95 steps with reward 96.0
Game  23  terminated after 96 steps with reward 97.0
Game  24  terminated after 101 steps with reward 102.0
Game  25  terminated after 86 steps with reward 87.0
Game  26  terminated after 88 steps with reward 89.0
Game  27  terminated after 88 steps with reward 89.0
Game  28  terminated after 87 steps with reward 88.0
Game  29  terminated after 102 steps with reward 103.0
Game  30  terminated after 94 steps with reward 95.0
Game  31  terminated after 82 steps with reward 83.0
Game  32  terminated after 108 steps with reward 109.0
Game  33  terminated after 97 steps with reward 98.0
Game  34  terminated after 103 steps with reward 104.0
Game  35  terminated after 79 steps with reward 80.0
Game  36  terminated after 47 steps with reward 48.0
Game  37  terminated after 92 steps with reward 93.0
Game  38  terminated after 110 steps with reward 111.0
Game  39  terminated after 82 steps with reward 83.0
Game  40  terminated after 80 steps with reward 81.0
Game  41  terminated after 103 steps with reward 104.0
Game  42  terminated after 101 steps with reward 102.0
Game  43  terminated after 86 steps with reward 87.0
Game  44  terminated after 89 steps with reward 90.0
Game  45  terminated after 94 steps with reward 95.0
Game  46  terminated after 85 steps with reward 86.0
Game  47  terminated after 97 steps with reward 98.0
Game  48  terminated after 88 steps with reward 89.0
Game  49  terminated after 82 steps with reward 83.0
Game  50  terminated after 100 steps with reward 101.0
Game  51  terminated after 90 steps with reward 91.0
Game  52  terminated after 93 steps with reward 94.0
Game  53  terminated after 83 steps with reward 84.0
Game  54  terminated after 86 steps with reward 87.0
Game  55  terminated after 100 steps with reward 101.0
Game  56  terminated after 93 steps with reward 94.0
Game  57  terminated after 101 steps with reward 102.0
Game  58  terminated after 82 steps with reward 83.0
Game  59  terminated after 91 steps with reward 92.0
Game  60  terminated after 106 steps with reward 107.0
Game  61  terminated after 94 steps with reward 95.0
Game  62  terminated after 82 steps with reward 83.0
Game  63  terminated after 91 steps with reward 92.0
Game  64  terminated after 105 steps with reward 106.0
Game  65  terminated after 97 steps with reward 98.0
Game  66  terminated after 102 steps with reward 103.0
Game  67  terminated after 94 steps with reward 95.0
Game  68  terminated after 103 steps with reward 104.0
Game  69  terminated after 88 steps with reward 89.0
Game  70  terminated after 87 steps with reward 88.0
Game  71  terminated after 86 steps with reward 87.0
Game  72  terminated after 93 steps with reward 94.0
Game  73  terminated after 113 steps with reward 114.0
Game  74  terminated after 95 steps with reward 96.0
Game  75  terminated after 102 steps with reward 103.0
Game  76  terminated after 93 steps with reward 94.0
Game  77  terminated after 83 steps with reward 84.0
Game  78  terminated after 98 steps with reward 99.0
Game  79  terminated after 90 steps with reward 91.0
Game  80  terminated after 87 steps with reward 88.0
Game  81  terminated after 95 steps with reward 96.0
Game  82  terminated after 85 steps with reward 86.0
Game  83  terminated after 92 steps with reward 93.0
Game  84  terminated after 98 steps with reward 99.0
Game  85  terminated after 100 steps with reward 101.0
Game  86  terminated after 89 steps with reward 90.0
Game  87  terminated after 92 steps with reward 93.0
Game  88  terminated after 83 steps with reward 84.0
Game  89  terminated after 92 steps with reward 93.0
Game  90  terminated after 83 steps with reward 84.0
Game  91  terminated after 87 steps with reward 88.0
Game  92  terminated after 107 steps with reward 108.0
Game  93  terminated after 94 steps with reward 95.0
Game  94  terminated after 86 steps with reward 87.0
Game  95  terminated after 88 steps with reward 89.0
Game  96  terminated after 142 steps with reward 143.0
Game  97  terminated after 104 steps with reward 105.0
Game  98  terminated after 117 steps with reward 118.0
Game  99  terminated after 38 steps with reward 39.0
Average reward 93.59
In [17]:
frames = []
env.reset()
total_reward = 0
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for i in range(500):
    if i == 0:
        action = env.action_space.sample()
    action = select_action(state, policy_net, train=False)
    pseudo_state, reward, terminated, truncated, info = env.step(action.item())
    done = terminated or truncated
    #reward = torch.tensor([reward], device=device)
    total_reward += reward

    # Observe new state
    last_screen = current_screen
    current_screen = get_screen()
    if not done:
        state = current_screen - last_screen
    else:
        break

    frame = env.render()
    frames.append(frame)

    if done:
        break

print("Game terminated after", len(frames), "steps with reward", total_reward)
save_frames_as_gif(frames, path='./', filename='RL_agent.gif')
Game terminated after 93 steps with reward 94.0
In [18]:
HTML('<img src="./RL_agent.gif">')
Out[18]:

To summarize the overall data flow (illustrated with a diagram in the original tutorial):

Actions are chosen either randomly or based on the policy, and the next step sample is obtained from the gym environment. We record the result in the replay memory and also run an optimization step on every iteration. The optimization step samples a random batch from the replay memory to train the policy network. The "older" target_net is also used in the optimization to compute the expected Q values; it is updated occasionally to keep it current.