In [1]:
%matplotlib inline

Reinforcement Learning (DQN) Tutorial

Based on the tutorial by:

Author: Adam Paszke https://github.com/apaszke

This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym https://gym.openai.com/

Task

You can find an official leaderboard with various algorithms and visualizations at the Gym website https://gym.openai.com/envs/CartPole-v0

The agent must decide between two actions - moving the cart left or right - so that the pole attached to it stays upright.

In this task, rewards are:

  • +1 for every incremental timestep
  • and the environment terminates if
    • the pole falls over too far
    • or the cart moves more than 2.4 units away from center.

This means better performing scenarios will run for longer, accumulating a larger return.

Neural networks can solve the task purely by looking at the scene.

  • we'll use a patch of the screen centered on the cart as the observation of the current state
  • our actions are move left or move right

Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This allows the agent to take the velocity of the pole into account from a single input image, as sketched below.
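
As a minimal illustrative sketch (not part of the original tutorial), the state encoding boils down to subtracting two consecutive screen patches; here the patches are dummy tensors, while the real ones will come from the get_screen() helper defined later in this notebook.

In [ ]:
import torch  # imported here only so this sketch is self-contained

# Dummy stand-ins for two consecutive screen patches (1 x C x H x W).
last_screen = torch.rand(1, 3, 40, 90)     # patch at time t-1
current_screen = torch.rand(1, 3, 40, 90)  # patch at time t

# Moving parts show up as non-zero pixels in the difference image.
state = current_screen - last_screen
print(state.shape)  # torch.Size([1, 3, 40, 90])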

Packages

First, let's import the needed packages. We need gym (https://gym.openai.com/docs) for the environment (install it with pip install gym). We'll also use the following from PyTorch:

  • neural networks (torch.nn)
  • optimization (torch.optim)
  • automatic differentiation (torch.autograd)
  • utilities for vision tasks (torchvision - a separate package https://github.com/pytorch/vision).
In [2]:
!pip install gym[atari] 
Requirement already satisfied: gym[atari] in /opt/conda/lib/python3.7/site-packages (0.21.0)
Requirement already satisfied: importlib-metadata>=4.8.1 in /opt/conda/lib/python3.7/site-packages (from gym[atari]) (4.8.2)
Requirement already satisfied: cloudpickle>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from gym[atari]) (2.0.0)
Requirement already satisfied: numpy>=1.18.0 in /opt/conda/lib/python3.7/site-packages (from gym[atari]) (1.19.5)
Collecting ale-py~=0.7.1
  Downloading ale_py-0.7.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.6 MB)
     |████████████████████████████████| 1.6 MB 517 kB/s
Requirement already satisfied: importlib-resources in /opt/conda/lib/python3.7/site-packages (from ale-py~=0.7.1->gym[atari]) (5.4.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata>=4.8.1->gym[atari]) (3.10.0.2)
Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata>=4.8.1->gym[atari]) (3.6.0)
Installing collected packages: ale-py
Successfully installed ale-py-0.7.3
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
In [3]:
!pip install pyglet==1.5.0
Collecting pyglet==1.5.0
  Downloading pyglet-1.5.0-py2.py3-none-any.whl (1.0 MB)
     |████████████████████████████████| 1.0 MB 517 kB/s
Requirement already satisfied: future in /opt/conda/lib/python3.7/site-packages (from pyglet==1.5.0) (0.18.2)
Installing collected packages: pyglet
Successfully installed pyglet-1.5.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
In [4]:
#!apt-get install python-opengl -y
#!pip install PyOpenGL 
#!pip install PyOpenGL_accelerate
!pip install pyvirtualdisplay
Collecting pyvirtualdisplay
  Downloading PyVirtualDisplay-2.2-py3-none-any.whl (15 kB)
Collecting EasyProcess
  Downloading EasyProcess-0.3-py2.py3-none-any.whl (7.9 kB)
Installing collected packages: EasyProcess, pyvirtualdisplay
Successfully installed EasyProcess-0.3 pyvirtualdisplay-2.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
In [5]:
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple, deque
from itertools import count
from PIL import Image

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T

# to display things
import os
from pyvirtualdisplay import Display
from matplotlib import animation , rc

display = Display(visible=0, size=(1400, 900))
display.start()
os.environ["DISPLAY"] = ":" + str(display.display) + "." + str(display._obj._screen)

# setup the environment
env = gym.make('CartPole-v0').unwrapped

# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display

plt.ion()

# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
/opt/conda/lib/python3.7/site-packages/ale_py/roms/utils.py:90: DeprecationWarning: SelectableGroups dict interface is deprecated. Use select.
  for external in metadata.entry_points().get(self.group, []):

Display the game environment

In [6]:
env.reset()
plt.imshow(env.render('rgb_array'))
plt.grid(False)
In [7]:
frame = []
env.reset()
total_reward = 0
for i in range(100):
    action = env.action_space.sample()
    state, reward, done, info = env.step(action)
    total_reward += reward
    img = plt.imshow(env.render('rgb_array'))
    frame.append([img])
    if done:
        break

print("Game terminated after", len(frame), " steps with reward ", total_reward)
Game terminated after 35  steps with reward  35.0
In [8]:
fig = plt.figure()
anim = animation.ArtistAnimation(fig, frame, interval=100, repeat_delay=1000, blit=True)
rc('animation', html='jshtml')
anim
Out[8]:
<Figure size 432x288 with 0 Axes>

Replay Memory

We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.

For this, we're going to need two classes:

  • Transition - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the screen difference image as described later on.
  • ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.
In [9]:
# the structure of the transition that we store
Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))

# stores the Experience Replay buffer
class ReplayMemory(object):

    def __init__(self, capacity):
        self.cap = capacity
        self.memory = deque([],maxlen=capacity)

    def push(self, *args):
        self.memory.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
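
As a quick illustrative check (a sketch, not part of the original tutorial), the buffer can be exercised with a few dummy transitions:

In [ ]:
# Sketch: push dummy transitions into a small buffer and sample a mini-batch.
demo_memory = ReplayMemory(100)
for _ in range(5):
    s = torch.zeros(1, 3, 40, 90)              # dummy "screen difference" state
    a = torch.tensor([[0]], dtype=torch.long)  # dummy action index
    demo_memory.push(s, a, s, torch.tensor([1.0]))

print(len(demo_memory))        # 5
batch = demo_memory.sample(3)  # a list of 3 Transition namedtuples
print(batch[0].reward)         # tensor([1.])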

Now, let's define our model. But first, let's quickly recap what a DQN is.

DQN algorithm

Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.

Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t-t_0} r_t$, where $R_{t_0}$ is also known as the return. The discount, $\gamma$, should be a constant between 0 and 1 that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about.
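
For intuition, here is a tiny sketch (with made-up rewards, not part of the original tutorial) of how the discounted return is computed:

In [ ]:
# Sketch: R_{t0} = sum over t of gamma^(t - t0) * r_t, for a short made-up episode.
rewards = [1.0, 1.0, 1.0, 1.0]  # hypothetical per-step rewards
gamma = 0.999                   # same discount as the GAMMA used later
R = sum(gamma ** t * r for t, r in enumerate(rewards))
print(R)  # just under 4.0, since gamma is close to 1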

The main idea behind Q-learning is that if we had a function $Q^*: \mathit{State} \times \mathit{Action} \to \mathbb{R}$ that could tell us what our return would be if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:

$$\pi^*(s) = \arg\max_a \, Q^*(s, a) \tag{1}$$

However, we don't know everything about the world, so we don't have access to $Q^*$. But, since neural networks are universal function approximators, we can simply create one and train it to resemble $Q^*$.

For our training update rule, we'll use the fact that every $Q$ function for some policy obeys the Bellman equation:

$$Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s')) \tag{2}$$

The difference between the two sides of the equality is known as the temporal difference error, $\delta$:

$$\delta = Q(s, a) - \left(r + \gamma \max_a Q(s', a)\right) \tag{3}$$

To minimise this error, we will use the Smooth L1 loss, also known as the Huber loss (https://en.wikipedia.org/wiki/Huber_loss). The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of $Q$ are very noisy. We calculate this over a batch of transitions, $B$, sampled from the replay memory:

$$\mathcal{L} = \frac{1}{|B|} \sum_{(s, a, s', r) \in B} \mathcal{L}(\delta) \tag{4}$$
$$\text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}\delta^2 & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases} \tag{5}$$
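
The snippet below is a small numeric sketch (with made-up values, not part of the original tutorial) of the temporal difference error and the Huber loss that optimize_model computes later:

In [ ]:
# Sketch: SmoothL1 (Huber) loss between Q(s, a) and the TD target r + gamma * max_a Q(s', a).
q_sa = torch.tensor([0.5, 2.0, -1.0])       # made-up Q(s, a) values from the policy network
td_target = torch.tensor([1.0, 0.0, -0.5])  # made-up targets from the target network
criterion = nn.SmoothL1Loss()
print(criterion(q_sa, td_target))  # quadratic for small errors, linear for large ones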

Q-network

Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing $Q(s, \mathrm{left})$ and $Q(s, \mathrm{right})$ (where $s$ is the input to the network). In effect, the network is trying to predict the expected return of taking each action given the current input.

In [10]:
class DQN(nn.Module):

    def __init__(self, h, w, outputs):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
        self.bn3 = nn.BatchNorm2d(32)

        # Number of Linear input connections depends on output of conv2d layers
        # and therefore the input image size, so compute it.
        def conv2d_size_out(size, kernel_size = 5, stride = 2):
            return (size - (kernel_size - 1) - 1) // stride  + 1
        
        convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
        convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
        linear_input_size = convw * convh * 32
        
        self.head = nn.Linear(linear_input_size, outputs)

    # Called with either one element to determine next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
    def forward(self, x):
        x = x.to(device)
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        return self.head(x.view(x.size(0), -1))
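
As a quick sanity check (a sketch, not part of the original tutorial), we can feed a dummy patch through the network and confirm that it returns one Q-value per action:

In [ ]:
# Sketch: verify the output shape for a dummy 40x90 patch and 2 actions.
demo_net = DQN(40, 90, 2).to(device)
demo_net.eval()  # use BatchNorm running statistics so a single sample is handled deterministically
dummy = torch.zeros(1, 3, 40, 90, device=device)
print(demo_net(dummy).shape)  # torch.Size([1, 2])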

Input extraction

The code below provides utilities for extracting and processing rendered images from the environment. It uses the torchvision package, which makes it easy to compose image transforms. Once you run the cell, it will display an example patch that it extracted.

In [11]:
resize = T.Compose([T.ToPILImage(),
                    T.Resize(40, interpolation=Image.CUBIC),
                    T.ToTensor()])


def get_cart_location(screen_width):
    world_width = env.x_threshold * 2
    scale = screen_width / world_width
    return int(env.state[0] * scale + screen_width / 2.0)  # MIDDLE OF CART

def get_screen():
    # Returned screen requested by gym is 400x600x3, but is sometimes larger
    # such as 800x1200x3. Transpose it into torch order (CHW).
    screen = env.render(mode='rgb_array').transpose((2, 0, 1))
    # Cart is in the lower half, so strip off the top and bottom of the screen
    _, screen_height, screen_width = screen.shape
    screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
    view_width = int(screen_width * 0.6)
    cart_location = get_cart_location(screen_width)
    if cart_location < view_width // 2:
        slice_range = slice(view_width)
    elif cart_location > (screen_width - view_width // 2):
        slice_range = slice(-view_width, None)
    else:
        slice_range = slice(cart_location - view_width // 2,
                            cart_location + view_width // 2)
    # Strip off the edges, so that we have a square image centered on a cart
    screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
    # (this doesn't require a copy)
    screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
    screen = torch.from_numpy(screen)
    # Resize, and add a batch dimension (BCHW)
    return resize(screen).unsqueeze(0)


env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none')
#plt.imshow(env.render(mode='rgb_array'))
plt.title('Example extracted screen')
plt.show()
/opt/conda/lib/python3.7/site-packages/torchvision/transforms/transforms.py:281: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  "Argument interpolation should be of type InterpolationMode instead of int. "

Training

Hyperparameters and utilities

This cell instantiates our model and its optimizer, and defines some utilities:

  • select_action - will select an action according to an epsilon-greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly at random. The probability of choosing a random action will start at EPS_START and will decay exponentially towards EPS_END. EPS_DECAY controls the rate of the decay (see the decay sketch after this list).
  • plot_durations - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode.
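
A short sketch (not part of the original tutorial) of how the exploration probability decays with steps_done; the constants mirror the hyperparameters set in the training cell further below (EPS_START=0.9, EPS_END=0.05, EPS_DECAY=200).

In [ ]:
# Sketch: the exponential epsilon decay used by select_action.
eps_start, eps_end, eps_decay = 0.9, 0.05, 200
steps = np.arange(0, 2000)
eps = eps_end + (eps_start - eps_end) * np.exp(-steps / eps_decay)

plt.figure()
plt.plot(steps, eps)
plt.xlabel('steps_done')
plt.ylabel('probability of a random action')
plt.show()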
In [12]:
def select_action(state, policy=None, train=True):
    global steps_done
    sample = random.random()
    eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1

    with torch.no_grad():
        # t.max(1) will return largest column value of each row.
        # second column on max result is index of where max element was
        # found, so we pick action with the larger expected reward.
        action = policy(state).max(1)[1].view(1, 1)
    
    if train:
        if sample > eps_threshold:
            return action
        else:
            return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)
    else:
        return action


episode_durations = []


def plot_durations():
    plt.figure(2)
    plt.clf()
    durations_t = torch.tensor(episode_durations, dtype=torch.float)
    plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(durations_t.numpy())
    # Take 100 episode averages and plot them too
    if len(durations_t) >= 100:
        means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
        means = torch.cat((torch.zeros(99), means))
        plt.plot(means.numpy())

    plt.pause(0.001)  # pause a bit so that plots are updated
    if is_ipython:
        display.clear_output(wait=True)
        display.display(plt.gcf())

Training loop

Finally, the code for training our model.

Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes $Q(s_t, a_t)$ and $V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our loss. By definition we set $V(s) = 0$ if $s$ is a terminal state. We also use a target network to compute $V(s_{t+1})$ for added stability. The target network has its weights kept frozen most of the time, but is updated with the policy network's weights every so often. This is usually a set number of steps, but we shall use episodes for simplicity.

In [13]:
def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
    # detailed explanation). This converts batch-array of Transitions
    # to Transition of batch-arrays.
    batch = Transition(*zip(*transitions))

    # Compute a mask of non-final states and concatenate the batch elements
    # (a final state would've been the one after which simulation ended)
    non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
                                          batch.next_state)), device=device, dtype=torch.bool)
    non_final_next_states = torch.cat([s for s in batch.next_state
                                                if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)

    # Compute Q(s_t, a) - the model computes Q(s_t), then we select the
    # columns of actions taken. These are the actions which would've been taken
    # for each batch state according to policy_net
    state_action_values = policy_net(state_batch).gather(1, action_batch)

    # Compute V(s_{t+1}) for all next states.
    # Expected values of actions for non_final_next_states are computed based
    # on the "older" target_net; selecting their best reward with max(1)[0].
    # This is merged based on the mask, such that we'll have either the expected
    # state value or 0 in case the state was final.
    next_state_values = torch.zeros(BATCH_SIZE, device=device)
    next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
    # Compute the expected Q values
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch

    # Compute Huber loss
    criterion = nn.SmoothL1Loss()
    loss = criterion(state_action_values, expected_state_action_values.unsqueeze(1))

    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    for param in policy_net.parameters():
        param.grad.data.clamp_(-1, 1)
    optimizer.step()

Below, you can find the main training loop. At the beginning we reset the environment and initialize the state Tensor. Then, we sample an action, execute it, observe the next screen and the reward (always +1), and optimize our model once. When the episode ends (our model fails), we restart the loop.

Below, num_episodes is set relatively small. You should download the notebook and run a lot more episodes, such as 300+, for meaningful duration improvements.

In [14]:
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10

# Get screen size so that we can initialize layers correctly based on shape
# returned from AI gym. Typical dimensions at this point are close to 3x40x90
# which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = get_screen()
_, _, screen_height, screen_width = init_screen.shape

# Get number of actions from gym action space
n_actions = env.action_space.n

policy_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()

optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)

steps_done = 0
num_episodes = 200

for i_episode in range(num_episodes+1):
    # Initialize the environment and state
    env.reset()
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen
    for t in count():
        # Select and perform an action
        action = select_action(state, policy_net)
        _, reward, done, _ = env.step(action.item())
        reward = torch.tensor([reward], device=device)

        # Observe new state
        last_screen = current_screen
        current_screen = get_screen()
        if not done:
            next_state = current_screen - last_screen
        else:
            next_state = None

        # Store the transition in memory
        memory.push(state, action, next_state, reward)

        # Move to the next state
        state = next_state

        # Perform one step of the optimization (on the policy network)
        optimize_model()
        if done:
            episode_durations.append(t + 1)
            #plot_durations()
            break
    
    # Update the target network, copying all weights and biases in DQN
    if i_episode % TARGET_UPDATE == 0:
        print("Completed Episode", i_episode)
        target_net.load_state_dict(policy_net.state_dict())
    
    if i_episode % 50 == 0:
        print("Playing a test game after episode ", i_episode)
        frame = []
        env.reset()
        last_screen = get_screen()
        current_screen = get_screen()
        state = current_screen - last_screen
        total_reward = 0
        for i in range(1000):
            action = select_action(state, policy_net, train=False)
            _, reward, done, _ = env.step(action.item())
            #reward = torch.tensor([reward], device=device)
            total_reward += reward

            # Observe new state
            last_screen = current_screen
            current_screen = get_screen()
            if not done:
                state = current_screen - last_screen
            else:
                break

            #img = plt.imshow(env.render('rgb_array'))
            #frame.append([img])
            if done:
                break

        print("Game terminated after", len(frame), "steps with reward", total_reward)

print('Complete')
env.render()
env.close()
plt.ioff()
plt.show()
Completed Episode 0
Playing a test game after episode  0
Game terminated after 0 steps with reward 14.0
Completed Episode 10
Completed Episode 20
Completed Episode 30
Completed Episode 40
Completed Episode 50
Playing a test game after episode  50
Game terminated after 0 steps with reward 15.0
Completed Episode 60
Completed Episode 70
Completed Episode 80
Completed Episode 90
Completed Episode 100
Playing a test game after episode  100
Game terminated after 0 steps with reward 66.0
Completed Episode 110
Completed Episode 120
Completed Episode 130
Completed Episode 140
Completed Episode 150
Playing a test game after episode  150
Game terminated after 0 steps with reward 38.0
Completed Episode 160
Completed Episode 170
Completed Episode 180
Completed Episode 190
Completed Episode 200
Playing a test game after episode  200
Game terminated after 0 steps with reward 160.0
Complete

Play a game

In [15]:
sum_reward=0
for j in range(100):
    env.reset()
    frame = []
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen
    total_reward = 0
    for i in range(500):
        action = select_action(state, policy_net, train=False)
        _, reward, done, _ = env.step(action.item())
        #reward = torch.tensor([reward], device=device)
        total_reward += reward

        # Observe new state
        last_screen = current_screen
        current_screen = get_screen()
        if not done:
            state = current_screen - last_screen
        else:
            break

        #img = plt.imshow(env.render('rgb_array'))
        #frame.append([img])

        #print(i, action.item())
        if done:
            break

    print("Game ", j , " terminated after", len(frame), "steps with reward", total_reward)
    sum_reward += total_reward
print("Average reward", sum_reward/100)
Game  0  terminated after 0 steps with reward 75.0
Game  1  terminated after 0 steps with reward 112.0
Game  2  terminated after 0 steps with reward 235.0
Game  3  terminated after 0 steps with reward 68.0
Game  4  terminated after 0 steps with reward 103.0
Game  5  terminated after 0 steps with reward 151.0
Game  6  terminated after 0 steps with reward 29.0
Game  7  terminated after 0 steps with reward 137.0
Game  8  terminated after 0 steps with reward 31.0
Game  9  terminated after 0 steps with reward 60.0
Game  10  terminated after 0 steps with reward 34.0
Game  11  terminated after 0 steps with reward 90.0
Game  12  terminated after 0 steps with reward 118.0
Game  13  terminated after 0 steps with reward 63.0
Game  14  terminated after 0 steps with reward 197.0
Game  15  terminated after 0 steps with reward 55.0
Game  16  terminated after 0 steps with reward 67.0
Game  17  terminated after 0 steps with reward 85.0
Game  18  terminated after 0 steps with reward 87.0
Game  19  terminated after 0 steps with reward 85.0
Game  20  terminated after 0 steps with reward 223.0
Game  21  terminated after 0 steps with reward 207.0
Game  22  terminated after 0 steps with reward 100.0
Game  23  terminated after 0 steps with reward 316.0
Game  24  terminated after 0 steps with reward 59.0
Game  25  terminated after 0 steps with reward 16.0
Game  26  terminated after 0 steps with reward 45.0
Game  27  terminated after 0 steps with reward 104.0
Game  28  terminated after 0 steps with reward 14.0
Game  29  terminated after 0 steps with reward 145.0
Game  30  terminated after 0 steps with reward 92.0
Game  31  terminated after 0 steps with reward 47.0
Game  32  terminated after 0 steps with reward 148.0
Game  33  terminated after 0 steps with reward 129.0
Game  34  terminated after 0 steps with reward 93.0
Game  35  terminated after 0 steps with reward 175.0
Game  36  terminated after 0 steps with reward 177.0
Game  37  terminated after 0 steps with reward 89.0
Game  38  terminated after 0 steps with reward 89.0
Game  39  terminated after 0 steps with reward 219.0
Game  40  terminated after 0 steps with reward 31.0
Game  41  terminated after 0 steps with reward 105.0
Game  42  terminated after 0 steps with reward 31.0
Game  43  terminated after 0 steps with reward 35.0
Game  44  terminated after 0 steps with reward 86.0
Game  45  terminated after 0 steps with reward 62.0
Game  46  terminated after 0 steps with reward 43.0
Game  47  terminated after 0 steps with reward 117.0
Game  48  terminated after 0 steps with reward 98.0
Game  49  terminated after 0 steps with reward 139.0
Game  50  terminated after 0 steps with reward 95.0
Game  51  terminated after 0 steps with reward 86.0
Game  52  terminated after 0 steps with reward 24.0
Game  53  terminated after 0 steps with reward 37.0
Game  54  terminated after 0 steps with reward 146.0
Game  55  terminated after 0 steps with reward 83.0
Game  56  terminated after 0 steps with reward 173.0
Game  57  terminated after 0 steps with reward 48.0
Game  58  terminated after 0 steps with reward 93.0
Game  59  terminated after 0 steps with reward 45.0
Game  60  terminated after 0 steps with reward 23.0
Game  61  terminated after 0 steps with reward 110.0
Game  62  terminated after 0 steps with reward 14.0
Game  63  terminated after 0 steps with reward 44.0
Game  64  terminated after 0 steps with reward 54.0
Game  65  terminated after 0 steps with reward 119.0
Game  66  terminated after 0 steps with reward 100.0
Game  67  terminated after 0 steps with reward 64.0
Game  68  terminated after 0 steps with reward 82.0
Game  69  terminated after 0 steps with reward 55.0
Game  70  terminated after 0 steps with reward 95.0
Game  71  terminated after 0 steps with reward 83.0
Game  72  terminated after 0 steps with reward 116.0
Game  73  terminated after 0 steps with reward 126.0
Game  74  terminated after 0 steps with reward 61.0
Game  75  terminated after 0 steps with reward 87.0
Game  76  terminated after 0 steps with reward 143.0
Game  77  terminated after 0 steps with reward 28.0
Game  78  terminated after 0 steps with reward 78.0
Game  79  terminated after 0 steps with reward 123.0
Game  80  terminated after 0 steps with reward 187.0
Game  81  terminated after 0 steps with reward 141.0
Game  82  terminated after 0 steps with reward 94.0
Game  83  terminated after 0 steps with reward 91.0
Game  84  terminated after 0 steps with reward 118.0
Game  85  terminated after 0 steps with reward 93.0
Game  86  terminated after 0 steps with reward 92.0
Game  87  terminated after 0 steps with reward 212.0
Game  88  terminated after 0 steps with reward 16.0
Game  89  terminated after 0 steps with reward 135.0
Game  90  terminated after 0 steps with reward 111.0
Game  91  terminated after 0 steps with reward 104.0
Game  92  terminated after 0 steps with reward 103.0
Game  93  terminated after 0 steps with reward 90.0
Game  94  terminated after 0 steps with reward 103.0
Game  95  terminated after 0 steps with reward 157.0
Game  96  terminated after 0 steps with reward 171.0
Game  97  terminated after 0 steps with reward 55.0
Game  98  terminated after 0 steps with reward 135.0
Game  99  terminated after 0 steps with reward 174.0
Average reward 99.33
In [ ]:
#fig = plt.figure()
#anim = animation.ArtistAnimation(fig, frame, interval=100, repeat_delay=1000, blit=True)
#rc('animation', html='jshtml')
#anim
In [ ]:
sum_reward=0
for j in range(100):
    env.reset()
    frame = []
    total_reward = 0
    for i in range(500):
        action = env.action_space.sample()
        _, reward, done, _ = env.step(action)
        total_reward += reward

        img = plt.imshow(env.render('rgb_array'))
        frame.append([img])

        #print(i, action.item())
        if done:
            break

    print("Game ", j , " terminated after", len(frame), "steps with reward", total_reward)
    sum_reward += total_reward
print("Average reward", sum_reward/100)
Game  0  terminated after 21 steps with reward 21.0
Game  1  terminated after 18 steps with reward 18.0
Game  2  terminated after 33 steps with reward 33.0
Game  3  terminated after 9 steps with reward 9.0
Game  4  terminated after 13 steps with reward 13.0
Game  5  terminated after 15 steps with reward 15.0
Game  6  terminated after 85 steps with reward 85.0
Game  7  terminated after 33 steps with reward 33.0
Game  8  terminated after 30 steps with reward 30.0
Game  9  terminated after 16 steps with reward 16.0
Game  10  terminated after 14 steps with reward 14.0
Game  11  terminated after 25 steps with reward 25.0
Game  12  terminated after 14 steps with reward 14.0
Game  13  terminated after 24 steps with reward 24.0
Game  14  terminated after 10 steps with reward 10.0
Game  15  terminated after 34 steps with reward 34.0
Game  16  terminated after 18 steps with reward 18.0
Game  17  terminated after 46 steps with reward 46.0
Game  18  terminated after 19 steps with reward 19.0
Game  19  terminated after 26 steps with reward 26.0
Game  20  terminated after 33 steps with reward 33.0
Game  21  terminated after 62 steps with reward 62.0
Game  22  terminated after 26 steps with reward 26.0
Game  23  terminated after 18 steps with reward 18.0
Game  24  terminated after 19 steps with reward 19.0
Game  25  terminated after 13 steps with reward 13.0
Game  26  terminated after 12 steps with reward 12.0
Game  27  terminated after 14 steps with reward 14.0
Game  28  terminated after 39 steps with reward 39.0
Game  29  terminated after 16 steps with reward 16.0
Game  30  terminated after 15 steps with reward 15.0
Game  31  terminated after 38 steps with reward 38.0
Game  32  terminated after 10 steps with reward 10.0
Game  33  terminated after 11 steps with reward 11.0
Game  34  terminated after 16 steps with reward 16.0
Game  35  terminated after 26 steps with reward 26.0
Game  36  terminated after 17 steps with reward 17.0
Game  37  terminated after 20 steps with reward 20.0
Game  38  terminated after 10 steps with reward 10.0
Game  39  terminated after 67 steps with reward 67.0
Game  40  terminated after 15 steps with reward 15.0
Game  41  terminated after 60 steps with reward 60.0
Game  42  terminated after 14 steps with reward 14.0
Game  43  terminated after 14 steps with reward 14.0
Game  44  terminated after 15 steps with reward 15.0
Game  45  terminated after 23 steps with reward 23.0
Game  46  terminated after 23 steps with reward 23.0
Game  47  terminated after 29 steps with reward 29.0
Game  48  terminated after 12 steps with reward 12.0
Game  49  terminated after 35 steps with reward 35.0
Game  50  terminated after 13 steps with reward 13.0
Game  51  terminated after 9 steps with reward 9.0
Game  52  terminated after 38 steps with reward 38.0
Game  53  terminated after 14 steps with reward 14.0
Game  54  terminated after 19 steps with reward 19.0
Game  55  terminated after 13 steps with reward 13.0
Game  56  terminated after 49 steps with reward 49.0
Game  57  terminated after 17 steps with reward 17.0
Game  58  terminated after 19 steps with reward 19.0
Game  59  terminated after 14 steps with reward 14.0
Game  60  terminated after 13 steps with reward 13.0
Game  61  terminated after 23 steps with reward 23.0
Game  62  terminated after 11 steps with reward 11.0
Game  63  terminated after 24 steps with reward 24.0
Game  64  terminated after 17 steps with reward 17.0
Game  65  terminated after 11 steps with reward 11.0
Game  66  terminated after 35 steps with reward 35.0
Game  67  terminated after 11 steps with reward 11.0
Game  68  terminated after 13 steps with reward 13.0
Game  69  terminated after 10 steps with reward 10.0
Game  70  terminated after 48 steps with reward 48.0
Game  71  terminated after 13 steps with reward 13.0
Game  72  terminated after 10 steps with reward 10.0
Game  73  terminated after 13 steps with reward 13.0
Game  74  terminated after 16 steps with reward 16.0
Game  75  terminated after 15 steps with reward 15.0
Game  76  terminated after 10 steps with reward 10.0
Game  77  terminated after 12 steps with reward 12.0
Game  78  terminated after 25 steps with reward 25.0
Game  79  terminated after 15 steps with reward 15.0
Game  80  terminated after 39 steps with reward 39.0
Game  81  terminated after 28 steps with reward 28.0
Game  82  terminated after 18 steps with reward 18.0
Game  83  terminated after 42 steps with reward 42.0
Game  84  terminated after 19 steps with reward 19.0
Game  85  terminated after 18 steps with reward 18.0
Game  86  terminated after 9 steps with reward 9.0
Game  87  terminated after 14 steps with reward 14.0
Game  88  terminated after 14 steps with reward 14.0
Game  89  terminated after 11 steps with reward 11.0
Game  90  terminated after 25 steps with reward 25.0
Game  91  terminated after 24 steps with reward 24.0
Game  92  terminated after 24 steps with reward 24.0
Game  93  terminated after 19 steps with reward 19.0
Game  94  terminated after 10 steps with reward 10.0
Game  95  terminated after 15 steps with reward 15.0
Game  96  terminated after 18 steps with reward 18.0
Game  97  terminated after 79 steps with reward 79.0
Game  98  terminated after 15 steps with reward 15.0
Game  99  terminated after 16 steps with reward 16.0
Average reward 22.32

Finally, here is a summary of the overall resulting data flow.

Actions are chosen either randomly or based on the policy, and the next step's sample is obtained from the gym environment. We record the results in the replay memory and also run an optimization step on every iteration. The optimization step picks a random batch from the replay memory to train the new policy. The "older" target_net is also used in the optimization to compute the expected Q values; it is updated occasionally to keep it current.
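
As a compact recap (a sketch of the loop above, not new functionality), the per-step data flow can be summarized as pseudocode:

In [ ]:
# Pseudocode recap of one environment step in the training loop above.
#   action       <- select_action(state, policy_net)        # epsilon-greedy choice
#   reward, done <- env.step(action)                         # sample from the gym environment
#   next_state   <- difference of the new and old screen patches (None if done)
#   memory.push(state, action, next_state, reward)           # store in the replay buffer
#   optimize_model()                                         # random batch from memory;
#                                                            # targets come from target_net
#   every TARGET_UPDATE episodes: target_net <- policy_net   # keep the target network current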