
Reinforcement Learning: Introduction

Reading time: ~10 min

Have you ever tried to teach a dog a new trick? You don't hand them a manual or give them a map. Instead, you wait for them to do something close to the trick, and then you give them a treat. Over time, they figure it out.

This is exactly how Reinforcement Learning (RL) works. It's an area of machine learning where we don't tell the computer exactly what to do. Instead, we let it interact with a world, try things out, and give it "treats" when it does something good.

In traditional supervised learning, we give the computer all the answers upfront. In reinforcement learning, the computer has to discover the answers for itself through trial and error.

Problem Structure

Let's break down how this learning process actually works. Imagine playing a video game without looking at the instructions.

The Intuition

You are the Agent. You look around your environment (the State), and you decide to push a button on your controller (the Action). If your action causes you to find treasure, the game gives you a high score (a Reward). Your overall mental strategy for deciding what buttons to press in the future is called your Policy.

The Mechanics

In formal RL terminology:

  • Agent: The learner or decision maker.
  • Environment: Everything the agent interacts with.
  • State: The current condition or perspective returned by the environment.
  • Action: What the agent chooses to do.
  • Reward: The feedback signal.
  • Policy: The mapping from states to actions (the underlying mathematical strategy).
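These six terms map directly onto code. Here is a minimal sketch in Python; the "line world" environment and its rules are invented for illustration and are not part of any RL library:

```python
# A toy "line world": the agent walks left or right on positions 0..4
# and earns a reward of +1 for reaching the right end.
class LineWorld:                        # the Environment
    def __init__(self):
        self.state = 0                  # the State: current position

    def step(self, action):             # the Action: -1 (left) or +1 (right)
        self.state = max(0, min(4, self.state + action))
        reward = 1 if self.state == 4 else 0  # the Reward signal
        return self.state, reward


def policy(state):                      # the Policy: a mapping from states to actions
    return +1                           # "always walk right"


# The Agent is whatever runs the policy and calls env.step(...)
env = LineWorld()
state, reward = env.step(policy(env.state))
```

Running the policy for a few steps walks the agent to position 4, at which point the reward of +1 finally arrives.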

Let's categorize these core game elements into their correct RL terms:

  • The player holding the controller → Agent
  • Pushing the jump button → Action
  • Hearing a ping and getting 100 points → Reward
  • The layout of the level right now → State

Wait, what makes a really smart agent? A smart agent doesn't just want a single point right now; it wants to figure out the best overall outcome over time. This long-term thinking requires learning a Value Function.

Based on this, what does a Value Function actually do? It is not an exact instruction manual telling the agent which button to press next, and it is not the immediate point score for taking a single action. A Value Function estimates how desirable a situation is for earning future rewards over time.

The Core Goal: The agent's ultimate objective is to discover a Policy that maximizes its cumulative Reward over the course of an entire episode or lifetime.
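"Cumulative reward" is usually made precise with a discount factor (commonly written gamma): rewards that arrive further in the future count for less. A minimal sketch, where the value of `gamma` and the example rewards are illustrative choices:

```python
def discounted_return(rewards, gamma=0.9):
    """Cumulative reward of one episode, with each future reward
    discounted by gamma per time step."""
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total


# A three-step episode whose only reward (+1) arrives at the end:
# from the starting state it is worth gamma**2, i.e. about 0.81.
print(discounted_return([0, 0, 1]))
```

With `gamma` close to 1 the agent is patient and values distant rewards almost as much as immediate ones; with `gamma` close to 0 it becomes short-sighted.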

Let's map out the chronological cycle of how an agent interacts and learns over time. The events of a single time step, in order:

  1. The Agent observes the current State.
  2. The Agent chooses to take an Action based on its current Policy.
  3. The Environment returns a Reward based on what just happened.
  4. The Environment transitions to a new State.
  5. The Agent updates its Policy and Value Function to get smarter.
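This per-time-step cycle can be sketched as a loop. Everything here is a toy stand-in chosen for illustration: the coin-guessing environment, the greedy policy, and the learning rate of 0.1 are all invented, not taken from a real RL library:

```python
import random

class CoinFlipEnv:
    """Reward is 1 when the agent guesses the hidden coin correctly."""
    def __init__(self):
        self.state = random.choice([0, 1])

    def step(self, action):
        reward = 1 if action == self.state else 0
        self.state = random.choice([0, 1])     # transition to a new State
        return self.state, reward


class Agent:
    def __init__(self):
        self.value = {0: 0.0, 1: 0.0}          # crude Value estimate per action

    def act(self, state):
        # greedy Policy (this toy policy ignores the state)
        return max(self.value, key=self.value.get)

    def update(self, action, reward):
        # nudge the value estimate toward the observed reward
        self.value[action] += 0.1 * (reward - self.value[action])


env, agent = CoinFlipEnv(), Agent()
state = env.state                       # 1. observe the current State
for _ in range(100):
    action = agent.act(state)           # 2. choose an Action via the Policy
    state, reward = env.step(action)    # 3-4. Reward and new State come back
    agent.update(action, reward)        # 5. update the Value estimates
```

Even in this tiny example, the loop structure (observe, act, receive feedback, update) is the same one that full-scale RL algorithms follow.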


Next up:
Simulation