
Helping students solve hard problems with AI

Problem Statement
How can we design an AI for Brilliant that helps students when they get stuck, without taking away the challenge of learning?
Brilliant Case Study Overview

Note: This is a conceptual exploration created for portfolio purposes. The work is not affiliated with or endorsed by Brilliant.org.

Overview

Brilliant is a leader in interactive STEM learning, replacing passive lectures with hands-on intuition building. This concept exploration aligns with their brand refresh, focusing on moving from static help to an Agentic Learning Experience that acts as a Socratic tutor.

The goal was to solve a critical friction point: when a level gets too hard, momentum breaks. Static hints fail because they don't understand the student's specific logic gap.

Observed problem

  • Learners lose confidence when they get stuck without guidance.
  • Static hints are often too vague to help or too specific, killing the challenge.
  • Momentum breaks because the help isn't personalized to the student's logic.

Strategic Challenge

How can we design an AI for Brilliant that helps students when they get stuck, without taking away the "productive struggle" that defines active learning?

Research & Insight

Research revealed that learners often repeat mistakes without knowing why. However, giving away answers kills the sense of accomplishment. The insight was clear: we need to nudge instead of tell.

Sam the Motivated Learner

Sam represents the core Brilliant learner: curious and self-driven, they value the "aha!" moment of independent discovery but need a lifeline when a logic gap stalls their progress.

User Persona & Research Synthesis

Goals & Frustrations

  • Goal: Build confidence through successful problem-solving.
  • Need: Context-aware nudges that preserve the intellectual challenge.
  • Frustration: Generic hints that give away the answer or stop flow.
  • Requirement: Control over when and how assistance is offered.

The Strategy: Agentic Logic

In this concept, "agentic" refers to an AI that proactively monitors student progress and offers assistance at critical friction points, intervening only when a learner is likely to reach a wall.

What this looks like

  • Monitors real-time progress, such as attempt frequency and time spent per puzzle.
  • Suggests subtle micro-nudges when momentum begins to stall.
  • Surfaces relevant learning tools or mini-lessons tailored to specific errors.
  • Continuously adapts its assistance based on individual learner responses.
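The monitoring signals above could be combined into a simple stall-detection heuristic. A minimal sketch in Python; the `ProgressSignals` structure and the thresholds are illustrative assumptions for this concept, not Brilliant internals:

```python
from dataclasses import dataclass

@dataclass
class ProgressSignals:
    """Illustrative real-time signals the agent might observe."""
    seconds_on_puzzle: float      # time spent on the current puzzle
    incorrect_attempts: int       # consecutive wrong submissions
    median_solve_seconds: float   # this learner's typical solve time

def is_stalling(s: ProgressSignals) -> bool:
    """Flag a likely stall: repeated misses, or far over typical time.

    Both thresholds are placeholder assumptions for the sketch.
    """
    over_time = s.seconds_on_puzzle > 3 * s.median_solve_seconds
    repeated_misses = s.incorrect_attempts >= 3
    return over_time or repeated_misses

# Example: three wrong attempts trips the stall detector.
print(is_stalling(ProgressSignals(90.0, 3, 60.0)))  # True
```

In practice these thresholds would be tuned per learner, since "stuck" looks different for a beginner than for an expert.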

The AI process

The companion, Koji, follows a structured loop to support active learning:

Observe → Reason → Suggest → Assist → Adapt

The goal is to maintain momentum without doing the thinking for the student. Koji helps learners understand where they are, identify their logic gaps, and decide what to try next.

  • Observe: Watch for when a student gets stuck or hesitates.
  • Reason: Figure out the specific mistake or logic gap causing the issue.
  • Suggest: Give a small nudge to keep them thinking without giving it away.
  • Assist: Show a helpful tool or a quick review if they can't move past the wall.
  • Adapt: Learn from how the student responds to give better help next time.

This simple process ensures the AI always supports the student instead of just giving away the answer.
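The five-step loop above can be sketched as a small state machine. This is a hypothetical structure for the concept, not production code; the step names mirror the process described, and the escalation rule encodes "assist only if the nudge didn't work":

```python
from enum import Enum, auto

class Step(Enum):
    OBSERVE = auto()
    REASON = auto()
    SUGGEST = auto()
    ASSIST = auto()
    ADAPT = auto()

def next_step(current: Step, learner_still_stuck: bool) -> Step:
    """Advance the Observe -> Reason -> Suggest -> Assist -> Adapt loop.

    Escalate to ASSIST only if the nudge (SUGGEST) didn't unstick the
    learner; otherwise skip straight to ADAPT. ADAPT always returns to
    observing, so help remains a loop rather than a one-shot answer.
    """
    if current is Step.OBSERVE:
        return Step.REASON
    if current is Step.REASON:
        return Step.SUGGEST
    if current is Step.SUGGEST:
        return Step.ASSIST if learner_still_stuck else Step.ADAPT
    if current is Step.ASSIST:
        return Step.ADAPT
    return Step.OBSERVE  # ADAPT loops back to observing
```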

Final Concept

The Agentic Learning Companion is embodied by Koji, the official mascot for Brilliant.org. It acts as a context-aware Socratic tutor embedded directly into lessons. It is built on five core principles to ensure it supports learning without removing the challenge:

  • Preserve productive struggle
  • Intervene selectively
  • Be transparent about AI reasoning
  • Keep learners in control
  • Support learning, not completion
Proposed Agentic Companion Interface

Instead of a standalone chat, the agent uses real-time signals to offer adaptive guidance and targeted practice, helping learners regain momentum while preserving the active thinking that defines Brilliant.

Key User Flows

I focused on three high-impact moments where contextual AI support could most effectively reduce friction while keeping the learner in control.

Each flow explores:

  • Trigger moments
  • AI behavior
  • User control points

Flow 1: Getting Unstuck

Proactive guidance when momentum stalls.

  • Trigger: Detected hesitation or repeated incorrect attempts.
  • AI Behavior: Koji offers an optional nudge towards a key principle without revealing the answer.
  • Impact: Reduces frustration while keeping the learner in control of the assistance.
Getting Unstuck visualization

Flow 2: Chat Interaction with Prof Koji

Direct Socratic tutoring through conversation.

  • Trigger: Learner engages with Koji for deeper assistance.
  • AI Behavior: Koji provides structured Socratic guidance via a chat panel. Help escalates from conceptual hints to specific examples.
  • Impact: Mirrors a human tutor to help learners discover solutions independently.
Chat Interaction visualization

Flow 3: Post-Lesson Summary and Suggestions

Meta-cognition and long-term learning paths.

  • Trigger: Lesson or module completion.
  • AI Behavior: Koji summarizes progress and suggests personalized next steps.
  • Impact: Encourages reflection and supports long-term skill development.
Post-Lesson Summary visualization

Conversation Logic

Koji uses a coaching style to support thinking rather than delivering answers. Help escalates gradually—from small nudges to targeted hints—to preserve the "productive struggle" and keep learners in control of their progress.

  • Clarifying questions to surface logic gaps.
  • Reasoning-focused hints that avoid giving away solutions.
  • Suggestions framed as options, not instructions.
  • Transparent handling of AI uncertainty to encourage exploration.
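One way to model the gradual escalation is an ordered ladder of help levels that moves one rung at a time and never reaches "the answer." A hedged sketch; the level names and the `escalate` helper are invented for illustration:

```python
# Ordered help levels, least to most revealing. Names are illustrative.
HELP_LADDER = [
    "clarifying_question",   # e.g. "What does the constraint tell you here?"
    "reasoning_hint",        # points at the relevant principle
    "worked_example",        # a parallel example, not this puzzle's answer
]

def escalate(current_level: int, learner_declined: bool) -> int:
    """Move up one rung only when the learner accepts more help.

    Never jumps levels and never exceeds the top of the ladder, so the
    solution itself is never on offer and the learner stays in control.
    """
    if learner_declined:
        return current_level
    return min(current_level + 1, len(HELP_LADDER) - 1)
```

Keeping the ladder short and capped is the design choice that preserves productive struggle: the most the agent will ever show is an analogous example.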

Evaluation

This is a concept project. If it were implemented, here are simple ways we could test if it works.

  • Does it help? (Student success)
  • Does it work? (AI behavior)

Success Metrics

  • Momentum: Do students move past difficult puzzles faster?
  • Completion: Are more students finishing the hard levels?
  • Growth: Does the help lead to better scores in later lessons?
  • Control: Do students feel in charge of when they get help?
  • Clarity: Is the guidance easy to follow and encouraging?
  • Confidence: Does using the tool make students feel more capable?

AI quality

  • Does it offer help at the right time?
  • Does it avoid giving away answers?
  • Does it handle confusion well?

These would need real testing with students to know if the concept actually works.

Reflection & Next Steps

This project showed me that AI should focus on helping people stay in control. The real challenge wasn't making the AI "smart," but knowing when to step in and when to stay quiet so students can still do the thinking.

To move forward, we would need to test this with real students. We would want to see if the help arrives at the right time and if students find the suggestions useful. We would also need to see how their needs change as they keep using the tool over time.

In the future, I would look at making the AI clearer about why it is giving certain hints. I also want to see how this could help students over many weeks, not just one lesson. This project made me excited to design AI that feels truly supportive and respectful to the person using it.