Engineering a Context-Aware AI Learning System
Building a personalized AI tutoring system using structured lessons, user learning profiles, and prompt orchestration.
This project explores how large language models can power personalized learning experiences through a hybrid engine of structured pedagogy and intelligent context management.
Instead of relying on raw chatbot interactions, the platform was designed around a robust multi-layered architecture:
Structured Lessons
Curated educational content stored as deterministic records.
Interactive AI Tutor
Context-aware system answering complex learner queries.
Memory Engine
Adapting explanations to each user's unique learning style.
Targeting Core Challenges
The Mission
The goal was to build a system that feels fluidly conversational while maintaining strict engineering control over operational cost, content accuracy, and pedagogical outcomes.
The platform began with a purely generative approach, allowing users to spawn educational content on any topic in real-time.
The Prototype Logic
Users could enter any subject, and the system would instantly construct a custom lesson using raw LLM prompts. While technically impressive, this "unconstrained" generation faced major operational hurdles.
Critical Constraints Exposed
Exponential Cost
Generating full lessons for every user request led to uncontrolled token consumption and high operational overhead.
Context Saturation
Long, detailed prompts frequently exceeded LLM token limits, causing session breaks and data loss.
The Strategic Pivot
Dynamic generation also introduced significant variations in lesson quality. To scale effectively, we realized the system required a fundamental architectural redesign toward deterministic content.
Instead of generating lessons dynamically, the platform shifted to a structured, deterministic lesson model to ensure pedagogical consistency.
Deterministic Lesson Structure
Real-World Scenario
Starting with a grounding, relatable context.
Conceptual Explanation
Detailing the fundamental logic of the scenario.
Technical Mapping
Bridging the gap to system design principles.
These lessons anchor the AI's reasoning in varied industry scenarios.
This approach ensures consistent lesson quality while freeing the AI assistant to focus on clarifying complexity and expanding explanations.
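A deterministic lesson can be represented as a plain typed record. The sketch below is illustrative only — the field names are hypothetical, not the platform's actual schema — but it shows how the three-part structure (scenario → explanation → mapping) becomes queryable, versionable data rather than generated text:

```typescript
// Illustrative shape of a curated lesson record (field names are hypothetical).
interface Lesson {
  id: string;
  topic: string;
  realWorldScenario: string;     // grounding, relatable context
  conceptualExplanation: string; // fundamental logic of the scenario
  technicalMapping: string;      // bridge to system design principles
  summary: string;               // compact form injected into prompts
}

const exampleLesson: Lesson = {
  id: "lesson-queues-101",
  topic: "Message Queues",
  realWorldScenario:
    "A coffee shop taking orders on paper tickets so baristas never lose track.",
  conceptualExplanation:
    "Producers enqueue work; consumers process it at their own pace.",
  technicalMapping:
    "Maps to message brokers decoupling services in a distributed system.",
  summary:
    "Queues decouple producers from consumers, like order tickets in a coffee shop.",
};
```

Storing a pre-written `summary` alongside the full content is what later keeps prompt assembly cheap: the tutor injects the summary, not the whole lesson.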
To make responses truly personalized, the platform builds a dynamic learning profile for each individual user.
During conversations, the system continuously analyzes interactions to extract high-fidelity learning signals that inform the personalization engine.
Areas of Confusion
Identifying specific concepts where the user struggles.
Mastered Concepts
Tracking areas where the user has shown proficiency.
Preferred Analogies
Saving metaphors that resonated with the learner.
Learning Patterns
Analyzing how the user best absorbs new information.
Adaptive Pedagogy
These insights are stored and injected into future prompts, allowing the AI to gradually adapt its explanations to each learner's unique pace instead of repeating generic responses.
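One way to model this, sketched below under assumed field names, is an immutable profile record plus a merge helper that promotes a concept out of "areas of confusion" once mastery is observed. The shape and the `recordMastery` helper are illustrative, not the system's actual API:

```typescript
// Hypothetical learning-profile shape extracted from conversation signals.
interface LearningProfile {
  areasOfConfusion: string[];   // concepts the user struggles with
  masteredConcepts: string[];   // concepts the user has demonstrated
  preferredAnalogies: string[]; // metaphors that resonated
}

// Record observed mastery: de-duplicate, and remove the concept from the
// confusion list so future prompts stop re-explaining it from scratch.
function recordMastery(profile: LearningProfile, concept: string): LearningProfile {
  return {
    ...profile,
    areasOfConfusion: profile.areasOfConfusion.filter((c) => c !== concept),
    masteredConcepts: profile.masteredConcepts.includes(concept)
      ? profile.masteredConcepts
      : [...profile.masteredConcepts, concept],
  };
}
```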
To prevent context drift and manage strict LLM token limits, we implemented a recursive summarization pipeline.
Maintaining high-fidelity interactions in long-form educational dialogue requires aggressive but intelligent context compression.
Sliding Window Summary
After every 10 messages, the system distills the dialogue into a high-density narrative summary.
Session Archival
Upon session completion, a final archival summary is generated to anchor future encounters.
Compact Contextual Logic
These summaries preserve critical pedagogical signals—such as concept mastery and areas of confusion—while keeping total prompt length compact and predictable.
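The sliding-window trigger can be sketched as a small state machine: messages accumulate verbatim until the window fills, then the batch is distilled into the running summary. In this sketch the `summarize` callback stands in for the actual LLM summarization call, and the window size matches the 10-message cadence described above:

```typescript
// Sliding-window context compression (sketch). `summarize` is a stub for
// the real LLM summarization call.
const WINDOW_SIZE = 10;

interface SessionContext {
  summary: string;          // accumulated high-density narrative summary
  recentMessages: string[]; // verbatim messages still inside the window
}

function addMessage(
  ctx: SessionContext,
  message: string,
  summarize: (summary: string, batch: string[]) => string,
): SessionContext {
  const recent = [...ctx.recentMessages, message];
  if (recent.length < WINDOW_SIZE) {
    return { ...ctx, recentMessages: recent };
  }
  // Window full: fold the batch into the running summary and reset the window.
  return { summary: summarize(ctx.summary, recent), recentMessages: [] };
}
```

Because the window is bounded and the summary grows only by one compression step per window, total prompt length stays predictable regardless of session length.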
When a user asks a question, the system constructs a structured prompt containing multiple context layers.
Prompt components include:
1. System instructions
2. Lesson summary
3. User learning profile
4. Session conversation summary
5. Recent conversation history
6. User question
This approach ensures the AI assistant has enough context to produce relevant responses while keeping token usage predictable.
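Assembling those six layers is essentially string concatenation in a fixed order. The section labels below are illustrative, not the platform's actual prompt template, but the structure mirrors the list above:

```typescript
// Layered prompt assembly (sketch; section labels are illustrative).
interface PromptParts {
  systemInstructions: string;
  lessonSummary: string;
  userProfile: string;
  sessionSummary: string;
  recentHistory: string;
  userQuestion: string;
}

function buildPrompt(p: PromptParts): string {
  return [
    `## System\n${p.systemInstructions}`,
    `## Lesson\n${p.lessonSummary}`,
    `## Learner Profile\n${p.userProfile}`,
    `## Session So Far\n${p.sessionSummary}`,
    `## Recent Messages\n${p.recentHistory}`,
    `## Question\n${p.userQuestion}`,
  ].join("\n\n");
}
```

Keeping each layer a pre-summarized string is what makes the final token count easy to bound: the only unbounded input is the user's question itself.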
To ensure accuracy and maintain pedagogical focus, the AI assistant is strictly constrained to the curated lesson content.
Strict Scope Enforcement
"Out-of-bounds" queries—questions unrelated to the current lesson topic—trigger a controlled fallback response rather than speculative or drifting AI generation.
Triple-Layer Protection
Pedagogical Anchor
By anchoring the AI to deterministic data, we transform it from a generic chatbot into a specialized tutor that remains consistently helpful and contextually accurate.
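As a deliberately naive illustration of the fallback gate, the sketch below uses keyword overlap against the lesson's topic terms; a production system would more likely use embedding similarity or a lightweight classifier, and the `FALLBACK` message is invented for this example:

```typescript
// Naive scope gate for illustration only; a real system would likely use
// embedding similarity or a classifier rather than keyword overlap.
function isInScope(question: string, lessonKeywords: string[]): boolean {
  const q = question.toLowerCase();
  return lessonKeywords.some((k) => q.includes(k.toLowerCase()));
}

const FALLBACK =
  "That question is outside the current lesson. Let's stay focused on the topic at hand.";

function answerOrFallback(question: string, lessonKeywords: string[]): string {
  return isInScope(question, lessonKeywords)
    ? "IN_SCOPE" // placeholder for the real LLM call
    : FALLBACK;
}
```

The key design point is that the fallback is deterministic: out-of-bounds queries never reach the LLM at all, which saves tokens and removes the opportunity for speculative drift.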
To ensure continuous improvement, the system captures real-time qualitative signals from every interaction.
Each AI response is equipped with a lightweight feedback mechanism to close the loop between user performance and model refinement.
Accurate & Helpful
Confirming successful explanations and strong pedagogical alignment.
Needs Refinement
Flagging confusing analogies or drifting context for review.
Data-Driven Iteration
These granular signals are aggregated into our monitoring dashboard, allowing engineers to identify weak explanation patterns and proactively refine prompt strategies and lesson content.
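A feedback signal can be stored as a small typed record and aggregated per lesson for the dashboard. The record shape and `refinementRate` helper below are hypothetical, shown only to make the aggregation concrete:

```typescript
// Hypothetical feedback record and a per-lesson aggregation for the dashboard.
type FeedbackLabel = "accurate_helpful" | "needs_refinement";

interface FeedbackSignal {
  lessonId: string;
  responseId: string;
  label: FeedbackLabel;
}

// Share of a lesson's responses flagged as needing refinement.
function refinementRate(signals: FeedbackSignal[], lessonId: string): number {
  const forLesson = signals.filter((s) => s.lessonId === lessonId);
  if (forLesson.length === 0) return 0;
  const flagged = forLesson.filter((s) => s.label === "needs_refinement").length;
  return flagged / forLesson.length;
}
```

A lesson whose refinement rate climbs above some threshold becomes an explicit candidate for prompt or content revision, turning vague "this feels confusing" sentiment into a trackable metric.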
┌───────────────────────┐
│ User Browser │
│ (Next.js UI) │
└─────────────┬─────────┘
│
│ API Requests
▼
┌────────────────────────┐
│ NestJS API Layer │
│ Authentication │
│ Lesson APIs │
│ Chat APIs │
└─────────────┬──────────┘
│
│
▼
┌─────────────────────────┐
│ Prompt Orchestration │
│ Service │
│ │
│ • Prompt Builder │
│ • Context Assembly │
│ • Token Management │
└─────────────┬───────────┘
│
│
▼
┌─────────────────┐
│ OpenAI API │
│ LLM Responses │
└────────┬────────┘
│
▼
┌─────────────────────────┐
│ PostgreSQL DB │
│ │
│ Lessons │
│ User Profiles │
│ Chat Sessions │
│ Conversation Summaries │
│ Feedback Signals │
└─────────────────────────┘

User opens lesson
│
▼
Reads structured lesson
(real-world scenario → tech concept)
│
▼
User asks a question
│
▼
System gathers context
• lesson summary
• user learning profile
• session conversation history
• previous summaries
│
▼
Prompt is constructed
│
▼
LLM generates explanation
│
▼
Response streamed to user
│
▼
User provides feedback
(Like / Dislike)
│
▼
System updates
• user learning profile
• chat summaries
• feedback metrics

Prompt Construction
System Instructions
+
Lesson Summary
+
User Learning Profile
+
Conversation Summary
+
Recent Messages
+
User Question
│
▼
LLM
│
▼
Contextual AI Response

Building a context-aware AI tutor taught us valuable lessons about LLM orchestration and state management.
Context compression is essential
Long conversations quickly exceed token limits. Summarization pipelines allow systems to maintain context without uncontrolled growth.
Structured knowledge over generation
Using curated lessons instead of fully generated content ensures consistent explanations and prevents hallucinations.
Prompt orchestration enables true personalization
Combining user profiles, lesson summaries, and conversation context allows the AI assistant to adapt its pedagogical approach to each individual learner's pace and style.
This project demonstrates that the future of educational AI lies in the tight coupling of structured pedagogy with flexible LLM reasoning.