Case Study · Applied AI Persona

Richie

A personal AI persona built on retrieval, agentic codebase analysis, and structured memory so recruiters, collaborators, and curious users can explore my work through conversation instead of static documents.

RAG system · Agent workflows · Vector memory · Portfolio-aware chat

Core AI flows: codebase memory generation and live retrieval-based answering.

Agentic

Uses specialized agents for summarization, collapsing, classification, and routing.

Vector

Stores structured project memory for semantic lookup across repos and personal data.

Persona

Answers questions about projects, experience, and technical background in context.

Tech Stack

Core systems behind Richie

Python
RAG
Pinecone
LangChain
Prompt Routing
Streamlit

Problem

Why build an AI persona at all?

Resumes and portfolios are static. They compress experience into short bullet points, but they do not let someone ask follow-up questions, compare projects, or understand how pieces of work relate to each other.

Richie turns that static material into an interactive layer. Instead of manually reading repositories, resume lines, and portfolio pages, a user can ask direct questions and get answers grounded in structured memory.

  • Acts as a conversational interface over projects and experience.
  • Improves discoverability of technical work beyond a traditional resume.
  • Combines codebase understanding with broader personal/professional context.

Memory System

Agent workflow for codebase analysis and vector storage

Richie’s memory layer starts with deep codebase analysis across repositories and related artifacts. The goal is not just to index files, but to build a usable representation of project intent, structure, responsibilities, and technical decisions.

The pipeline is intentionally modular. Individual files are summarized first, then larger contexts are collapsed when token limits become an issue, and finally project-level summaries are produced for storage. This keeps the system cost-aware and makes large repositories tractable.

  • File Analyzer Agent extracts file-level logic and responsibilities.
  • Collapser Agent condenses overlong intermediate outputs without dropping key context.
  • Project Analyzer Agent produces higher-level project understanding.
  • Orchestrator Node coordinates sequencing and resource usage.
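The staged pipeline above can be sketched in plain Python. This is a minimal, illustrative stand-in: the agent functions here are hypothetical placeholders for the real LLM-backed agents, and the token budget is an assumed constant, not a value from the actual system.

```python
# Sketch of the staged summarization pipeline. The agent names mirror the
# case study; the bodies are toy stand-ins for real LLM calls.

TOKEN_BUDGET = 800  # hypothetical per-call context budget


def file_analyzer(path: str, source: str) -> str:
    """File Analyzer Agent: stand-in for an LLM file-level summary."""
    return f"{path}: {source[:60]}..."  # placeholder summary


def collapser(summaries: list[str]) -> list[str]:
    """Collapser Agent: merge summaries while the combined size exceeds the budget."""
    while sum(len(s) for s in summaries) > TOKEN_BUDGET and len(summaries) > 1:
        # A real agent would condense with an LLM; here we just concatenate.
        merged = summaries[0] + " | " + summaries[1]
        summaries = [merged] + summaries[2:]
    return summaries


def project_analyzer(summaries: list[str]) -> str:
    """Project Analyzer Agent: roll file summaries up into one project summary."""
    return "PROJECT: " + " ; ".join(summaries)


def orchestrate(repo: dict[str, str]) -> str:
    """Orchestrator Node: sequence the agents over a repository's files."""
    file_summaries = [file_analyzer(path, src) for path, src in repo.items()]
    return project_analyzer(collapser(file_summaries))
```

The orchestrator keeps each stage independent, which is what makes the pipeline cost-aware: the collapser only runs when intermediate output actually exceeds the budget.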

The resulting summaries are embedded into a vector database alongside resume text, portfolio material, and other professional context. That becomes Richie’s long-term memory.
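The shape of that long-term memory can be shown with a tiny in-memory stand-in. The real system uses Pinecone with learned embeddings; the bag-of-words vectors and cosine ranking below are toy substitutes that only illustrate how project summaries, resume text, and portfolio material share one semantic index.

```python
# Toy vector memory: upsert text, query by cosine similarity.
# Bag-of-words embeddings stand in for real learned embeddings.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words token count."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class Memory:
    """Minimal semantic store: one index for summaries, resume, portfolio."""

    def __init__(self) -> None:
        self.items: list[tuple[str, Counter]] = []

    def upsert(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def query(self, question: str, k: int = 2) -> list[str]:
        q = embed(question)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

Because everything lives in one index, a single query can surface a repo summary and a resume line side by side, which is what lets answers span code and career context.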

Answering Flow

RAG system for smart query answering

Once the memory layer is built, Richie uses retrieval-augmented generation to answer questions in real time. The system is not a single prompt over a vector store; it uses routing logic so different kinds of questions take different paths.

  • Query Classification Agent decides whether the question is document-based, general, or out-of-scope.
  • Prompt Rewriting Agent clarifies vague requests before retrieval.
  • Context Chatbot Agent pulls relevant memory chunks and assembles context.
  • General Chatbot / Out-of-Scope Agents handle casual or irrelevant queries without wasting retrieval resources.
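The routing logic above can be sketched as a classify-rewrite-dispatch function. The keyword classifier here is a deliberately crude stand-in for the LLM-based Query Classification Agent, and the path strings are illustrative, not the system's actual outputs.

```python
# Sketch of the answering flow's routing layer: classify the query,
# rewrite it if needed, then dispatch to the matching path.

def classify(question: str) -> str:
    """Toy stand-in for the Query Classification Agent."""
    q = question.lower()
    if any(w in q for w in ("project", "repo", "resume", "experience")):
        return "document"
    if any(w in q for w in ("weather", "stock")):
        return "out_of_scope"
    return "general"


def rewrite(question: str) -> str:
    """Prompt Rewriting Agent stand-in: make vague requests explicit."""
    return question if question.endswith("?") else question + "?"


def route(question: str) -> str:
    """Dispatch each query kind down its own path."""
    kind = classify(question)
    if kind == "document":
        # Only this path pays for retrieval over vector memory.
        return f"RAG path: retrieve memory for {rewrite(question)!r}"
    if kind == "general":
        return f"General chat path: {rewrite(question)!r}"
    return "Out-of-scope path: polite refusal, no retrieval spend"
```

The key design point survives even in this sketch: retrieval cost is only incurred on the document path, so casual or irrelevant queries never touch the vector store.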

This structure improves relevance, avoids unnecessary token spend, and keeps the experience more stable than a single undifferentiated prompt chain.

Modular LLM agent flow for project comprehension

Memory-building workflow

A staged summarization pipeline lets Richie build usable memory from raw repositories without exceeding model limits.

RAG flow diagram

Routing and retrieval

Richie classifies, rewrites, retrieves, and answers so the final output is grounded in relevant context rather than generic model completion.

Product View

What makes Richie useful

Portfolio-aware conversation

Richie can answer project and experience questions in a way that feels closer to an interactive briefing than a static personal site.

Memory with structure

The system does not depend on one giant context dump. It builds reusable memory from modular summaries and retrieval.

Cost-conscious design

The pipeline uses staged analysis and routing so large repositories and casual queries do not burn unnecessary model budget.

Explainable project recall

Because memory is built from analyzed repos, Richie can explain goals, architecture, and implementation patterns instead of only repeating keywords.

Useful for hiring and collaboration

It lowers the cost of understanding technical work, which is the main reason a personal AI persona is worth building.

Foundation for broader agent tooling

Richie also acts as a practical playground for agent design, retrieval systems, and memory architecture patterns.

Project links

Explore the live app or inspect the implementation

Richie is both a usable AI interface and a case study in personal memory systems, retrieval, and agent orchestration.