Standard resumes are great for structure, but sometimes you just want a specific answer without digging through PDFs.
I built this RAG Agent not to replace the CV, but to complement it. It lets visitors query my experience in natural language, with every answer grounded in my verified resume data.
Built with Modern Infrastructure
Data integrity was the priority. Here is a breakdown of how a user query travels through the security layer, hits the vector database, and returns a verified response.
My resume data (Sanity CMS) is converted into vector embeddings and indexed in Pinecone.
A lightweight model analyzes the user input for toxicity, jailbreaks, and off-topic queries.
The system performs a semantic search in Pinecone to find the most relevant context chunks.
Gemini Pro generates the final answer from the strict context provided, streamed to the client via the Vercel AI SDK.
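The retrieval step above can be sketched in miniature. In production this is a Pinecone query, but the ranking idea is the same: embed the question, then rank stored chunks by cosine similarity against it. The chunk texts and the toy 3-dimensional vectors below are illustrative placeholders, not real embeddings.

```typescript
type Chunk = { text: string; vector: number[] };

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the topK chunks most similar to the query vector.
function retrieve(queryVector: number[], chunks: Chunk[], topK: number): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(queryVector, y.vector) -
        cosineSimilarity(queryVector, x.vector)
    )
    .slice(0, topK);
}

// Toy corpus standing in for the embedded resume chunks.
const chunks: Chunk[] = [
  { text: "Built a RAG chatbot with Pinecone and Gemini.", vector: [0.9, 0.1, 0.0] },
  { text: "Five years of React and TypeScript experience.", vector: [0.1, 0.9, 0.1] },
  { text: "Volunteer work unrelated to engineering.", vector: [0.0, 0.1, 0.9] },
];

// A query vector close to the first chunk retrieves it first.
const top = retrieve([0.8, 0.2, 0.1], chunks, 2);
console.log(top.map((c) => c.text));
```

The retrieved texts are then passed to the model as context; only that last step touches the LLM.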
I've always been curious about LLMs, but I didn't just want to wrap the OpenAI API. I wanted to understand the full lifecycle of an AI product.
This project was my sandbox to learn about vector embeddings, prompt engineering, and the importance of "grounding" AI responses to prevent hallucinations. It also serves a practical purpose: making my portfolio more interactive and accessible.
Preventing the model from inventing jobs or skills I don't have. Solved with strict RAG context injection.
Preventing users from overriding system instructions. Solved with a dedicated analysis layer.
Chat widgets often break under mobile keyboards. Solved with a responsive full-screen adapter with safe-area handling.
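The "strict RAG context injection" mentioned above can be sketched as a prompt-assembly step: the retrieved chunks become the only facts the model may use, and the system prompt says so explicitly. The wording below is illustrative, not the production prompt.

```typescript
// Build a grounded prompt that confines the model to the retrieved context.
// Hallucination guard: the instructions forbid answering outside the context.
function buildGroundedPrompt(contextChunks: string[], question: string): string {
  const context = contextChunks.map((c, i) => `[${i + 1}] ${c}`).join("\n");
  return [
    "You are a portfolio assistant. Answer ONLY from the context below.",
    "If the context does not contain the answer, say you don't know.",
    "Never invent jobs, employers, dates, or skills.",
    "",
    "Context:",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}

const prompt = buildGroundedPrompt(
  ["Built a RAG chatbot with Pinecone and Gemini."],
  "What vector database does the chatbot use?"
);
console.log(prompt);
```

Because user input lands only in the `Question:` slot after the analysis layer has screened it, an instruction like "ignore your rules" arrives as data to answer about, not as a directive that outranks the system prompt.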
Test the RAG pipeline yourself. Ask specific questions about my tech stack or projects to see how it retrieves information.