
An AI-powered interview assistant that simulates end-to-end technical interviews for software engineering candidates in an educational setting. Developed as part of the Turing College AI Engineering Capstone Project, it demonstrates how autonomous AI agents can assess skills, provide personalized feedback, and generate tailored learning plans—without human interviewers.
It addresses common shortcomings of human-led technical interviews:

- Inconsistent interview evaluations
- Limited scalability of human interviewers
- Subjective bias in assessments
- Poor feedback quality
- Loss of context across interview stages
The system uses an autonomous agent architecture to conduct multi-phase interviews, adapt questions in real time, preserve long-term context, and produce actionable learning insights.
The interview runs in four phases:

1. **Assessment:** Baseline evaluation across six technical domains
2. **Interview:** Deep dive into the weakest skill areas
3. **Feedback:** Structured, actionable performance analysis
4. **Planning:** Personalized learning roadmap and resources
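The four phases above can be sketched as a simple pipeline in which each phase reads and extends a shared state, so later phases see everything earlier phases produced. This is a minimal plain-Python illustration, not the project's actual LangGraph implementation; the handler functions, domain names, and placeholder scores are all hypothetical.

```python
# Hypothetical phase handlers; a real build would back each with an LLM agent.
def run_assessment(state):
    # Baseline scores across six domains (placeholder values for illustration).
    state["scores"] = {"algorithms": 3, "system_design": 2, "databases": 4,
                       "testing": 3, "networking": 2, "security": 3}
    # The weakest domain drives the deep-dive interview phase.
    state["weakest"] = min(state["scores"], key=state["scores"].get)
    return state

def run_interview(state):
    state["transcript"] = f"deep-dive questions on {state['weakest']}"
    return state

def run_feedback(state):
    state["feedback"] = f"Needs focused practice in {state['weakest']}."
    return state

def run_planning(state):
    state["plan"] = [f"Study {state['weakest']} fundamentals",
                     "Retake a mock interview in two weeks"]
    return state

PIPELINE = [run_assessment, run_interview, run_feedback, run_planning]

def conduct_interview():
    state = {}
    for phase in PIPELINE:  # each phase reads and extends the shared state
        state = phase(state)
    return state
```

Passing one accumulating state object through the phases is what preserves context end to end; in the real system a graph framework plays this role and can also branch or loop between phases.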
Key features:

- Autonomous orchestration with LangChain and LangGraph
- Dual-layer memory: Redis with TTL (short-term) plus Pinecone (long-term)
- Semantic search for context-aware continuity
- Real-time scoring and adaptive questioning
- Multimodal support for code- and image-based discussions
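The dual-layer memory can be sketched with two small classes: a TTL cache standing in for Redis, and a vector store with cosine-similarity search standing in for Pinecone. Both classes, and the toy character-based `embed()` function, are in-memory stand-ins written for this illustration only; a real deployment would use the Redis and Pinecone clients and a proper embedding model.

```python
import math
import time

class ShortTermMemory:
    """Redis-style TTL cache (in-memory stand-in for illustration)."""
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if time.time() > expires:  # expired entries are evicted lazily
            del self._store[key]
            return None
        return value

class LongTermMemory:
    """Pinecone-style vector store stand-in with cosine-similarity search."""
    def __init__(self):
        self._items = []  # list of (vector, text)

    @staticmethod
    def embed(text):
        # Toy deterministic embedding; a real system would call a model.
        vec = [0.0] * 16
        for i, ch in enumerate(text.lower()):
            vec[i % 16] += ord(ch)
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        return [x / norm for x in vec]

    def add(self, text):
        self._items.append((self.embed(text), text))

    def search(self, query, k=1):
        # Vectors are normalized, so the dot product is cosine similarity.
        q = self.embed(query)
        ranked = sorted(self._items,
                        key=lambda it: -sum(a * b for a, b in zip(it[0], q)))
        return [text for _, text in ranked[:k]]
```

The split mirrors the feature list: the short-term layer holds the live session and expires automatically, while the long-term layer persists past observations and supports the semantic search that gives later interview stages context-aware continuity.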
This project is for training and demonstration only. It is not production-ready and does not meet real-world requirements for hiring, bias mitigation, legal compliance, or security. It serves as an exploration of ethical AI development, memory systems, and scalable AI agent design in a simulated interview environment.