MLOps / LLMOps Engineer · Chicago, IL · Open to OPT roles from June 2026
I research and build production-grade LLMOps systems — pipelines, monitoring layers, and observability tools that make agentic AI reliable in the real world.
My current Master's research at DePaul University investigates hallucination detection and mitigation in ReAct-style multi-agent pipelines without model retraining. I built a 3-agent system (Planner → Critic → Fixer) using LangGraph, Ollama, and MLflow, and ran a full 8-condition ablation study on 50 HumanEval problems to characterize exactly when lightweight runtime monitoring works — and when it doesn't.
→ agentic-llmops — the full implementation + research report + all experiment results
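The Planner → Critic → Fixer loop described above can be sketched in plain Python. This is an illustrative stub, not code from the repo: the real system wires these nodes together with LangGraph and calls local models via Ollama, both of which are replaced here with placeholder logic, and all names (`PipelineState`, `run_pipeline`, etc.) are hypothetical.

```python
# Hypothetical sketch of a Planner -> Critic -> Fixer control flow.
# LLM calls (Ollama) and graph wiring (LangGraph) are stubbed out.

from dataclasses import dataclass


@dataclass
class PipelineState:
    task: str
    draft: str = ""
    critique: str = ""
    attempts: int = 0
    done: bool = False


def planner(state: PipelineState) -> PipelineState:
    # Stub: an LLM would draft a candidate solution here.
    state.draft = f"solution for: {state.task}"
    return state


def critic(state: PipelineState) -> PipelineState:
    # Stub: an LLM would flag hallucinated or incorrect content here.
    state.critique = "" if "solution" in state.draft else "missing solution"
    state.done = state.critique == ""
    return state


def fixer(state: PipelineState) -> PipelineState:
    # Stub: an LLM would revise the draft using the critique here.
    state.draft += f" [revised per: {state.critique}]"
    state.attempts += 1
    return state


def run_pipeline(task: str, max_attempts: int = 3) -> PipelineState:
    # Plan once, then alternate Critic/Fixer until the Critic
    # approves or the retry budget is exhausted.
    state = planner(PipelineState(task=task))
    while True:
        state = critic(state)
        if state.done or state.attempts >= max_attempts:
            return state
        state = fixer(state)


result = run_pipeline("two-sum in Python")
print(result.done, result.attempts)  # → True 0
```

The retry cap mirrors a common guardrail in runtime monitoring: without it, a Critic that never approves would loop the pipeline indefinitely.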
- 🎓 MS Computer Science @ DePaul University, Chicago — graduating June 2026 (GPA: 3.80)
- 💼 3+ years as a Software Engineer & Data Scientist @ sensen.ai (Hyderabad), and as an Engineer @ AECOM / Apple Siri
- ☁️ AWS Certified Cloud Practitioner
- 🔬 Research supervised by Prof. Vahid Alizadeh · targeting paper submission June 2026
Core: Python · Java · SQL
AI/ML: PyTorch · TensorFlow · Scikit-learn · Hugging Face · MLflow · LangGraph · LangChain · Ollama
MLOps/Infra: Docker · Kubernetes · AWS · MLflow · Streamlit
Databases: PostgreSQL · Oracle SQL · MySQL
| Project | What it is | Stack |
|---|---|---|
| agentic-llmops | Master's research: runtime hallucination monitoring in multi-agent LLM pipelines. Full ablation study, 8 conditions, 50 HumanEval problems. | LangGraph · Ollama · MLflow · Python |
- 🔬 Phase 2 of LLMOps research — scaling to 100 HumanEval problems, testing Llama 8B as Critic
- 📚 Working through Neetcode 150 (DSA prep)
- 🎯 Actively interviewing for MLOps / AI Engineer roles — available June 2026 on OPT
Thanks for stopping by — feel free to explore the repos or connect on LinkedIn.