
Hi, I'm Kalyan Venkatesh 👋

MLOps / LLMOps Engineer · Chicago, IL · Open to OPT roles from June 2026

LinkedIn · Email


What I'm Building

I research and build production-grade LLMOps systems — pipelines, monitoring layers, and observability tools that make agentic AI reliable in the real world.

My current Master's research at DePaul University investigates hallucination detection and mitigation in ReAct-style multi-agent pipelines without model retraining. I built a 3-agent system (Planner → Critic → Fixer) using LangGraph, Ollama, and MLflow, and ran a full 8-condition ablation study on 50 HumanEval problems to characterize exactly when lightweight runtime monitoring works — and when it doesn't.

agentic-llmops — the full implementation + research report + all experiment results
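The actual pipeline is built with LangGraph, Ollama, and MLflow; as a rough illustration of the Planner → Critic → Fixer control flow it describes, here is a framework-free sketch in plain Python. All function names and prompts here are hypothetical stand-ins, not the project's real code:

```python
# Sketch of a Planner -> Critic -> Fixer loop (hypothetical, framework-free).
# `llm` stands in for any callable that maps a prompt string to a completion,
# e.g. a local Ollama model in the real pipeline.

def run_pipeline(task: str, llm, max_rounds: int = 3) -> str:
    """Draft a solution, then alternate critique and repair until the
    critic signals PASS or the round budget is exhausted."""
    draft = llm(f"Plan and write code for: {task}")            # Planner
    for _ in range(max_rounds):
        verdict = llm(f"Critique this solution:\n{draft}")     # Critic
        if "PASS" in verdict:                                  # runtime monitor gate
            break
        draft = llm(f"Revise per critique:\n{verdict}\n{draft}")  # Fixer
    return draft
```

The point of this shape, as the research frames it, is that the Critic acts as a lightweight runtime monitor: no model is retrained, and hallucinations are caught (or not) purely by an extra inference pass at each round.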


Background

  • 🎓 MS Computer Science @ DePaul University, Chicago — graduating June 2026 (GPA: 3.80)
  • 💼 3+ years as Software Engineer & Data Scientist @ sensen.ai (Hyderabad) and Engineer @ AECOM / Apple Siri
  • ☁️ AWS Certified Cloud Practitioner
  • 🔬 Research supervised by Prof. Vahid Alizadeh · targeting paper submission June 2026

Tech Stack

Core: Python · Java · SQL
AI/ML: PyTorch · TensorFlow · Scikit-learn · Hugging Face · MLflow · LangGraph · LangChain · Ollama
MLOps/Infra: Docker · Kubernetes · AWS · MLflow · Streamlit
Databases: PostgreSQL · Oracle SQL · MySQL


Featured Work

| Project | What it is | Stack |
| --- | --- | --- |
| agentic-llmops | Master's research: runtime hallucination monitoring in multi-agent LLM pipelines. Full ablation study, 8 conditions, 50 HumanEval problems. | LangGraph · Ollama · MLflow · Python |

Currently

  • 🔬 Phase 2 of LLMOps research — scaling to 100 HumanEval problems, testing Llama 8B as Critic
  • 📚 Working through Neetcode 150 (DSA prep)
  • 🎯 Actively interviewing for MLOps / AI Engineer roles — available June 2026 on OPT

Thanks for stopping by — feel free to explore the repos or connect on LinkedIn.
