An AI-powered full-stack web application that helps users understand their insurance policies through intelligent analysis, risk scoring, and a conversational chatbot — built with FastAPI, Next.js, and Groq's Llama 3.3.
Live Demo: rag-insurance-analyzer.vercel.app
Backend API Docs: rag-based-insurance-analyzer.onrender.com/docs
Insurance policies are notoriously difficult to understand — filled with legal jargon, hidden exclusions, and complex terms. I built this tool to make policy analysis accessible to everyone. Upload your PDF, get a clean breakdown, and chat with an AI that actually knows your policy.
- PDF Upload and Parsing — Upload any insurance policy PDF and get instant structured insights
- 5-Tab Results View — Coverage, Exclusions, Premiums, Claims, and Risk Score
- AI Risk Scoring — Overall risk score out of 10 with favorable and unfavorable aspects
- RAG Chatbot — Ask natural language questions, answered strictly from your policy data
- Chat Memory — Conversation history persisted per policy using localStorage
- User Dashboard — View and manage all your uploaded policies in one place
- Policy Delete — Remove old policies from your dashboard
- JWT Authentication — Secure login and signup with token-based auth
---
Backend
- FastAPI
- SQLAlchemy + SQLite
- pdfplumber for PDF extraction
- Groq Llama 3.3-70B for AI parsing and RAG chat
- passlib + bcrypt for password hashing
- python-jose for JWT tokens
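In production the app leans on passlib for hashing and python-jose for tokens; as a rough, stdlib-only illustration of what an HS256 JWT actually is under the hood (the real code would call `jose.jwt.encode`/`decode` instead), consider:

```python
import base64
import hashlib
import hmac
import json

SECRET = "change-me"  # in the real app this would come from the .env file


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def encode_jwt(claims: dict, secret: str = SECRET) -> str:
    """Build an HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"


def verify_jwt(token: str, secret: str = SECRET) -> dict:
    """Recompute the signature and return the claims only if it matches."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    # restore the stripped base64 padding before decoding
    return json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))


token = encode_jwt({"sub": "user@example.com"})
print(verify_jwt(token)["sub"])  # user@example.com
```

This is a sketch for intuition only: python-jose adds algorithm negotiation, expiry (`exp`) checks, and more, which is why the project uses it rather than hand-rolled signing.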
Frontend
- Next.js 14 with App Router
- Tailwind CSS
- localStorage for chat persistence
Deployment
- Vercel for frontend
- Render for backend
- Python 3.11+
- Node.js 18+
- Groq API Key — free at console.groq.com
```bash
git clone https://github.com/rahulchand017/RAG_Based_Insurance_Analyzer.git
cd RAG_Based_Insurance_Analyzer
python -m venv venv
venv\Scripts\Activate.ps1    # Windows
source venv/bin/activate     # Mac/Linux
pip install -r requirements.txt
```

Create a `.env` file:
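The exact variable names are not documented here; a likely minimal `.env`, assuming the backend reads the Groq key from `GROQ_API_KEY` and a JWT signing secret from `SECRET_KEY` (check the source for the real names):

```env
GROQ_API_KEY=your_groq_api_key_here
SECRET_KEY=any_long_random_string
```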
```bash
uvicorn main:app --reload
# Runs at http://localhost:8000
# API docs at http://localhost:8000/docs
```

```bash
cd frontend
npm install
```

Create a `.env.local` file:
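The variable name below is an assumption (Next.js only exposes variables prefixed with `NEXT_PUBLIC_` to the browser), so verify it against the frontend code:

```env
NEXT_PUBLIC_API_URL=http://localhost:8000
```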
```bash
npm run dev
# Runs at http://localhost:3000
```

| Method | Endpoint | Description |
|---|---|---|
| POST | /register | Create account |
| POST | /login | Login and get token |
| POST | /upload-policy | Upload PDF |
| POST | /analyze-policy/{id} | Run AI analysis |
| GET | /policy/{id} | Get full policy data |
| GET | /my-policies | Get user's policies |
| DELETE | /policy/{id} | Delete a policy |
| POST | /chat | Ask question via RAG |
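As a sketch of how a client would hit the chat endpoint with a JWT from `/login`, here is a stdlib-only request builder. The JSON field names (`policy_id`, `question`) are assumptions about the request schema, not taken from the API docs:

```python
import json
from urllib import request

API = "http://localhost:8000"  # local backend from the setup above


def chat_request(token: str, policy_id: int, question: str) -> request.Request:
    """Build an authenticated POST to the /chat endpoint.

    The body field names here are assumed; check the /docs page
    for the actual request schema.
    """
    body = json.dumps({"policy_id": policy_id, "question": question}).encode()
    return request.Request(
        f"{API}/chat",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # JWT returned by /login
        },
        method="POST",
    )


req = chat_request("eyJ...", 1, "What is my deductible?")
print(req.full_url, req.get_method())  # http://localhost:8000/chat POST

# To actually send it (requires the backend to be running):
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```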
- User asks a question about their policy
- Backend fetches all structured data for that policy from the database
- The data is injected as context into the Groq prompt
- Llama 3.3-70B answers strictly from that policy context, which keeps hallucinated details to a minimum
- Response is returned to the user in the chat UI
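The context-injection step above (fetch structured policy data, fold it into the prompt) can be sketched as follows. The prompt wording and the Groq model name in the comment are assumptions about how this project implements it:

```python
def build_rag_prompt(policy: dict, question: str) -> list[dict]:
    """Inject the policy's structured data as context and instruct the
    model to answer only from it (steps 2-3 of the flow above)."""
    context = "\n".join(f"{k}: {v}" for k, v in policy.items())
    system = (
        "You are an insurance policy assistant. Answer ONLY from the "
        "policy data below. If the answer is not in the data, say so.\n\n"
        f"POLICY DATA:\n{context}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]


messages = build_rag_prompt(
    {"insurer": "Acme Mutual", "deductible": "$500"},  # hypothetical policy data
    "What is my deductible?",
)

# The real endpoint would then hand these messages to Groq, roughly:
# from groq import Groq
# reply = Groq().chat.completions.create(
#     model="llama-3.3-70b-versatile", messages=messages
# ).choices[0].message.content

print(messages[1]["content"])  # What is my deductible?
```

Grounding the system prompt in database rows rather than raw PDF text is what lets the chatbot answer precisely per policy without re-parsing the document on every question.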
Rahul Chand
GitHub: @rahulchand017
If you found this useful, consider giving it a star on GitHub.