# gitreflect

Your git history already holds the story. gitreflect helps you read it.

Most engineers ship meaningful work every week — then scramble to summarize it from memory when a deadline hits. gitreflect is a skill suite for AI coding agents that reads your git history, synthesizes what you actually accomplished (not just commit messages), and turns it into structured reflections: weekly snapshots, monthly reviews, quarterly narratives, and career-level retrospectives.

It doesn't just report what you did. Over time, it helps you see the arc — what grew, what shipped, what mattered.

Built for engineers and researchers who do significant work across many repos and deserve a clear-eyed record of it.

License: MIT · Works with Claude Code, Gemini CLI, OpenCode, and Codex CLI.


## What a reflection looks like

**Weekly snapshot — what shipped this week, per repo, ~2 min**

## Week ending 2026-04-04

### 1. Technical Work & Project Progress

**distributed-cache**
- Replaced single-node Redis with a three-node cluster to eliminate the availability
  bottleneck under peak load; failover is automatic and has been verified in staging.
- Resolved a serialization mismatch causing silent cache misses on nested objects —
  root cause was inconsistent key hashing between writer and reader services.

**ml-pipeline**
- Refactored the feature extraction stage to run in parallel across dataset shards,
  cutting preprocessing time from 4h to 45min on the standard benchmark.
- Started integrating with the new model serving API; single-node inference path passing.

### 2. Meetings & Collaboration
- Attended team standup, sprint planning, and architecture review.
- Synced with the platform team on shared storage quota and access patterns.

### 3. Community & Service
- Reviewed one upstream dependency pull request.
- Participated in department seminar.

### 4. Papers, Talks & Misc
- Submitted camera-ready version of the workshop paper.
- Drafted outline for the system design section of the conference submission.

**Monthly reflection — what moved, what mattered, and what you're growing into**

## April 2026 — Monthly Reflection

### Goal Tracking (from March 2026)

✅ **Deploy distributed caching to production**
   → Shipped. Three-node cluster live; p99 latency down 62% under peak load.

⚠️ **Complete conference paper draft**
   → Two of four sections drafted. Experiments section pending final benchmark runs.
     Targeting submission by May 15.

❌ **Onboard external collaborator**
   → Blocked by access provisioning at the partner institution.
     Following up; rescheduled for May.

### Project Progress & Status

**Distributed Cache**
Moved from prototype to production. The three-node cluster handles failover correctly
and has been stable under real traffic for two weeks. Monitoring dashboards are in place
and the on-call runbook is written. Next: tuning eviction policy for the long-tail
workload pattern observed in production.

**ML Pipeline**
Feature extraction parallelization is complete and merged. End-to-end training time
dropped from 6h to 1.5h on the full dataset. Multi-node inference integration is in
progress; single-node path works, multi-node path under testing.

### Impacts — Internal
The caching improvement unblocked two downstream teams whose services were timing out
under load. The pipeline speedup cuts the team's weekly experiment iteration cycle
from two days to half a day.

### Impacts — External
Workshop paper camera-ready submitted with open-source benchmark scripts to enable
reproducibility.

### Personal Growth & Skill Development
Deepened hands-on knowledge of distributed caching: eviction strategies, consistent
hashing, and failover behavior under network partition. Took end-to-end ownership of
the production rollout, including writing the runbook and handling the first on-call
incident independently.

### Future Vision & Goals for Next Month
- Complete and submit the conference paper by May 15.
- Finish multi-node inference integration and run end-to-end benchmark.
- Resolve external collaborator access and begin onboarding.
- Present the caching architecture to the broader engineering team.

### Questions / Discussion Points for 1:1
- Priority trade-off: paper deadline vs. pipeline features if they conflict in May?
- Production path for multi-node inference — who is the approver for the rollout?
- Long-term: should the caching layer become a shared platform service?

**Quarterly review — initiative arcs, cumulative impact, honest quarter-over-quarter accounting**

## Q1 2026 — Quarterly Review

### Goal Tracking (from Q4 2025)

✅ **Production-grade distributed caching for the ML platform**
   → Live. Three-node cluster serving full traffic; p99 latency down 62%.

✅ **Parallelized ML pipeline with 4x throughput improvement**
   → Complete. End-to-end training time cut from 6h to 1.5h.

⚠️ **Conference paper submission**
   → Abstract accepted. Full paper 60% complete; targeting May 15 deadline.

❌ **External collaboration established**
   → Delayed by access provisioning. Onboarding rescheduled to April.

### Project Progress & Status

**Platform Infrastructure**
Delivered the distributed caching layer and progressed the multi-node inference
integration. Caching is fully in production. Inference integration is 70% complete —
single-node shipped, multi-node in testing. Together these make the platform capable
of serving the next model generation at production scale.

**ML Pipeline Modernization**
Completed the parallelization initiative. The 4x speedup changes the team's experiment
velocity. The architecture is also designed for extension to streaming datasets, which
is the next phase.

**Research Output**
Workshop paper published. Conference submission in active writing; experiments complete,
analysis and write-up ongoing.

### Impacts — Internal
Two downstream teams unblocked by the caching improvement. Experiment velocity
meaningfully improved for the full team. On-call burden reduced through automated
failover and documented runbooks.

### Impacts — External
Workshop paper published with open-source benchmark scripts. Conference abstract
accepted; reproducibility package will accompany the full paper.

### Personal Growth & Skill Development
Went from contributor to owner on two significant infrastructure initiatives. Developed
production operations experience — incident response, capacity planning, and cross-team
coordination for shared infrastructure. Grew technical writing skills through two paper
submission cycles.

### Future Vision & Goals for Next Quarter
- Submit conference paper by May 15 and begin revision cycle.
- Ship multi-node inference integration and run production load test.
- Complete external collaborator onboarding; start joint experiment series.
- Scope and begin the streaming dataset extension for the pipeline.

### Questions for 1:1 with Manager/Skip Manager
- How to best position the infrastructure work in the conference paper narrative?
- Is the on-call rotation sustainable as we add more production services?
- Career track: systems engineering depth vs. broader research breadth at this stage?

**Mid-year retrospective — where you grew, what you led, what comes next**

## H1 2026 — Mid-Year Retrospective

### Goal Tracking (from H2 2025)

✅ **Ship production ML platform infrastructure**
   → Delivered distributed caching and parallelized training pipeline. Both in
     production, stable, and adopted by partner teams.

✅ **First-author paper at a peer-reviewed venue**
   → Conference paper submitted. Workshop paper published with open-source release.

⚠️ **Establish external research collaboration**
   → Active but delayed three months. Joint experiments now running; first results
     expected in Q3.

### High-Level Project Achievements

**ML Platform Infrastructure**
Designed and delivered two production systems from scratch: distributed caching
(62% p99 latency reduction) and parallelized training pipeline (4x throughput).
Both are in active use by the team and two partner teams. The caching architecture
is being adopted as the reference design for new platform services.

**Research Publications**
Completed a full research cycle from hypothesis to publication. Workshop paper
published with reproducibility package. Conference paper submitted with a broader
scope and full production evaluation.

**External Collaboration**
Established a joint research program with a partner institution. Despite delays,
the collaboration is active and producing shared datasets and benchmark results.

### Impacts — Group/Organization Level
Platform improvements directly reduced infrastructure costs and unblocked two
dependent teams. The caching runbook and architecture docs are now used as the
reference design for new services. On-call load reduced by 40% through automation.

### Impacts — Science/Research Community
Workshop paper published; open-source benchmark scripts released with 80+ stars
in the first month. Conference paper under review; reproducibility package will
accompany publication.

### Technical Capability Growth
Developed production-level expertise in distributed systems: consistent hashing,
replica failover, and eviction policy design under real workload constraints. Expanded
into ML infrastructure — pipeline orchestration, data-parallel training, and model
serving at scale. Can now design and own a production ML system end-to-end.

### Responsibility & Leadership Growth
Shifted from feature contributor to infrastructure owner on two systems that multiple
teams depend on. Took on production on-call responsibility for both. Led the first
cross-team rollout I've driven independently, requiring alignment across three
engineering teams.

### Potential Future Responsibilities
- Technical lead for the ML serving platform, covering infrastructure and the serving
  API that model teams depend on.
- Mentoring role for the incoming intern cohort on distributed systems and ML ops.
- Co-lead on the next conference submission building on the published work.

### Interests, Goals & Vision for H2
- Submit the revised conference paper and respond to reviewer feedback.
- Ship the streaming dataset extension and measure its impact on experiment velocity.
- Publish the first joint collaboration results — targeting a workshop or short paper.
- Develop a technical talk on the platform architecture for an internal or external venue.
- Begin exploring ML compiler toolchains, an area I want to build depth in.

### Questions for Manager/Skip Manager
- Is the infrastructure ownership scope I've taken on sustainable long-term, or should
  we plan to distribute it as the platform grows?
- How does the systems work this half-year translate to career progression?
- Is there budget for attending the main conference if the paper is accepted?
- What does the path to a staff-level role look like from here?

**Yearly retrospective — the full arc: what you built, how you grew, and where you're headed**

## 2026 — Yearly Retrospective

### Goal Tracking (from 2025)

✅ **Build production ML platform infrastructure**
   → Delivered distributed caching, parallelized training, and multi-node serving.
     All three in production and serving the team and two external teams.

✅ **Publish research at a peer-reviewed venue**
   → Conference paper accepted and presented. Workshop paper published with 120+
     GitHub stars and adoption of the benchmark by external groups.

⚠️ **Establish a long-term external collaboration**
   → Active, but slower than planned. Produced two joint datasets and one paper draft.
     Collaboration continues into 2027.

❌ **Build depth in ML compiler toolchains**
   → Deprioritized due to infrastructure load. Carrying forward as a 2027 investment.

### High-Level Project Achievements

**ML Platform — Full Stack**
Designed, delivered, and now own a production ML platform used by the team and two
partner teams. Covers distributed caching (62% latency reduction), parallelized
training (4x throughput), multi-node inference (10x model size capacity), and
streaming dataset ingestion. The platform is the team's primary compute surface for
experiments and production model serving.

**Research Publications**
Completed a full arc from hypothesis to conference presentation. Paper accepted and
presented at the main venue; workshop paper published with an open-source release
cited in four external follow-on papers within the year.

**External Collaboration**
Established and maintained a joint research program. Produced shared datasets,
co-authored one paper draft, and co-designed benchmarks now used by both institutions.

### Impacts — Group/Organization Level
Platform infrastructure saved an estimated 300+ engineer-hours per month in experiment
cycle time across the team and partners. Two dependent teams were unblocked and able
to ship. Infrastructure documentation now serves as the reference design for three new
services built by other engineers.

### Impacts — Science/Research Community
Conference paper presented; four external citations within the year. Benchmark suite
released as open source with 120+ stars. Reproducibility package adopted by at least
two external research groups. External collaboration produced publicly accessible
datasets.

### Technical Capability Growth
Grew from a general software engineering background to deep expertise in ML
infrastructure: distributed training, model serving at scale, caching architecture,
and ML pipeline design. Can now architect, build, and operate a production ML platform
independently. Developed sustained technical writing through two peer-reviewed
publications.

### Responsibility & Leadership Growth
Moved from individual contributor to infrastructure owner and on-call lead for two
production systems. Led three cross-team coordination efforts. Mentored two junior
engineers from onboarding to first production ownership. Drove the external
collaboration independently across institutions.

### Potential Future Responsibilities
- Staff engineer or tech lead for the ML platform — owning the roadmap and the growing
  community of teams that depend on it.
- Principal investigator or co-PI on a follow-on research grant building on the
  published platform work.
- Internal mentor program lead for engineers entering the ML infrastructure space.

### Interests, Goals & Vision for Next Year
- Lead the next major platform initiative: ML compiler integration or hardware
  accelerator support, areas where I want to build depth.
- Submit a follow-on paper extending the published work into the streaming domain.
- Grow the external collaboration into a named joint project with shared funding.
- Develop and deliver a technical talk at an external conference or workshop.
- Invest dedicated time in ML compiler toolchains — the investment I deferred this year.
- Mentor at least one engineer from onboarding through their first production ownership.

### Questions for Manager/Skip Manager / Performance Discussion
- Does the infrastructure ownership I've taken on reflect a staff-level contribution,
  and how is that reflected in the review cycle?
- What is the team's plan for distributing on-call burden as the platform scales?
- Is the research trajectory something the org wants to support long-term, and with
  what resources?
- What is the one thing I should focus on to advance to the next level?
- Is there appetite for submitting a grant proposal around this platform work?

## Quick start — Claude Code (recommended)

**macOS / Linux**

```shell
# 1. Clone
git clone https://github.com/your-username/gitreflect.git
cd gitreflect

# 2. Install (symlinks skills into ~/.claude/commands/)
bash install.sh

# 3. Configure — edit the three required fields
cp settings.example.yaml settings.yaml
open settings.yaml          # or: nano settings.yaml / vim settings.yaml
```
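To confirm step 2 worked, you can list what the installer symlinked — a quick sanity check, assuming the default `~/.claude/commands/` location:

```shell
# Confirm the five report skills are linked into Claude Code's commands dir
ls ~/.claude/commands/*-report.md
```

If the glob matches nothing, the install didn't complete — re-run `bash install.sh` and check its output.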

**Windows**

**Option A — WSL (recommended):** Use this if you already have WSL2 installed. Open a WSL terminal, navigate to the cloned repo, and follow the macOS/Linux steps above exactly — WSL runs a full Linux environment, so everything works the same way.

**Option B — Git Bash:** Use this if you have Git for Windows installed but not WSL. The steps are identical to macOS/Linux — the only difference is the shell.

Open Git Bash inside the cloned repo directory: right-click the gitreflect folder in File Explorer and select "Git Bash Here". Then run the same commands:

```shell
bash install.sh
cp settings.example.yaml settings.yaml
```

Note: Symlink creation in Git Bash requires Developer Mode to be enabled on Windows 10/11 (Settings → Privacy & security → For developers → Developer Mode). If symlinks fail, use Option C instead.
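A related safeguard: Git for Windows honors the `MSYS` environment variable, and setting it to `winsymlinks:nativestrict` makes `ln -s` either create a real Windows symlink or fail loudly, rather than silently falling back to copying files:

```shell
# Make symlink failures explicit in Git Bash (still requires Developer Mode)
export MSYS=winsymlinks:nativestrict
```

Set it in the same Git Bash session before running `bash install.sh`.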

**Option C — PowerShell (no bash required)**

Open PowerShell in the cloned repo directory (shift-right-click the folder → "Open PowerShell window here"), then run:

```powershell
$repo = "$PWD"
$commands = "$env:USERPROFILE\.claude\commands"
New-Item -ItemType Directory -Force -Path $commands

foreach ($skill in @("weekly-report","monthly-report","quarterly-report","midyear-report","yearly-report")) {
    New-Item -ItemType SymbolicLink -Path "$commands\$skill.md" `
             -Target "$repo\$skill.md" -Force
}

# Create settings.yaml from the example (skip if you already have one)
Copy-Item settings.example.yaml settings.yaml

New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.config\reporting"
Copy-Item settings.yaml "$env:USERPROFILE\.config\reporting\settings.yaml"
```

Note: Symlink creation requires either Developer Mode enabled or running PowerShell as Administrator.

### Configure `settings.yaml`

Three fields to fill in:

```yaml
author:
  emails:
    - "you@your-org.com"      # your git commit email(s) — supports multiple

repo_paths:
  - "~/Repos"                 # root dir(s) to scan recursively for .git repos

output:
  reports_dir: "~/reports"    # base dir — subdirs weekly/, monthly/, etc. are auto-created
```

### Invoke

In any Claude Code session, type one of:

```
/weekly-report         reflect on this week's work across your repos (last 7 days by default)
/monthly-report        generate a monthly reflection with goal tracking and impact framing
/quarterly-report      generate a quarterly review across initiatives and cumulative impact
/midyear-report        generate a mid-year retrospective on growth, leadership, and H2 vision
/yearly-report         generate a yearly retrospective for performance review and career planning
```

Specify a period inline if you want to override the default:

```
/weekly-report for the week of March 24–30
/quarterly-report for Q1 2026
/yearly-report for 2025
```

## Other agents

### Gemini CLI — native slash commands

**macOS / Linux**

```shell
mkdir -p ~/.gemini/commands
for skill in weekly-report monthly-report quarterly-report midyear-report yearly-report; do
    ln -s "$(pwd)/${skill}.md" ~/.gemini/commands/${skill}.md
done
mkdir -p ~/.config/reporting
ln -s "$(pwd)/settings.yaml" ~/.config/reporting/settings.yaml
```

**Windows (PowerShell)**

```powershell
$repo = "$PWD"
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.gemini\commands"

foreach ($skill in @("weekly-report","monthly-report","quarterly-report","midyear-report","yearly-report")) {
    New-Item -ItemType SymbolicLink `
             -Path "$env:USERPROFILE\.gemini\commands\$skill.md" `
             -Target "$repo\$skill.md" -Force
}

New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.config\reporting"
Copy-Item settings.yaml "$env:USERPROFILE\.config\reporting\settings.yaml"
```

See Invoke for available commands — syntax is identical to Claude Code.

### OpenCode — native slash commands

**macOS / Linux**

```shell
mkdir -p ~/.config/opencode/commands
for skill in weekly-report monthly-report quarterly-report midyear-report yearly-report; do
    ln -s "$(pwd)/${skill}.md" ~/.config/opencode/commands/${skill}.md
done
mkdir -p ~/.config/reporting
ln -s "$(pwd)/settings.yaml" ~/.config/reporting/settings.yaml
```

**Windows (PowerShell)**

```powershell
$repo = "$PWD"
New-Item -ItemType Directory -Force -Path "$env:APPDATA\opencode\commands"

foreach ($skill in @("weekly-report","monthly-report","quarterly-report","midyear-report","yearly-report")) {
    New-Item -ItemType SymbolicLink `
             -Path "$env:APPDATA\opencode\commands\$skill.md" `
             -Target "$repo\$skill.md" -Force
}

New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.config\reporting"
Copy-Item settings.yaml "$env:USERPROFILE\.config\reporting\settings.yaml"
```

See Invoke for available commands — syntax is identical to Claude Code.

### Codex CLI — pass skill via stdin

No install needed. Pass the skill file at runtime:

**macOS / Linux**

```shell
cat weekly-report.md | codex -
cat monthly-report.md | codex -
```

**Windows (PowerShell)**

```powershell
Get-Content weekly-report.md | codex -
Get-Content monthly-report.md | codex -
```

See Invoke for the full list of available skills.

### Aider

Load the skill file as read-only context, then prompt:

**All platforms**

```
/read-only weekly-report.md
Follow the instructions in weekly-report.md to generate my weekly progress report.
```

Run Aider from your top-level repos root for cross-repo git access.

See Invoke for the full list of available skills.


## How it works

### Reading your commits, not just listing them

Raw commits are rarely reflection-ready. gitreflect groups related commits by file area and work-session proximity, reads the combined diff to understand the net change, and distills a cluster of activity into a single outcome sentence. Fix sequences ("try → fix → fix again → final") collapse into one result. Design decisions surface a rationale clause — "[what changed] to [why]" — extracted from commit bodies and diff context when available.
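As a rough illustration — not the tool's actual implementation — the raw material a reflection is distilled from is just filtered history plus the net diff of a commit cluster:

```shell
# Illustrative sketch: commits in a window, filtered to your identity,
# then the combined change of a cluster (5 commits here, arbitrarily)
git log --author="you@your-org.com" --since="7 days ago" --oneline
git diff --stat HEAD~5..HEAD
```

The synthesis step is reading that combined diff and writing one outcome sentence per cluster, rather than echoing each commit message.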

### Questions shaped by your template

Clarifying questions for things git can't see — meetings, papers, service activities — are generated directly from your `weekly.template` section headings. If your template has a "Conferences" section, it asks about that. If you change a heading, the questions change with it. Nothing is hardcoded to any org's format.

### Reflection across time

The longer the period, the deeper the reflection. Monthly and longer reports scan your saved reports directory for the prior period, extract the goals and intentions you wrote then, and honestly account for each one: ✅ delivered (with evidence), ⚠️ partial (what remains and why), or ❌ not yet (an honest reason and revised plan). Quarterly and longer reports can also ingest prior sub-period reports as additional context — so a quarterly review can draw on three months of monthly reflections, not just raw commits.
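Concretely, with the default `settings.yaml` layout, a longer-period run reads from the saved-reports directories. The filenames below are hypothetical, standing in for whatever your earlier runs produced:

```shell
# Hypothetical listing: prior monthlies a quarterly review can ingest
ls ~/reports/monthly/
# ...and the stated goals it will hold you to account for
grep -n "Goals for Next" ~/reports/monthly/*.md
```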


## Configuration reference

`settings.yaml` is gitignored — your personal settings never leave your machine. Copy `settings.example.yaml` to get started.

| Key | Description |
| --- | --- |
| `author.emails` | Git email(s) used to filter commits. Leave `[]` to auto-detect from `git config user.email`. A list supports multiple identities (work + personal). |
| `repo_paths` | Paths to search. Root dirs (e.g. `~/Repos`) and specific repo paths both work. Search is always recursive — any `.git/` at any depth is found. |
| `output.reports_dir` | Base directory. Reports save to `reports_dir/weekly/`, `reports_dir/monthly/`, etc. Subdirectories are created automatically. |
| `weekly.date_range` | `rolling` (last 7 days, default) or `calendar_week` (Mon–Sun of prior week). |
| `weekly.template` | Section headings for your weekly report. These drive report structure and clarifying questions. |
| `weekly.examples` | Sample weekly entries — guide tone, sentence length, and detail level. Two to three examples produce noticeably better output. |
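To preview what the recursive `repo_paths` scan will pick up, you can approximate it by hand — a rough equivalent, not the tool's own discovery logic:

```shell
# List every git repository under ~/Repos, at any depth
find ~/Repos -type d -name .git -prune 2>/dev/null | sed 's|/\.git$||'
```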

Monthly, quarterly, mid-year, and yearly reports use interactive section selection at runtime — no additional config needed.


## Report types

| Command | Period | Structure | Goal tracking | What it reflects |
| --- | --- | --- | --- | --- |
| `/weekly-report` | 7 days | Fixed (your template) | No | What shipped this week, per repo — outcomes and next steps |
| `/monthly-report` | ~30 days | Interactive sections | vs. last monthly | What moved on each project, its impact, and how you're growing |
| `/quarterly-report` | ~90 days | Interactive sections | vs. last quarterly | Initiative arcs, cumulative impact inside and outside the org |
| `/midyear-report` | ~6 months | Interactive sections | vs. last mid-year | Capability and leadership growth, and your vision for the second half |
| `/yearly-report` | ~12 months | Interactive sections | vs. last yearly | The full year's arc — what you built, how you led, and where you're headed |

## Customizing your reflection format

The `weekly.template` field controls the structure of your weekly snapshot and shapes which clarifying questions get asked. The reflection adapts to whatever format your org uses — replace the defaults to match:

```yaml
weekly:
  template: |
    1. Technical Work & Project Progress:
    2. Meetings & Collaboration:
    3. Community & Service:
    4. Papers, Talks & Misc:
```

Add examples to calibrate tone and detail. Two to three examples produce noticeably better output:

```yaml
  examples:
    - |
      Week ending YYYY-MM-DD

      1. Technical Work & Project Progress:
         - Deployed X, reducing latency by Y%.
         - Resolved multi-node stability issue caused by Z.
      ...
```

## License

MIT — see LICENSE.