essay / developer-tools
Building a meta-AI tool: analyzing your AI usage patterns with AI
PANDA is a VS Code extension that reads your Copilot and Claude Code sessions, computes developer analytics, and generates narrative reports about how you actually work with AI.
I kept telling my team to measure their AI effectiveness, but I had no tool to offer them. Usage rates tell you adoption happened. They tell you nothing about whether that adoption is working.
So I built PANDA (Prompt Analytics for Developer AI): a VS Code extension that reads your local AI coding sessions, computes stats across every interaction, and generates a narrative report about your patterns.
What it actually measures
PANDA registers as a Copilot Chat participant. Type @panda /report and it parses every session you have had with both GitHub Copilot Chat and Claude Code, entirely locally.
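Under the hood this is the standard chat participant API. A minimal sketch of the wiring, with the participant ID and handler body as illustrative placeholders rather than PANDA's actual code:

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // Register the @panda handle; slash commands arrive on request.command.
  const participant = vscode.chat.createChatParticipant(
    'panda.analytics', // hypothetical ID
    async (request, _chatContext, stream, _token) => {
      if (request.command === 'report') {
        stream.progress('Parsing local Copilot and Claude Code sessions...');
        // parse sessions, compute analytics, then render the HTML dashboard
      }
    }
  );
  context.subscriptions.push(participant);
}
```

The handle also has to be declared under contributes.chatParticipants in package.json, which is where the /report slash command gets defined.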
For Copilot, it reads the JSONL session files from VS Code’s workspace storage, using better-sqlite3 to pull session metadata from the state database. For Claude Code, it reads the conversation transcripts from ~/.claude/projects/. Each tool stores data differently, so the parsers handle two distinct formats and normalize them into a common session model.
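To make the two sources comparable, everything funnels into one shape. A sketch of the Claude Code side, with the transcript field names (type, timestamp) as assumptions about the on-disk schema rather than a documented format:

```typescript
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

interface NormalizedSession {
  source: 'copilot' | 'claude-code';
  project: string;
  turns: { timestamp: Date; role: 'user' | 'assistant'; toolCalls: string[] }[];
}

function parseClaudeProjects(): NormalizedSession[] {
  const root = path.join(os.homedir(), '.claude', 'projects');
  const sessions: NormalizedSession[] = [];
  if (!fs.existsSync(root)) return sessions;
  for (const projectDir of fs.readdirSync(root)) {
    for (const file of fs.readdirSync(path.join(root, projectDir))) {
      if (!file.endsWith('.jsonl')) continue;
      // One JSON object per line; each line is a transcript event.
      const lines = fs.readFileSync(path.join(root, projectDir, file), 'utf8')
        .split('\n').filter(Boolean);
      const turns = lines.map((line) => {
        const entry = JSON.parse(line);
        return {
          timestamp: new Date(entry.timestamp),
          role: entry.type === 'user' ? ('user' as const) : ('assistant' as const),
          toolCalls: [] as string[], // tool-use extraction omitted for brevity
        };
      });
      sessions.push({ source: 'claude-code', project: projectDir, turns });
    }
  }
  return sessions;
}
```

The Copilot parser produces the same NormalizedSession records, so everything downstream is source-agnostic.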
From there it computes: total sessions, requests, response times (min, median, p95), tool usage frequency, per-project breakdowns, daily activity heatmaps, model distribution, and time-of-day patterns. It also detects friction: API errors, retries, slow responses over 60 seconds, and rejected tool calls where the developer said no to a suggested action.
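The latency numbers reduce to a sort and an index. A sketch, assuming response times are collected in milliseconds per request:

```typescript
// Nearest-rank percentiles over per-request response times (ms).
function latencyStats(samples: number[]) {
  if (samples.length === 0) return null;
  const sorted = [...samples].sort((a, b) => a - b);
  const pick = (q: number) =>
    sorted[Math.min(sorted.length - 1, Math.floor(q * sorted.length))];
  return {
    min: sorted[0],
    median: pick(0.5),
    p95: pick(0.95),
    slowResponses: sorted.filter((ms) => ms > 60_000).length, // friction signal
  };
}
```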
The report
The interesting part is not the stats. It is the narrative. PANDA sends the computed analytics to the language model with a system prompt requesting eight specific sections: a persona assessment, an at-a-glance summary, usage pattern analysis, wins, friction analysis, improvement roadmap, horizon scan, and a fun closing.
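Inside an extension this goes through the vscode.lm API. A sketch with the prompt paraphrased from the section list above, not PANDA's exact wording:

```typescript
import * as vscode from 'vscode';

async function generateNarrative(analytics: unknown, token: vscode.CancellationToken) {
  // Pick whichever Copilot-backed model is available to the extension.
  const [model] = await vscode.lm.selectChatModels({ vendor: 'copilot' });
  const prompt =
    'Write a developer AI usage report with eight sections: persona assessment, ' +
    'at-a-glance summary, usage pattern analysis, wins, friction analysis, ' +
    'improvement roadmap, horizon scan, and a fun closing.\n\n' +
    'Analytics:\n' + JSON.stringify(analytics, null, 2);
  const response = await model.sendRequest(
    [vscode.LanguageModelChatMessage.User(prompt)],
    {},
    token
  );
  let narrative = '';
  for await (const chunk of response.text) narrative += chunk; // streamed fragments
  return narrative;
}
```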
The output is a self-contained HTML dashboard with CSS-only animated bar charts, an activity heatmap, time-of-day distribution, and a badge grid. Fifteen achievement badges (Centurion for 100+ turns, Night Owl for 50+ turns between midnight and 6 AM, Marathon Runner for a session over 30 minutes, and so on) add a gamification layer that makes people actually want to check their stats.
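The badge layer is just predicates over the computed stats. A sketch using the three thresholds named above; the stats shape and the remaining badges are illustrative:

```typescript
interface Stats {
  totalTurns: number;
  nightTurns: number;           // turns between midnight and 6 AM
  longestSessionMinutes: number;
}

const BADGES: { name: string; earned: (s: Stats) => boolean }[] = [
  { name: 'Centurion', earned: (s) => s.totalTurns >= 100 },
  { name: 'Night Owl', earned: (s) => s.nightTurns >= 50 },
  { name: 'Marathon Runner', earned: (s) => s.longestSessionMinutes > 30 },
  // ...twelve more in the same shape
];

const earnedBadges = (s: Stats) =>
  BADGES.filter((b) => b.earned(s)).map((b) => b.name);
```

Adding a badge is one line, which is roughly how the set grew to fifteen.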
A share button renders a canvas-based PNG card for social sharing, which is how the reports started spreading across our team without me pushing them.
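The card needs no server: the webview draws onto an offscreen canvas and exports a PNG. A sketch with an illustrative layout and stats shape:

```typescript
function exportShareCard(stats: { sessions: number; badges: number }) {
  const canvas = document.createElement('canvas');
  canvas.width = 1200;
  canvas.height = 630; // common social-card aspect ratio
  const ctx = canvas.getContext('2d')!;
  ctx.fillStyle = '#1e1e2e';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#ffffff';
  ctx.font = 'bold 64px sans-serif';
  ctx.fillText('My PANDA report', 60, 120);
  ctx.font = '40px sans-serif';
  ctx.fillText(`${stats.sessions} sessions · ${stats.badges} badges`, 60, 220);
  // toDataURL yields a PNG the user can download or paste into a post.
  const link = document.createElement('a');
  link.href = canvas.toDataURL('image/png');
  link.download = 'panda-report.png';
  link.click();
}
```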
Why this matters beyond the tool
The data PANDA surfaces answers questions I have been writing about for months. Are your senior engineers rejecting AI suggestions at a higher rate than juniors? That shows up in the friction metrics. Are developers iterating with the AI or accepting first outputs? That shows up in turns-per-session and tool call patterns. Is someone using AI heavily but only for trivial tasks? The per-project and model distribution data reveals that.
The tool does not judge. It just makes the invisible visible. What developers do with that information is up to them, but in my experience, just seeing the data changes behavior.