prism - session intelligence for Claude Code
Python TUI that finds where extra tokens are burned in Claude Code sessions, why sessions fail, and what to fix. Built on Textual, focused on debugging your own usage.
PRISM is the diagnostic for "why is my Claude Code session burning so many tokens?" - a Python TUI built on Textual that reads your local session files and tells you where the tokens went, whether your CLAUDE.md rules are actually being followed, and exactly what to change.
The framing in the README earns its place. Real session data from a single machine surfaced: a project with a 6738% CLAUDE.md re-read cost (a 237-line file re-read on every tool call); a project where CLAUDE.md re-reads consumed 480% of total session tokens (more spent on instructions than on actual work); four edits to migration files in a project whose rules said never to touch them; and five consecutive tool failures in a single session with no diagnosis. None of that was visible before; the token counter just said you hit your limit.
If Codeburn tells you which projects burn tokens and abtop tells you what's running right now, PRISM is the third piece - it tells you why a specific session went sideways and gives you concrete fixes.
Quick start
pip install prism-cc

prism analyze     # Rich-formatted health report, then exit
prism advise      # CLAUDE.md recommendations
prism             # full interactive TUI dashboard
prism dashboard   # generate HTML dashboard and open in browser
prism watch       # live dashboard for the running session
Or as a Claude Code plugin:
/plugin marketplace add jakeefr/prism
/plugin install prism@prism
/reload-plugins
Then ask Claude: "analyze my Claude Code sessions." The plugin auto-detects whether pip is installed and walks you through it if not.
Python 3.11+. No API key needed. Reads local files only. macOS, Linux, Windows.
What you actually see
The grade table is the headline:
| Project | Overall | Token Eff. | Tool Health | Ctx Hygiene | MD Adherence | Continuity |
|---|---|---|---|---|---|---|
| myapp | C+ | D | B+ | D | C | A |
| ai-assistant | C | F | A | B | B+ | A- |
| data-pipeline | C+ | C+ | D | B | C+ | B |
| web-scraper | C+ | D+ | B | B+ | B | A |
| cli-tool | B+ | B+ | A- | B+ | A | A |
Followed by the advisor with concrete diff recommendations:
PRISM ADVISOR - recommendations for myapp
TRIM (High impact - silent token drain every session)
Remove lines 120-148: personality/tone instructions
Claude Code's system prompt already handles this.
These 29 lines cost tokens on every single tool call.
RESTRUCTURE (Reduce root-level re-read cost)
Move 3 rules to subdirectory CLAUDE.md files:
- "Use functional components only in React"
- "Import from @/components, never relative paths"
- "Run bun run typecheck after TypeScript changes"
These only matter in src/, and loading them globally wastes
tokens in every session that doesn't touch that directory.
This is the difference between "your sessions are expensive" and "delete these specific 29 lines." The advisor produces actionable diffs, not platitudes.
The five dimensions
| Dimension | What PRISM measures |
|---|---|
| Token Efficiency | CLAUDE.md re-read costs, cache hit patterns, compaction frequency |
| Tool Health | Retry loops, edit-revert cycles, consecutive failures, interactive command hangs |
| Context Hygiene | Compaction loss events, mid-task boundaries, sidechain fragmentation |
| CLAUDE.md Adherence | Whether your rules are actually being followed, or ignored mid-session |
| Session Continuity | Resume success rate, context loss on restart, truncated session files |
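PRISM's exact scoring is internal, but the letter grades in the table above imply a mapping from a numeric dimension score to a letter. A minimal sketch, with threshold values that are assumptions rather than PRISM's actual cutoffs:

```python
def letter_grade(score: float) -> str:
    """Map a 0-100 dimension score to a letter grade.

    The cutoff values here are illustrative assumptions,
    not PRISM's real scoring bands.
    """
    bands = [(97, "A+"), (93, "A"), (90, "A-"),
             (87, "B+"), (83, "B"), (80, "B-"),
             (77, "C+"), (73, "C"), (70, "C-"),
             (67, "D+"), (63, "D")]
    for cutoff, grade in bands:
        if score >= cutoff:
            return grade
    return "F"
```

Whatever the real bands are, the advice in the trade-offs section holds: the letter is a comparison aid across projects, not a target in itself.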
The CLAUDE.md adherence dimension is the most interesting one philosophically. It automates the "Mr. Tinkleberry test" from a 748-upvote HN comment: put an absurd instruction in your CLAUDE.md and see when Claude stops following it. If Claude stops mid-session, your file has grown too long and adherence is degrading. PRISM runs that test across all your real sessions.
The CLAUDE.md re-read problem
This is the issue PRISM is built around and the one most people don't realise they have.
Every tool call Claude Code makes re-reads your CLAUDE.md from the top of context. A 200-line CLAUDE.md × 50 tool calls = 10,000 lines of instructions reprocessed per session, and you pay tokens for every one of them. If your CLAUDE.md has grown to include personality instructions, full documentation copies, or rules that only apply to one subdirectory, you pay for all of it every time.
PRISM measures this exactly and tells you which lines are costing you the most. That's the kind of analysis that turns "my sessions feel expensive" into a 30-second fix.
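The shape of that cost is easy to estimate yourself. A sketch using the same chars/4 token heuristic PRISM falls back to for the JSONL source (the file contents and call count below are made-up inputs):

```python
def estimate_reread_cost(claude_md_text: str, tool_calls: int) -> int:
    """Estimate tokens spent re-reading CLAUDE.md across one session,
    using the rough chars/4 heuristic (real API token counts come
    from the agentsview source instead)."""
    tokens_per_read = len(claude_md_text) // 4
    return tokens_per_read * tool_calls

# A 100-line file at ~36 chars/line, re-read on 50 tool calls:
text = "Always run tests before committing.\n" * 100
print(estimate_reread_cost(text, 50))  # 45000
```

The point of the exercise: the cost scales with file length times tool calls, so trimming lines pays off in every future session, not once.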
Two data sources
JSONL (default) - reads raw session files from ~/.claude/projects/. Zero setup, works out of the box.
agentsview - reads from an agentsview SQLite database. agentsview parses and normalises Claude Code sessions into a queryable DB, so PRISM gets richer data: real API token counts (instead of the chars/4 heuristic) and agentsview's own health_score/health_grade/outcome per session, shown alongside PRISM's grades.
prism analyze --source agentsview
prism analyze --source agentsview --agentsview-db /path/to/sessions.db
Resolution order when --agentsview-db isn't specified: AGENTSVIEW_DATA_DIR, then AGENT_VIEWER_DATA_DIR, then ~/.agentsview/sessions.db.
How it works
You use Claude Code normally
|
v
Claude Code writes session files to ~/.claude/projects/
|
v
PRISM reads and analyzes those files (JSONL or agentsview DB)
|
v
Health scores + root cause diagnosis + CLAUDE.md diff
PRISM never touches Claude Code. It never modifies your sessions. It reads JSONL files Claude Code already writes and surfaces what's inside them.
prism advise --apply is the only command that writes anything, and it confirms before doing so. prism advise without --apply only prints recommendations.
When to reach for it
- Your sessions feel expensive and you can't point at a cause.
- Your CLAUDE.md has grown over months and you suspect it's no longer working.
- You're seeing inconsistent rule adherence and want to know if the file is the problem.
- You're optimising for cost or context window and want a baseline.
When not to
- Single-session, single-project workflows where the issue is obviously the prompt.
- Sessions that don't touch CLAUDE.md or rules. PRISM's biggest leverage is on the rule-following analysis.
- If you don't use Claude Code. PRISM is Claude Code-specific - it reads CC's session format.
Trade-offs
Read-only by default, write only on explicit --apply confirmation, no network calls, no telemetry, no external servers. Four well-maintained Python deps (textual, rich, typer, watchdog), no C extensions, no compiled binaries.
The grades are heuristic. A "C" doesn't mean your project is bad - it means there's headroom. Use the advisor's specific recommendations as the actionable signal; the grades are useful for comparing across projects but don't fixate on improving the letter.
prism replay <session-id> lets you scrub through a session timeline. Underrated for the post-mortem case where a specific session went sideways and you want to see exactly what happened.
MIT. The author asks for issues when you find interesting patterns in your own session data - real-world examples improve the detection logic.
Featured in
Claude Code tools, plugins, and integrations
The best tools, MCP servers, and harnesses for getting more out of Claude Code - orchestration, observability, telemetry, and remote control.
Terminal UIs (TUIs) for daily workflows
Polished terminal UIs for git, tasks, observability, and agent control - the tools that make the terminal feel like a real surface again.
Observability for AI coding agents
Tools that show you what your coding agents are actually doing: token spend, session state, tool calls, and parallel execution.
Related entries
claudetop - htop for Claude Code sessions
Real-time terminal monitor for Claude Code: cost, cache efficiency, model comparison, and alert thresholds. Targeted at users running long agent sessions who need spend visibility.
Recall - TUI search across agent session history
Local-first Rust TUI that searches Claude Code, Codex, and OpenCode session history with hybrid full-text plus semantic retrieval. Built on ratatui.
jeeves - TUI for browsing AI agent sessions
Terminal UI to search, preview, read, and resume Claude Code and Codex sessions in a unified view. More framework integrations planned.
coding_agent_session_search - 11-provider session search
Rust TUI and CLI that indexes and searches local coding-agent session history across Codex, Claude Code, Gemini, Cursor, Aider and seven other providers.