Claude Code Insights

2,305 messages across 307 sessions (573 total) | 2026-02-09 to 2026-03-11

At a Glance
What's working: You've built a genuinely impressive automated PR review pipeline with custom skills that post structured GitHub reviews with inline comments — that's a power-user workflow most people haven't discovered yet. You're also effective at orchestrating complex multi-step git operations through Claude, like recovering from a 70k-line diff by restructuring into rename-tracking commits, and you've established a solid open source contribution research pipeline. Impressive Things You Did →
What's hindering you: On Claude's side, it frequently rushes ahead when you've explicitly asked for step-by-step review, and its first-pass analysis is often wrong — miscounting diffs, recommending already-resolved issues, staging files you didn't ask for — which burns your time on corrections. On your side, a lot of sessions end at the planning stage without reaching implementation, and Claude doesn't always get enough upfront constraints to avoid its recurring mistakes (like 'verify issue status before recommending' or 'don't commit without my approval'). Where Things Go Wrong →
Quick wins to try: Set up hooks to auto-run your linter or test suite before Claude commits — this would catch the pre-commit hook failures and buggy code issues that keep blocking your pushes. You could also consolidate your most common git patterns (commit, push, create PR, post review) into a single custom skill chain, since git operations are by far your most frequent task. Features to Try →
Ambitious workflows: As models get more capable, your /review skill could evolve into a full autonomous loop: review PR → apply only spec-validated fixes → run tests → commit only if green, eliminating the over-applied fixes and manual rollbacks you deal with now. Your large-scale refactoring work (like the 1,406 file rename) is also a prime candidate for parallel sub-agents that each handle a file subset, validate import graphs independently, and produce atomic rename-tracking commits — turning risky monolithic changes into safe parallel work. On the Horizon →
2,305
Messages
+49,712/-3,325
Lines
818
Files
28
Days
82.3
Msgs/Day

What You Work On

TypeScript Web Application Development ~45 sessions
Large-scale TypeScript codebase work including render props removal (~157 candidates identified, 1,406 file renames), wizard refactoring from multi-page to single-page patterns, reservation history UI implementation, and TVOD purchase benefits features. Claude Code was heavily used for multi-file refactoring, codebase analysis, and planning with Edit/Write tools, though several sessions stayed at the planning stage without completing implementation.
Automated PR Reviews & Git Workflow ~35 sessions
Building and using automated PR review skills (/review command) that post structured GitHub reviews with inline comments via the API, plus extensive git operations including branch management, merge aborts, commit splitting, and push workflows. Claude Code was essential for analyzing diffs, posting reviews via GitHub API, and managing complex git scenarios like rebasing large renames into clean commit histories.
Open Source Contributions & Issue Triage ~20 sessions
Contributing to open source projects including agent-browser and Appium-related tools — reproducing bugs, planning fixes, implementing viewport fixes, documenting features, and submitting PRs. Claude Code was used to investigate issues, reproduce bugs locally, create detailed fix plans, and manage the contribution workflow including syncing repos and tracking issues in ISSUES.md.
E2E Testing & QA Planning ~15 sessions
Planning and writing E2E tests for partner center Jira stories and reservation features, including accessing planning documents via Google Drive MCP integration. Claude Code was primarily used for test planning and codebase exploration, though most sessions ended at the planning stage rather than producing final test code.
DevOps & Developer Tooling ~12 sessions
Implementing a sudo-free SSL proxy dev server pattern across a monorepo, debugging 503 deployment errors, setting up MCP integrations, and writing blog content about browser tooling and context efficiency. Claude Code handled multi-file infrastructure changes and iterative debugging of proxy timing and URL navigation issues.
What You Wanted
Git Operations
36
Code Review
14
Bug Fix
14
Code Explanation
11
Code Modification
9
Commit And Proceed
9
Top Tools Used
Bash
2634
Read
1789
Edit
1014
Grep
472
Write
458
TaskUpdate
272
Languages
TypeScript
1690
Markdown
323
JSON
202
YAML
86
Python
58
Rust
44
Session Types
Iterative Refinement
46
Single Task
30
Multi Task
27
Quick Question
8
Exploration
8

How You Use Claude Code

You are a power user who treats Claude Code as a full-spectrum development partner, not just a code generator. Across 307 sessions in a single month with 2,305 messages, you leverage Claude heavily for git operations, PR workflows, and code reviews — your top goals center on git operations (36 sessions), code review (14), and bug fixes (14). You've built custom skills like `/review` for automated PR reviews, and you routinely chain together complex workflows: analyze a PR, post inline GitHub comments, commit, push, and create PRs all in one flow. Your TypeScript-heavy work (1,690 file touches) spans a large monorepo where you tackle ambitious refactors like renaming 1,406 files and fixing all imports.

Your interaction style is iterative and supervisory rather than hands-off. You frequently ask Claude to plan first — writing plan files, analyzing codebases, comparing against planning docs — before greenlighting implementation. When Claude rushes ahead (which happens often, as seen in the native mode session where it committed Stage 3 without waiting for approval), you pull it back with git resets and corrections. You're not afraid to interrupt, reject changes, or ask Claude to revert when it over-applies fixes or stages files you didn't request. The friction data tells a clear story: your most common issue is Claude taking the wrong approach (37 instances), followed by buggy code and excessive changes, suggesting you're constantly course-correcting an eager assistant.

Despite the friction, you're overwhelmingly satisfied — 236 sessions marked as likely satisfied, with many rated essential or very helpful. You have a pattern of starting sessions with exploration and planning, then moving to execution in follow-up sessions. Many sessions end at the planning stage (partially_achieved), which isn't failure — it's your deliberate workflow of scoping before building. You also use Claude for knowledge work: understanding GitHub issues, analyzing deployment problems, and reviewing PRs against specs, making you a strategic delegator who uses Claude for both thinking and doing.

Key pattern: You operate as a hands-on technical lead who delegates complex multi-step workflows to Claude but actively supervises, frequently course-correcting when it rushes ahead or takes the wrong approach.
User Response Time Distribution
2-10s
136
10-30s
236
30s-1m
277
1-2m
257
2-5m
253
5-15m
185
>15m
126
Median: 72.9s • Average: 284.0s
Multi-Clauding (Parallel Sessions)
89
Overlap Events
123
Sessions Involved
13%
Of Messages

You run multiple Claude Code sessions simultaneously. Multi-clauding is detected when sessions overlap in time, suggesting parallel workflows.

User Messages by Time of Day
Morning (6-12)
706
Afternoon (12-18)
890
Evening (18-24)
638
Night (0-6)
71
Tool Errors Encountered
Command Failed
196
Other
183
User Rejected
77
File Not Found
38
File Too Large
13
File Changed
10

Impressive Things You Did

Over the past month, you've run 307 sessions across a large TypeScript monorepo with heavy use of Bash, Read, and Edit tools, building out a notably sophisticated Git and PR automation workflow.

Automated PR Review Pipeline
You've built custom skills like /review that trigger full automated code reviews with inline GitHub comments, and you're using them consistently across multiple PRs. This is a standout workflow — you've essentially turned Claude into a first-pass code reviewer that posts structured feedback directly via the GitHub API.
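A review like the ones your /review skill posts boils down to a single REST call against GitHub's "create a review" endpoint. A minimal sketch, assuming the `gh` CLI and illustrative OWNER/REPO/PR/path values:

```shell
# Build a review payload with one inline comment (path, line, and body are illustrative).
cat > review.json << 'EOF'
{
  "event": "COMMENT",
  "body": "Automated first-pass review.",
  "comments": [
    { "path": "src/app.ts", "line": 42, "body": "Possible null dereference here." }
  ]
}
EOF
# Post it (requires an authenticated gh CLI; shown here, not executed):
# gh api repos/OWNER/REPO/pulls/123/reviews --method POST --input review.json
echo "payload ready: $(wc -c < review.json) bytes"
```

Using `event: COMMENT` rather than `REQUEST_CHANGES` keeps the automated review advisory, which matches how you already run the skill.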
End-to-End Git Workflow Orchestration
You're managing complex multi-step Git operations through Claude — branching, committing, rebasing, creating PRs, resolving review comments, and pushing — all in single sessions. Your ability to recover from issues like a 70k-line diff by restructuring into rename-tracking commits shows strong iterative collaboration with the agent.
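The rename-tracking recovery relies on a simple invariant: a commit that only renames files (no content edits) lets git pair old and new paths, while mixing renames with edits in one commit can defeat detection. A toy sketch of the two-commit strategy, with hypothetical file names:

```shell
# Toy repo demonstrating the rename-only commit followed by a content commit.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you
printf 'export const a = 1;\n' > app.client.ts
git add -A && git commit -qm "initial"
# Commit 1: pure rename, no content change, so git pairs old and new paths.
git mv app.client.ts app.ts
git commit -qm "rename: drop .client suffix (no content changes)"
# Commit 2: content edits (e.g. import fixes) land in a separate, reviewable diff.
printf 'export const a = 2;\n' > app.ts
git commit -qam "fix imports after .client rename"
# The rename commit shows up as R100 (a detected rename), not a delete + add:
git show --name-status HEAD~1
```

The same split is what makes GitHub render 1,406 renames as renames instead of a 70k-line wall of deletions and additions.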
OSS Contribution Research Pipeline
You've built a repeatable workflow for open source contributions: syncing repos, curating issue lists, reproducing bugs, writing fix plans, and submitting PRs with documentation. Your sessions on projects like agent-browser show a disciplined approach of plan-then-implement that consistently reaches successful outcomes.
What Helped Most (Claude's Capabilities)
Multi-file Changes
37
Good Explanations
20
Proactive Help
17
Correct Code Edits
15
Good Debugging
12
Fast/Accurate Search
10
Outcomes
Not Achieved
9
Partially Achieved
32
Mostly Achieved
32
Fully Achieved
44
Unclear
2

Where Things Go Wrong

Your sessions most often stall for three reasons: Claude rushes ahead without confirmation, its first-pass analysis is inaccurate, and work stops at the planning stage instead of reaching implementation.

Rushing ahead without waiting for approval
You explicitly ask for step-by-step review, but Claude repeatedly jumps ahead and commits or implements multiple stages at once, forcing you to reset work. Consider breaking tasks into separate prompts with hard checkpoints rather than relying on Claude to self-pace.
  • You asked for native mode implementation with review at each stage, but Claude implemented and committed Stage 3 without waiting, requiring a git reset
  • Claude tried to stage and commit extra files you didn't request (DisplaySettingForm files, history test) and once attempted to commit+push when you only wanted staged files committed
Inaccurate initial analysis requiring corrections
Claude frequently gets facts wrong on the first pass — miscounting diff sizes, recommending already-resolved issues, or misidentifying installed tools — costing you time on back-and-forth corrections. You could front-load key context (e.g., 'verify issue status before recommending') to reduce these rounds.
  • Issue recommendations repeatedly included issues that already had PRs open, and ISSUES.md miscategorized many resolved issues despite you correcting this twice
  • Claude described a PR diff as 5,000+ lines when it was actually 50,000+, and the initial single-commit strategy caused a 70k line diff that couldn't detect renames, requiring a full redo with two commits
Sessions ending at planning without implementation
A significant number of your sessions produce detailed plans or analysis documents but never reach actual code changes, leaving you with partially achieved goals. Try scoping sessions to smaller deliverables or explicitly stating 'implement, don't just plan' to push past the analysis phase.
  • You asked for E2E tests based on a planning doc, but the session ended with only a plan file written and no actual test code produced
  • You requested render prop removal starting with simple bypass ones, but Claude wrote an analysis and removal plan without performing any actual removals
Primary Friction Types
Wrong Approach
37
Buggy Code
20
Excessive Changes
14
Misunderstood Request
14
User Rejected Action
5
Tool Unavailable
2
Inferred Satisfaction (model-estimated)
Frustrated
2
Dissatisfied
33
Likely Satisfied
236
Satisfied
34

Existing CC Features to Try

Suggested CLAUDE.md Additions

Just copy this into Claude Code to add it to your CLAUDE.md.

Multiple sessions show Claude repeatedly recommending issues that already had PRs or were resolved, requiring user corrections.
Multiple sessions show Claude rushing ahead without waiting for review, requiring git resets and user frustration.
Claude repeatedly tried to commit extra unrelated files, and linter modifications to unrelated files caused issues across multiple sessions.
A 70k-line diff disaster occurred because a single commit prevented GitHub from detecting renames, requiring a full redo.
Users had to repeatedly ask Claude to focus on specific code changes rather than broad analysis across multiple sessions.
Multiple sessions were blocked by pre-commit hook failures, with ESLint reverting changes and lint-staged crashing.
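Those observations might translate into CLAUDE.md rules along these lines — a sketch, so adapt the wording to your projects:

```markdown
## Workflow rules
- Before recommending an open source issue, check whether it already has an open PR or is resolved.
- Stop after each logical stage and wait for explicit approval before committing.
- Never stage or commit files unrelated to the current task; never push without approval.
- For renames touching many files, use a rename-only commit followed by a content commit so git detects renames.
- When asked about specific code, answer with the exact changed lines, not a broad analysis.
- If pre-commit hooks fail or revert changes, report it and ask before using --no-verify.
```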

Just copy this into Claude Code and it'll set it up for you.

Custom Skills
Reusable prompts that run with a single /command
Why for you: You already use /review and /commit skills heavily (36 git_operations, 14 code_reviews, 9 commit_and_proceed). Codify your PR creation, issue triage, and E2E test planning workflows as skills to avoid repeating setup instructions.
mkdir -p .claude/skills/pr && cat > .claude/skills/pr/SKILL.md << 'EOF'
## Create PR
1. Check current branch and diff against target
2. Only stage files related to the current task
3. For large renames, split into rename commit + content commit
4. Create PR with structured description
5. Wait for user approval before pushing
EOF
Hooks
Shell commands that auto-run at specific lifecycle events
Why for you: Your top friction is wrong_approach (37 instances) and excessive_changes (14). A pre-edit hook could auto-run TypeScript type checks and a post-edit hook could run ESLint on changed files, catching issues before they snowball.
Add to .claude/settings.json (plain JSON, so no comments):
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx tsc --noEmit 2>&1 | head -20" }
        ]
      }
    ]
  }
}
Headless Mode
Run Claude non-interactively from scripts and CI/CD
Why for you: You do a lot of automated PR reviews (multiple sessions show PR review skills running). Headless mode in CI could auto-review every PR on open, eliminating manual trigger sessions.
# In GitHub Actions workflow:
claude -p "Review PR #${{ github.event.pull_request.number }}. Post inline comments on bugs, security issues, and style violations. Use COMMENT not REQUEST_CHANGES." \
  --allowedTools "Bash,Read,Grep,Glob"

New Ways to Use Claude Code

Just copy this into Claude Code and it'll walk you through it.

Planning sessions that never reach implementation
Many sessions end at the planning stage. Split planning and implementation into explicit two-phase prompts.
At least 8 sessions produced plans (E2E tests, refactors, wizard changes, reservation features) but ended before any code was written. This wastes context window on exploration that then expires. Try front-loading implementation by giving Claude the plan upfront, or explicitly telling Claude 'skip planning, start coding' when you already know what you want.
Paste into Claude Code:
I already have a plan in ./plans/feature-x.md. Read it and start implementing Stage 1 now. Don't re-analyze the codebase — the plan already covers that. Stop after Stage 1 for my review.
Git operations dominate your workflow
Automate your most common git patterns into a single custom skill.
Git operations are your #1 goal (36 sessions), plus 9 commit_and_proceed and 8 git_commit_and_push. Many friction points involve staging wrong files, commit strategy mistakes, and pre-commit hook issues. A unified /ship skill that handles staging, commit splitting, hook workarounds, and PR creation would eliminate repeated instructions.
Paste into Claude Code:
Create a custom skill at .claude/skills/ship/SKILL.md that: 1) shows me the staged diff for approval, 2) only stages task-related files, 3) splits large renames into separate commits, 4) uses --no-verify if hooks crash, 5) creates a PR with a structured description.
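The skill file Claude produces might look roughly like this — a sketch whose steps simply mirror the prompt above, in the same format as the /pr skill shown earlier:

```shell
# Hypothetical /ship skill definition (step wording is illustrative).
mkdir -p .claude/skills/ship && cat > .claude/skills/ship/SKILL.md << 'EOF'
## Ship
1. Show the user the staged diff and wait for approval
2. Only stage files related to the current task
3. Split large renames into a rename-only commit plus a content commit
4. If pre-commit hooks crash, ask before retrying the commit with --no-verify
5. Create a PR with a structured description
EOF
echo "wrote .claude/skills/ship/SKILL.md"
```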
Reduce wrong_approach friction with upfront constraints
Start sessions by stating what NOT to do, based on your recurring friction patterns.
Your #1 friction is wrong_approach (37 instances). Common patterns: Claude recommends resolved issues, implements too much at once, stages wrong files, and gives broad analysis instead of specific answers. Adding 2-3 constraint sentences at the start of a session dramatically reduces these. Think of it as a pre-flight checklist.
Paste into Claude Code:
Before you start: 1) Don't modify or stage files unrelated to this task. 2) Stop after each logical step for my review. 3) When I ask about a bug, tell me the exact lines that changed and why, not a general overview. Now here's the task:

On the Horizon

Your 307 sessions over 667 hours reveal a power-user workflow increasingly centered on autonomous git operations, PR reviews, and large-scale refactoring — with clear opportunities to push further into fully autonomous, multi-agent pipelines.

Autonomous PR Review and Fix Pipeline
Your most successful sessions chain PR review → code fix → commit → push in a single flow, but friction data shows Claude often over-applies fixes or stages unwanted files. An autonomous agent could run your /review skill, filter actionable findings against your project's spec and test suite, apply only validated fixes, and self-verify by running tests before committing — closing the loop without manual intervention.
Getting started: Use Claude Code's Task tool to spawn a sub-agent that reviews, then a second sub-agent that applies fixes and runs your test suite before committing. Chain them in a single prompt.
Paste into Claude Code:
Review PR #XXXX using our /review skill. For each actionable finding: 1) Create a sub-task to implement the fix in a feature branch, 2) Run the relevant test suite (npm test / cargo test) and ESLint to verify the fix doesn't break anything, 3) Only stage and commit files directly related to the fix with a conventional commit message, 4) If any test fails, revert that specific fix and report it as 'needs human review'. After all fixes are applied, push the branch and post a summary comment on the PR listing what was fixed and what was skipped.
Parallel Agents for Large-Scale Refactoring
Your .client filename removal session touched 1,406 files and produced a 70k-line diff that had to be redone — a perfect case for parallel agents. Multiple sub-agents could each handle a subset of files, validate import graphs independently, and produce atomic commits that git can track as renames. This turns a risky monolithic refactor into safe, parallelized work.
Getting started: Use the Task tool to fan out work across multiple sub-agents, each scoped to a specific directory or module, with a final agent that validates the full build and assembles commits.
Paste into Claude Code:
I need to refactor [PATTERN] across the entire TypeScript monorepo. Do this safely: 1) First, scan the repo and group affected files by top-level directory, 2) Spawn a separate sub-task for each directory group — each sub-task should apply the refactor, fix all imports within its scope, and verify with 'tsc --noEmit', 3) After all sub-tasks complete, run a final full build and full test suite from the repo root, 4) Create commits grouped by directory so git detects renames properly (keep each commit under 500 files), 5) If any sub-task's tsc check fails, report the broken imports and stop before committing that group. Show me the execution plan before starting.
Test-Driven Bug Fix with Auto-Verification
20 sessions hit 'buggy_code' friction and 37 hit 'wrong_approach' — often because Claude fixed code without running tests to verify. An autonomous workflow could write a failing test that captures the bug, iterate on the fix until the test passes, run the full suite to catch regressions, and only then commit. This eliminates the back-and-forth cycle that dominates your bug_fix and debugging sessions.
Getting started: Prompt Claude to follow a strict red-green-refactor loop, using Bash to run tests after each change and only proceeding when tests pass.
Paste into Claude Code:
Fix this bug: [DESCRIBE BUG OR PASTE ISSUE URL]. Follow this strict workflow: 1) Read the relevant source code and existing tests, 2) Write a new test case that reproduces the exact bug — run it and confirm it FAILS, 3) Implement the minimal fix to make that test pass, 4) Run the full test suite (npm test / cargo test) and ESLint, 5) If any unrelated test breaks, revert and try a different approach — do NOT skip failing tests, 6) Once all tests pass, commit with a message referencing the bug, 7) Show me the before/after test output and a summary of what changed and why. Do not commit until step 6 is fully green.
"Claude renamed 1,406 files in one shot, then panicked when GitHub showed a 70,000-line diff because git couldn't detect the renames"
A user asked Claude to remove '.client' from 1,406 filenames and fix all imports. Claude did it in a single commit — which GitHub rendered as a 70k-line diff of pure deletions and additions. Claude had to redo the whole thing as two commits with a rebase so git could recognize them as renames. It even initially downplayed the diff as '5,000+ lines' before the user corrected it to 50,000+.