I've been connecting AI agents to production databases via MCP (Model Context Protocol) for product analytics. The problem: when you ask "how many records are running?" the agent sees status values 0, 1, 2, 3 — and guesses which means what. It picked "failed" records instead of "running" and confidently gave the wrong number.
The fix was obvious in hindsight: give the agent access to the app source code too. It finds the enum (pending: 0, running: 1, failed: 2, completed: 3) and stops guessing.
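To make the failure mode concrete, here's a minimal Python sketch of the kind of enum the agent recovers from source. The class, table, and column names are hypothetical, not from any real app:

```python
from enum import IntEnum

# Hypothetical status enum, mirroring the integer codes
# the agent would otherwise have to guess at.
class Status(IntEnum):
    PENDING = 0
    RUNNING = 1
    FAILED = 2
    COMPLETED = 3

# Without the source, "running" could plausibly be any of 0-3.
# With the enum in context, the query is unambiguous:
query = f"SELECT COUNT(*) FROM records WHERE status = {int(Status.RUNNING)}"
```

The wrong-answer scenario above corresponds to the agent filtering on `status = 2` (failed) when asked about running records; with the enum in context it filters on `status = 1`.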
The blog post walks through real Claude Code conversations showing the before/after, plus the setup with DBHub MCP and Repomix for packaging your codebase.
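For reference, the setup boils down to two pieces: an MCP server config pointing at the database, and a packaged copy of the codebase. A sketch of what the Claude MCP config might look like, written from memory; the exact package name, flags, and DSN format are assumptions, so check the DBHub README before copying:

```json
{
  "mcpServers": {
    "dbhub": {
      "command": "npx",
      "args": ["-y", "@bytebase/dbhub", "--dsn", "postgres://user:pass@localhost:5432/mydb"]
    }
  }
}
```

Repomix is then run in the repo root (e.g. `npx repomix`) to produce a single packed file of the codebase that can be dropped into the agent's context alongside the database connection.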