Not a full team-adoption story, but a relevant data point: I run a small engineering org (~40 engineers across teams) and we've been tracking AI coding tool adoption informally.
The split is roughly: 30% all-in (Claude Code or Cursor for everything), 50% selective users (use it for boilerplate, tests, docs but still hand-write core logic), 20% holdouts.
What I've noticed on PR velocity: it went up initially, then plateaued. The PRs also got bigger, which meant reviews took longer. We actually had to introduce a "max diff size" policy because AI-assisted PRs were becoming 800+ line monsters that nobody could review meaningfully.
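For anyone who wants to enforce something similar: the check itself is trivial to script. Here's a minimal sketch of the kind of CI gate I mean (Python; the threshold and base branch name are placeholders, not our exact config):

```python
# Hypothetical CI gate: fail the build when a PR's diff exceeds a line budget.
# MAX_DIFF_LINES and BASE_BRANCH are placeholder values for illustration.
import subprocess
import sys

MAX_DIFF_LINES = 800
BASE_BRANCH = "origin/main"  # assumed base branch

def diff_size(base: str) -> int:
    """Sum added + deleted lines against the base branch via git --numstat."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        # Binary files report "-" instead of line counts; skip those.
        if added != "-":
            total += int(added)
        if deleted != "-":
            total += int(deleted)
    return total

if __name__ == "__main__":
    size = diff_size(BASE_BRANCH)
    if size > MAX_DIFF_LINES:
        print(f"Diff is {size} lines; limit is {MAX_DIFF_LINES}. Split the PR.")
        sys.exit(1)
    print(f"Diff is {size} lines; OK.")
```

Something like this can run as a required status check, with an explicit override label for the rare legitimately large diff (migrations, generated code).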
The quality concern that keeps coming up: security. AI-generated code tends to take shortcuts on auth, input validation, error handling. We've started running dedicated security scans specifically tuned for patterns that AI likes to produce. That's been the biggest process change.
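To give a flavor of what "tuned for patterns AI likes to produce" means in practice, here's a toy Python sketch. The real scans are proper SAST rules; the patterns and messages below are illustrative examples, not our production ruleset:

```python
# Toy illustration of pattern-based scanning for shortcuts that show up
# often in AI-generated code. Example patterns only, not a real ruleset.
import re
import sys
from pathlib import Path

SUSPECT_PATTERNS = {
    r"verify\s*=\s*False": "TLS verification disabled",
    r"except\s*:": "bare except swallowing errors",
    r"(password|api_key|secret)\s*=\s*[\"'][^\"']+[\"']": "hardcoded credential",
    r"shell\s*=\s*True": "shell injection risk in subprocess call",
    r"\beval\(": "eval on possibly untrusted input",
}

def scan(path: Path) -> list[str]:
    """Return one finding string per suspicious line in the file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, why in SUSPECT_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"{path}:{lineno}: {why}: {line.strip()}")
    return findings

if __name__ == "__main__":
    hits = [f for arg in sys.argv[1:] for f in scan(Path(arg))]
    print("\n".join(hits) or "clean")
    sys.exit(1 if hits else 0)
```

Regex-grepping is crude and noisy on its own; the point is that the rule set is curated around the specific shortcuts we keep seeing in AI output, not generic lint.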
Net effect: probably 20-30% faster on feature delivery, but we're spending more time on review and security validation than before.
I have seen the same AI hallucinations you mentioned: auth shortcuts, missing input validation, sloppy error handling, non-existent dependencies, etc. It's tricky to catch them all because LLMs have mastered the art of being "confidently wrong". What tools are you using to catch those issues? I feel current tooling is ill-equipped for this new wave of AI-generated output.
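For the non-existent dependency case specifically, one cheap check I've been experimenting with is resolving every top-level import in a changed file against the environment. Pure sketch, assumes a Python codebase and that CI has the real dependency set installed:

```python
# Quick-and-dirty check for hallucinated imports: walk a file's AST and
# flag top-level imports that don't resolve in the current environment.
# A sketch, not a real tool.
import ast
import importlib.util
import sys

def unresolved_imports(source: str) -> set[str]:
    """Return top-level module names in `source` that find_spec can't locate."""
    missing = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module.split(".")[0]]
        else:
            continue  # skip relative imports and everything else
        for name in names:
            if importlib.util.find_spec(name) is None:
                missing.add(name)
    return missing

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for name in sorted(unresolved_imports(f.read())):
                print(f"{path}: import '{name}' does not resolve")
```

It obviously won't catch a hallucinated function on a real package, but it flags made-up package names before anyone pip-installs something typosquatted.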
The joke I hear is that Claude Code will double your PRs:
One PR from Claude. The next from you fixing Claude's mistakes.