A senior Google engineer says an AI coding tool from Anthropic can rapidly produce working software concepts but the systems are not ready for real world use.
Jaana Dogan, a principal engineer at Google, said she tested Anthropic’s Claude Code on a distributed agent orchestrator that her team had spent more than a year designing. Using a short, three-paragraph prompt that contained no proprietary information, Dogan asked the system to recreate the core idea.
Within about an hour, Claude Code produced a functional prototype. According to Dogan, the output mirrored several key design patterns that Google engineers had explored internally. However, she emphasized that the result was a simplified “toy version,” not a production-ready system that developers could deploy at scale or maintain over time.
Dogan said her deep familiarity with the problem space was critical to judging the quality of the AI-generated code. As a result, she advised her engineers to test AI tools only in domains they know well, where they can readily see the tools’ strengths and limits.
She also noted that Google currently restricts internal use of Claude Code, reflecting broader industry caution around security and code ownership.