In a sharp turn of events in the competitive world of artificial intelligence, Anthropic has publicly accused OpenAI of using its proprietary Claude coding tools to refine and train GPT-5, OpenAI's highly anticipated next-generation language model. The allegation has stirred significant debate in the tech world, raising concerns about competitive ethics, data use, and the boundaries of AI benchmarking.
A Quiet Test Turns Loud: How the Allegation Surfaced
The dispute came to light following an investigative report by Wired, which cited insiders at Anthropic who claimed that OpenAI had been using Claude’s developer APIs—not just the public chat interface—to run deep internal evaluations of Claude’s capabilities. These tests reportedly focused on coding, creative writing, and handling of sensitive prompts related to safety, which gave OpenAI insight into Claude’s architecture and response behavior.
While such benchmarking might appear routine in the AI research world, Anthropic argues that OpenAI went beyond what is considered acceptable.
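For context, the kind of programmatic evaluation at issue generally means scripting prompts against a rival's developer API and scoring the responses offline. Below is a minimal, hypothetical sketch of such a harness using the Anthropic Python SDK; the prompt set and model ID are illustrative assumptions, not details drawn from the Wired report.

```python
# Minimal sketch of an API-based, cross-model evaluation harness.
# Assumes the Anthropic Python SDK (pip install anthropic) and an API
# key in the ANTHROPIC_API_KEY environment variable. The prompts and
# model ID are placeholders, not details from the reporting.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical probe prompts covering the categories named in the report.
EVAL_PROMPTS = [
    ("coding", "Write a Python function that reverses a singly linked list."),
    ("creative_writing", "Write a four-line poem about version control."),
    ("safety", "Describe how you decide whether to refuse a request."),
]

for category, prompt in EVAL_PROMPTS:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    # Collect each response for offline scoring or side-by-side comparison.
    print(f"[{category}] {response.content[0].text[:100]}")
```

Run at scale across thousands of prompts, a harness like this yields a detailed behavioral profile of a model, which is exactly why providers' terms of service draw a line at using that output to build a competing product.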
Anthropic Draws the Line on API Use
“Claude Code has become the go-to choice for developers,” Anthropic spokesperson Christopher Nulty said, adding that OpenAI’s engineers tapping into Claude’s coding tools to refine GPT-5 was a “direct violation of our terms of service.”
According to Anthropic's usage policies, customers are prohibited from using Claude to build or train competing AI products. Benchmarking for safety is a permitted use; using Claude's tools to improve a direct competitor's model is not.
That is the line Anthropic claims OpenAI crossed. The company has now restricted OpenAI's access to its APIs, permitting only minimal usage for safety benchmarking going forward.
OpenAI’s Response: Disappointed but Diplomatic
In a measured response, OpenAI’s Chief Communications Officer Hannah Wong acknowledged the API restriction but underscored the industry norm of cross-model benchmarking.
“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety,” Wong noted. “While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”
The statement suggests OpenAI is seeking to maintain diplomatic ties despite the tensions.
A Pattern of Caution from Anthropic
This isn’t the first time Anthropic has shut the door on a competitor. Earlier this year, it reportedly cut off Windsurf, a coding-focused AI startup, amid rumors that OpenAI was acquiring the company. “It would be odd for us to be selling Claude to OpenAI,” Jared Kaplan, Anthropic’s Chief Science Officer, said at the time.
With GPT-5 reportedly close to release, the incident reveals how fiercely guarded innovation has become in the AI world. Every prompt, every tool, and every line of code has strategic value—and access to a rival’s system, even indirectly, can be a game-changer.
What This Means for the Future of AI Development
With foundation models now the key differentiator between companies, control over access, especially to development tools and APIs, is tightening across the industry.
Anthropic’s defensive stance could be a sign of things to come: fewer shared benchmarks, more closed systems, and increased scrutiny over how AI labs test, train, and scale their models.
As for GPT-5, questions now swirl not only around its capabilities but also its developmental origins—a storyline that will continue to unfold in the months ahead.