Musk’s ‘Partly’ Remark on Grok and OpenAI Opens New Front in AI Race

Elon Musk Testifies xAI Used OpenAI Models to Help Train Grok

The420 Web Desk

Elon Musk’s testimony in federal court on Thursday did more than advance his lawsuit against OpenAI. It opened a window into a practice that many in the AI industry have long suspected but rarely acknowledged so directly.

On the stand in California, Musk was asked whether xAI had used distillation techniques on OpenAI models to train Grok, xAI’s flagship chatbot. He said the practice was common across AI companies. When pressed on whether that meant yes in this case, he replied: “Partly.”

That answer landed in an industry already on edge over distillation, the process by which companies use publicly accessible chatbots or APIs to help train new models. In recent months, OpenAI and Anthropic have taken increasingly aggressive positions against outside efforts to use their systems in this way. The public argument has often centered on Chinese firms, which frontier labs have accused of using distillation to build cheaper open-weight models that approach the capabilities of leading American systems.


Yet inside the industry, there has long been a quieter assumption: that major American labs, too, learn from one another whenever they can.

Musk’s testimony appears to have turned that assumption into something closer to the public record.

What Distillation Reveals About the Economics of AI

The reason distillation matters is not merely technical. It is economic and strategic.

Training frontier models from scratch demands enormous investment in compute infrastructure, engineering talent and data pipelines. Distillation offers a way to compress some of that advantage. By systematically querying a more capable model and learning from its outputs, a smaller or later-moving company may be able to produce systems that come close to the frontier at a fraction of the cost.
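In rough outline, the mechanics described above are simple: query a capable “teacher” model at scale, record its outputs, and train a smaller “student” to imitate them. The toy sketch below illustrates only that loop; the models, the memorization-based “training,” and all names are invented for illustration and bear no relation to any company’s actual pipeline.

```python
# Toy sketch of distillation: a "student" collects input/output
# pairs from a "teacher" and fits itself to imitate the outputs.
# The teacher stands in for querying a more capable model's API.

def teacher_model(prompt: str) -> str:
    # Hypothetical stand-in for a frontier model endpoint;
    # its "capability" here is just uppercasing the prompt.
    return prompt.upper()

def build_distillation_set(prompts):
    # Systematically query the teacher and record its outputs
    # as supervised training targets for the student.
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    def __init__(self):
        self.memory = {}

    def train(self, dataset):
        # "Training" here is memorization; a real student would
        # fit parameters to minimize loss against teacher outputs.
        for prompt, target in dataset:
            self.memory[prompt] = target

    def generate(self, prompt):
        return self.memory.get(prompt, prompt)

prompts = ["hello", "world"]
student = StudentModel()
student.train(build_distillation_set(prompts))
print(student.generate("hello"))  # prints "HELLO"
```

The economic point is visible even in the toy: the student never needs the teacher’s internals, only access to its outputs, which is why model outputs themselves have become contested property.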

That is why the practice has become so contested. To the companies that build the most advanced systems, distillation threatens to weaken the moat created by years of capital expenditure and technical lead time. To challengers, it can look like an unavoidable tactic in an industry where falling too far behind may mean irrelevance.

Musk’s admission is especially notable because it comes amid a broader campaign by OpenAI, Anthropic and Google to combat such techniques. Those companies have reportedly worked through the Frontier Model Forum to share information about how to identify and prevent distillation attempts, particularly ones associated with suspicious mass querying. The effort reflects a growing belief among leading labs that model outputs themselves have become a strategic asset requiring protection.
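The “suspicious mass querying” signal the labs reportedly watch for can be pictured as a simple volume check on API accounts. The sketch below is purely illustrative: the threshold, the log format, and the idea that volume alone suffices are all assumptions, since no lab has published its actual detection methods.

```python
# Hedged sketch of flagging high-volume querying of the kind
# reportedly associated with distillation attempts. Threshold
# and log schema are invented for illustration only.
from collections import Counter

def flag_mass_queriers(query_log, threshold=1000):
    """query_log: iterable of (account_id, prompt) pairs.
    Returns the set of accounts at or above the query threshold."""
    counts = Counter(account for account, _ in query_log)
    return {acct for acct, n in counts.items() if n >= threshold}

log = [("acct_a", f"prompt {i}") for i in range(1500)] + [("acct_b", "hi")]
print(flag_mass_queriers(log))  # prints {'acct_a'}
```

Real systems would presumably look at far richer features than raw counts, but the sketch captures why mass, systematic querying is the behavior that draws attention.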

The irony, however, is difficult to ignore. The same industry that now portrays distillation as a serious threat has itself faced repeated scrutiny over how frontier models were trained, including accusations that copyright norms were bent or broken in the race to acquire sufficient training data.

Musk’s Lawsuit Against OpenAI Frames the Moment

The testimony came in the course of Musk’s lawsuit against OpenAI, Sam Altman and Greg Brockman, in which he argues that the company betrayed its founding mission by shifting from a nonprofit dedicated to safe AI for humanity into a for-profit enterprise.

The trial began this week and has already featured extended testimony from Musk, who is seeking to halt OpenAI’s conversion to a for-profit company. At the heart of his case is the claim that OpenAI and Altman manipulated him into providing $38 million in the venture’s early years under the banner of nonprofit public-interest development, only to later transform it into a commercial operation. On Wednesday, Musk told the court, “I was a fool who provided them free funding to create a startup.”

Against that backdrop, Thursday’s admission had an added layer of tension. Musk was not merely testifying as a critic of OpenAI. He was also, in effect, describing how his own company had interacted with the technologies of a rival he is simultaneously accusing of abandoning principle for profit.

That contradiction is less incidental than it may appear. It captures the paradox at the heart of the modern AI race: companies condemn one another’s methods even as competitive pressures make those same methods difficult to resist.

A Smaller Company, a Larger Race

Later in his testimony, Musk was asked about a claim he made last summer that xAI would soon be far beyond any company other than Google. His response was more restrained. He ranked the world’s leading AI providers by placing Anthropic first, followed by OpenAI, Google, and then Chinese open-source models. He described xAI as a much smaller company with only a few hundred employees.

That assessment was revealing in its own way. It suggested that even Musk, who has made some of the boldest claims in the sector, sees xAI as operating from behind rather than from the front. And that, in turn, helps explain why distillation matters so much. For smaller labs, it may be less a shortcut than an adaptation to structural disadvantage.

Still, the consequences of Musk’s admission extend beyond his own company. If one of the world’s most visible AI entrepreneurs is willing under oath to acknowledge that Grok was trained at least partly on OpenAI models, the statement may sharpen a debate that has until now been fought mostly through platform terms, industry whispers and selective enforcement.

The question is no longer simply whether distillation happens. It is how central it has become to the way the AI industry actually builds.
