OpenAI Rehires Former Thinking Machines Lab Researchers Amid Industry Turmoil
Key Points
- OpenAI announces the return of Barret Zoph, Luke Metz, and Sam Schoenholz from Thinking Machines Lab.
- Thinking Machines had raised internal concerns about Zoph’s conduct and possible confidential information sharing.
- OpenAI states the hires were planned for weeks and does not share the lab’s ethical concerns.
- The AI sector is experiencing ongoing turnover, with researchers reporting fatigue from constant industry drama.
- AI labs are contracting third‑party firms to gather anonymized work examples for training enterprise agents.
- Data suppliers are paying contractors upwards of $100 an hour to provide realistic professional data.
- Simulated environments are being built to teach AI agents how to use enterprise software applications.
- The developments highlight both talent consolidation strategies and ethical questions around data usage.
OpenAI announced the return of Thinking Machines Lab cofounders Barret Zoph and Luke Metz, along with researcher Sam Schoenholz. OpenAI says the hires followed weeks of discussions and come after internal concerns at Thinking Machines about Zoph’s conduct and possible sharing of confidential information. The move highlights ongoing personnel shifts across the AI sector, where researchers report fatigue from constant industry drama. At the same time, AI labs are intensifying efforts to train agents for professional tasks by sourcing real‑world work data from contractors, a strategy that raises both practical and ethical questions.
OpenAI’s Latest Staffing Moves
OpenAI disclosed that it is bringing back three of its former researchers, all of whom had most recently been employed at Thinking Machines Lab. The group includes Barret Zoph and Luke Metz, who co‑founded the lab, as well as Sam Schoenholz, a former OpenAI researcher who had joined Thinking Machines. OpenAI’s applications chief said the hires had been in the works for weeks and that Zoph had indicated a desire to leave Thinking Machines before his termination.
Background at Thinking Machines Lab
According to a source with direct knowledge, Thinking Machines leadership believed Zoph was involved in a serious misconduct incident while at the company. The alleged incident eroded trust with the lab’s chief executive, Mira Murati, and led to Zoph’s dismissal shortly before OpenAI’s offer. The source also claimed the lab raised concerns about whether Zoph might have shared confidential information with competitors. OpenAI, however, stated it does not share the lab’s concerns about Zoph’s ethics.
Industry‑wide Personnel Shifts
The re‑hires occur amid a broader pattern of turnover in the artificial‑intelligence field. Researchers at several leading labs have expressed exhaustion from the frequent drama and high‑profile departures that have characterized the sector in recent years. Commentators have compared the current situation to earlier upheavals, such as the brief ouster of OpenAI’s chief executive in 2023, noting that personnel moves are now a recurring feature of the industry’s rapid growth.
AI Labs Training Enterprise Agents
In parallel with the staffing news, AI laboratories are accelerating efforts to develop agents capable of performing professional tasks. Companies like OpenAI are contracting third‑party firms—Handshake, Mercor, Surge, and Turing—to supply anonymized examples of real work from fields such as consulting, finance, and healthcare. Contractors are instructed to scrub any confidential or personally identifying information before submitting the data.
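The reporting does not describe how that scrubbing is actually performed. As a purely illustrative sketch, a first redaction pass might combine simple pattern matching for obvious identifiers with a later human review step; the patterns, labels, and sample text below are invented for illustration and do not reflect any vendor’s pipeline.

```python
import re

# Hypothetical redaction pass: masks a few obvious identifier patterns
# (emails, phone numbers, SSN-style IDs) before a human review step.
# Real anonymization pipelines are far more involved; this is only a sketch.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Contact jane.doe@examplebank.com or 555-123-4567 about the account review."
    print(redact(sample))
```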
These data sets are used to create simulated environments that teach AI agents how to operate enterprise software. The goal is to fine‑tune models for specific knowledge‑work domains, enabling agents to handle tasks traditionally performed by consultants, bankers, or doctors. Some data‑supply firms have begun paying contractors upwards of $100 an hour for their contributions.
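The article gives no detail on how these simulated environments are built. One common pattern, sketched below purely for illustration, is to wrap a mock application behind an observe/step interface so an agent can be scored on completing a workflow; the toy ticketing app, action names, and reward values are all assumptions made here, not a description of any lab’s actual setup.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy "enterprise app" (a ticket queue) wrapped in an
# observe/step loop so an agent can practice completing a workflow.
@dataclass
class Ticket:
    ticket_id: int
    status: str = "open"          # "open" -> "assigned" -> "closed"
    assignee: str | None = None

@dataclass
class TicketAppEnv:
    tickets: list[Ticket] = field(default_factory=lambda: [Ticket(1), Ticket(2)])

    def observe(self) -> dict:
        """Return what the agent 'sees': the current ticket queue."""
        return {t.ticket_id: (t.status, t.assignee) for t in self.tickets}

    def step(self, action: str, ticket_id: int, arg: str | None = None) -> float:
        """Apply an action and return a scalar reward for progress."""
        ticket = next(t for t in self.tickets if t.ticket_id == ticket_id)
        if action == "assign" and ticket.status == "open":
            ticket.status, ticket.assignee = "assigned", arg
            return 0.5
        if action == "close" and ticket.status == "assigned":
            ticket.status = "closed"
            return 1.0
        return -0.1  # invalid or out-of-order action

if __name__ == "__main__":
    env = TicketAppEnv()
    print(env.observe())
    print(env.step("assign", 1, "agent-a"), env.step("close", 1))
    print(env.observe())
```

In a setup like this, the anonymized work examples would inform what tasks and interfaces the mock application exposes, while the reward signal gives the agent feedback on whether it completed the workflow correctly.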
Implications and Outlook
The return of Zoph, Metz, and Schoenholz underscores OpenAI’s strategy of consolidating talent from rival labs, even when those moves intersect with internal disputes. At the same time, the push to train AI agents on real‑world professional data signals a shift toward more practical, task‑oriented applications of large‑scale models. Observers suggest that as AI research continues to attract substantial funding, both personnel dynamics and the ethical considerations surrounding data use will remain central topics for the industry.