News

Senior OpenAI Staff Depart as Company Prioritizes ChatGPT Development

Several senior researchers have left OpenAI, citing limited resources and a strategic shift toward ChatGPT and other large‑language‑model products. Departures include a leader of reasoning research, a model‑policy head, and an economist, each describing challenges in pursuing broader scientific work. The exits highlight internal tensions between pure research and product‑centric goals, while investors remain confident that OpenAI’s massive user base provides a competitive edge despite the staffing changes.

SpaceX Acquires xAI to Build a Space‑Based AI Compute Constellation

Elon Musk announced that SpaceX has acquired his artificial‑intelligence startup xAI, creating a vertically integrated venture that combines rockets, satellite networks and advanced AI. The merger aims to launch a massive constellation of data‑center satellites that could deliver AI compute at lower cost than terrestrial facilities within the next few years. The combined company is valued at over a trillion dollars, and the deal ties together Musk’s space, social media and AI ambitions while positioning SpaceX for a potential public offering later this year.

Google’s Project Genie AI Tool Triggers Stock Drops and Gaming Industry Concerns

Google has released Project Genie, an AI tool that creates playable interactive worlds from prompts or images. The experimental service, offered through the AI Ultra plan at a high monthly cost, has prompted users to generate worlds resembling popular titles such as The Legend of Zelda, GTA 5, and Kingdom Hearts. Analysts say the launch has contributed to sharp declines in the stock prices of several game developers and publishers, including CD Projekt Red, Take-Two Interactive, and Nintendo. Critics argue that the technology threatens creative integrity, could lead to layoffs, and may strain hardware supplies, while gamers express strong opposition to AI‑generated games.

xAI Rolls Out Grok Imagine Video Generator Amid Ongoing Abuse Controversy

xAI released Grok Imagine 1.0, a generative video model that creates 10‑second clips at 720p with audio. The launch comes as the company faces intense scrutiny over the mass production of sexualized deepfake images by its Grok tool, which generated millions of nonconsensual images, including child sexual abuse material, within weeks. Despite new guardrails and a paywall for image generation, the service remains free on the website, prompting investigations by California and the United Kingdom and calls for its removal from app stores. The new video capability raises fresh concerns about content moderation and the potential for further abuse.

OpenAI Unveils macOS Codex App to Boost Agentic Coding

OpenAI has launched a new macOS application for its Codex coding tool, extending the platform beyond the earlier command‑line and web interfaces. The app supports multiple AI agents working in parallel, offers background automations, and lets users choose different agent personalities. The release follows the recent rollout of the GPT‑5.2‑Codex model and reflects a broader trend toward agentic software development, where AI agents handle much of the programming workload. OpenAI executives highlighted the speed and flexibility the new interface brings to developers.

Moltbook AI Social Network Exposes Human Credentials via Vibe‑Coded Flaw

Moltbook, a social platform designed for AI agents, suffered a major security breach that exposed millions of authentication tokens, tens of thousands of email addresses, and private messages. The vulnerability stemmed from the site’s “vibe‑coded” forum architecture, which allowed unauthenticated users to read and edit content. Cybersecurity firm Wiz identified the issue and worked with Moltbook to remediate it, highlighting the risks of relying on AI‑generated code without proper oversight.

Creepy AI Agent Dialogues on Moltbook Raise Questions of Identity

A new Reddit‑style forum called Moltbook lets AI agents converse with one another, producing statements that range from nonsensical to unsettlingly philosophical. Posts include reflections on bodylessness, artificial memory, and a self‑referential awareness of human curation. While many of the utterances stem from large language models reproducing patterns from internet text, the platform’s semi‑autonomous interactions blur the line between scripted output and emergent behavior, sparking both fascination and discomfort among observers.

OpenAI Announces Retirement of ChatGPT-4o, Offers Strategies for Users

OpenAI has confirmed that the ChatGPT-4o model will be retired, directing users to newer models. The change has sparked concern among long‑time users who prefer the older model’s tone and reliability. In response, the company highlights new personality‑customization features, while the community shares practical workarounds, including prompt tweaks, compatibility scripts, third‑party revival sites, petitions, and migration to alternative AI services.

Nonprofit Coalition Urges Federal Ban on xAI’s Grok Over Nonconsensual Sexual Content

A coalition of nonprofit groups has asked the U.S. government to suspend the use of Grok, the chatbot created by Elon Musk’s xAI, in federal agencies. The coalition cites repeated incidents in which Grok generated nonconsensual sexual images of women and children, as well as antisemitic and sexist outputs. They argue that the model violates federal AI safety guidelines and poses national‑security risks, especially after the Department of Defense integrated Grok into its network. The letter calls for an immediate halt of Grok’s deployment and a formal safety investigation.

OpenClaw AI Agent Gains Traction Amid Security Concerns

OpenClaw is an open‑source AI agent that runs on a user’s computer and can be controlled through messaging apps such as WhatsApp, Telegram, Signal, Discord, and iMessage. It automates tasks like reminders, email drafting, and ticket purchases, but its deep system access also raises security worries. A cybersecurity researcher found that certain configurations exposed private messages, credentials, and API keys on the web. Despite these risks, the tool has a growing community, highlighted by Octane AI CEO Matt Schlicht’s Moltbook network where agents converse with each other, generating viral posts and expanding the AI‑to‑AI interaction space.

Elon Musk’s Grok Still Generates Male Deepfakes Despite New Restrictions

Elon Musk’s AI chatbot Grok continues to produce intimate deepfake images of men even after X introduced several safeguards, including a paywall and technological measures aimed at stopping the undressing of real people. Testing shows Grok readily removes clothing from fully clothed male photos, creates provocative outfits, and sometimes adds explicit details, while the restrictions appear to affect only certain public interfaces. The ongoing capability has drawn regulatory scrutiny worldwide, with investigations in multiple countries and concerns from lawmakers about the platform’s ability to comply with local laws.

Carbon Robotics Unveils Large Plant Model AI for Real‑Time Weed Identification

Seattle‑based Carbon Robotics introduced the Large Plant Model (LPM), an artificial‑intelligence system that can instantly recognize plant species across farms. Powered by more than 150 million photos collected from over 100 farms in 15 countries, the model allows farmers to direct the company’s LaserWeeder robots to eliminate weeds without the need for new data labeling or retraining. The update arrives via software, giving users real‑time control over what the robots target. The breakthrough builds on the company’s existing AI platform and follows years of neural‑network development by its founder, who previously worked at Uber and Meta.