Latest AI News

OpenAI Leverages Cerebras Wafer-Scale Chip to Boost Codex Speed

OpenAI has teamed with Cerebras to run its Codex-Spark coding model on the Wafer Scale Engine 3, a chip the size of a dinner plate. The partnership aims to improve inference speed, delivering roughly 1,000 tokens per second, with higher rates reported on other models. The move reflects OpenAI’s broader strategy of reducing its reliance on Nvidia by striking deals with AMD and Amazon and by developing its own custom silicon. The faster coding assistant arrives amid fierce competition from Anthropic, Google and other AI firms, underscoring the importance of latency for developers building software.

Google Warns of Large-Scale AI Model Extraction Attacks Targeting Gemini

Google’s Threat Tracker report reveals that hackers are conducting "distillation attacks" by flooding the Gemini AI model with more than 100,000 prompts to steal its underlying technology. The attempts appear to originate from actors in North Korea, Russia and China and are classified as model extraction attacks, where adversaries probe a mature machine‑learning system to replicate its capabilities. While Google says the activity does not threaten end users directly, it poses a serious risk to service providers and AI developers whose models could be copied and repurposed. The report highlights a growing wave of AI‑focused theft and underscores the need for stronger defenses in the rapidly evolving AI landscape.

Google Reports Model Extraction Attacks on Gemini AI

Google disclosed that commercially motivated actors have tried to clone its Gemini chatbot by prompting it more than 100,000 times in multiple non‑English languages. The effort, described as “model extraction,” is framed as intellectual‑property theft. The company’s self‑assessment also references past controversy over using ChatGPT data to train Bard, a warning from former researcher Jacob Devlin, and the broader industry practice of “distillation,” where new models are built from the outputs of existing ones.

OpenAI Launches Codex‑Spark, a Fast, Lightweight Coding Assistant Powered by Cerebras Chip

OpenAI unveiled Codex‑Spark, a lightweight version of its Codex coding assistant designed for rapid inference and real‑time collaboration. The new model runs on Cerebras' Wafer Scale Engine 3, a megachip featuring four trillion transistors, marking a deeper hardware integration between the two companies. Currently in a research preview for ChatGPT Pro users, Spark aims to accelerate prototyping while complementing the heavier, longer‑running tasks of the original Codex model.

Reporter Tests RentAHuman, AI‑Powered Gig Platform Falls Short

A journalist signed up for RentAHuman, a new marketplace where AI agents hire humans for real‑world tasks. After linking a crypto wallet and lowering hourly rates, the reporter received no job offers and found the listed gigs to be low‑pay marketing stunts, such as posting social‑media comments or delivering flowers for an AI startup. Attempts to complete a flyer‑hanging gig were thwarted by miscommunication and empty locations. Interviews with a founder of an AI developer community highlighted the platform’s hype‑driven design and lack of functional demand, leaving the reporter convinced that RentAHuman is more a publicity tool than a viable gig platform.

Microsoft Warns AI Agents Could Become Double Agents

Microsoft cautions that rapid deployment of workplace AI assistants can turn them into insider threats, warning that a compromised assistant can act as a "double agent." The company’s Cyber Pulse report explains how attackers can manipulate an agent’s access or feed it malicious input, exploiting its legitimate privileges to cause damage inside an organization. Microsoft urges firms to treat AI agents as a new class of digital identity, apply Zero Trust principles, enforce least‑privilege access, and maintain centralized visibility to prevent memory‑poisoning attacks and other forms of tampering.