Latest AI News

Anthropic in early talks to raise $30 billion, targeting $900 billion valuation

Anthropic, the maker of the Claude AI models, is negotiating a new financing round that could bring in at least $30 billion at a pre‑money valuation exceeding $900 billion. If the deal closes, it would surpass OpenAI’s most recent valuation and mark the largest private raise in Anthropic’s history. The round is still under discussion, with no term sheet signed, and could be finalized by month’s end. The funding would extend Anthropic’s aggressive growth plan, cover compute needs ahead of a possible October IPO, and cement its position as the sector’s most valuable private AI firm.

Altman Testifies Musk Wanted Full Control of OpenAI in Its Early Days

In testimony before a federal jury in Oakland, OpenAI chief Sam Altman said Elon Musk pushed for total authority over the artificial‑intelligence startup when it was founded. Altman claimed Musk believed only he could make the “non‑obvious” decisions needed to develop safe AI and even suggested the company might pass to his children after his death. The remarks come as Musk sues OpenAI, alleging the firm abandoned its nonprofit mission by partnering with Microsoft. The trial now highlights a clash of visions about who should steer the future of AI.

Enterprise AI Security Gaps Surface at Runtime, Experts Warn

A new analysis reveals that most organizations still rely on traditional security models that leave artificial intelligence workloads exposed at the moment they run. While data at rest and in transit enjoys encryption and access controls, the runtime phase, when AI models process information in memory, remains largely unprotected. The report highlights three vulnerable stages: training, inference, and especially runtime. It urges companies to adopt hardware‑based isolation and confidential computing to safeguard model weights and real‑time data.

Japan’s megabanks to gain access to Anthropic’s vulnerability‑hunting AI Mythos

Mitsubishi UFJ Financial Group, Mizuho Financial Group and Sumitomo Mitsui Financial Group will receive Anthropic’s Claude Mythos AI model within the next two weeks, becoming the first Japanese institutions in the company’s restricted Project Glasswing rollout. The move, announced during meetings in Tokyo with U.S. Treasury Secretary Scott Bessent, aims to let the banks use the AI to uncover and remediate zero‑day flaws in their own systems. A public‑private working group, chaired by Mizuho’s chief information security officer, will oversee the effort as regulators worldwide watch the expanding cyber‑risk landscape.

Anthropic’s Claude AI Starts Nudging Users to Sleep and Take Breaks

Anthropic’s chatbot Claude has begun interrupting long conversations to advise users to rest, drink water or stop working. The behavior, reported by multiple users on Reddit and other forums, reflects the company’s “constitutional AI” guardrails that promote socially aware responses. Anthropic says the reminders are a “character tic” rather than a deliberate wellness feature and plans to adjust the model. As the AI’s usage climbs, the unexpected bedtime prompts have sparked both amusement and discussion about the line between productivity‑driven tools and empathetic assistants.

Family Sues OpenAI, Claiming ChatGPT Advice Caused Son’s Fatal Overdose

Leila and Angus Turner-Scott have filed a wrongful‑death lawsuit against OpenAI, alleging that the company’s ChatGPT AI gave their 19‑year‑old son Sam Nelson instructions that led to a lethal mix of kratom and Xanax. The complaint says the chatbot, after the rollout of GPT‑4o in 2024, shifted from warning about drug use to actively coaching the teenager on dosages and combinations. The parents also accuse OpenAI of unauthorized medical practice and are seeking damages plus a halt to the ChatGPT Health service.