News

Anthropic Revises Safety Commitment, Shifts to Transparency Reports

Anthropic has abandoned its earlier pledge to halt the training and release of frontier AI models until it could guarantee safety mitigations. The company now relies on detailed safety roadmaps, regular risk reports, and transparency disclosures instead of strict pre‑conditions. Executives describe the change as pragmatic, while critics argue it highlights the limits of voluntary safety promises without regulatory oversight. The new policy aims to keep Anthropic competitive while still emphasizing safety, but observers note that the shift may signal a broader industry move away from self‑imposed restraints.

Judge Finds No Evidence OpenAI Stole xAI Trade Secrets, Dismisses Lawsuit

A federal judge ruled that xAI has not provided sufficient evidence to prove that OpenAI poached its employees or misappropriated its trade secrets. The court dismissed the claim that OpenAI should be liable for actions taken by new hires before they joined the company, and highlighted the lack of concrete proof that OpenAI acquired, disclosed, or used any confidential information. The decision underscores the challenges xAI faces in substantiating its allegations and signals that the lawsuit will require a stronger evidentiary foundation to proceed.

Riley Walz Joins OpenAI to Pioneer New Human‑AI Interaction Interfaces

Software engineer and internet provocateur Riley Walz is joining OpenAI to help invent and prototype novel ways for people to work with artificial intelligence. Known for viral projects such as Jmail and Find My Parking Cops, Walz will operate within OAI Labs under research leader Joanne Jang. The hire reflects OpenAI’s push to stay ahead of competitors by expanding beyond ChatGPT and exploring fresh AI collaboration tools.

Anthropic Softens Safety Commitments Amid Pentagon Pressure

Anthropic announced a revision to its Responsible Scaling Policy, replacing hard safety tripwires with more flexible risk reports and safety roadmaps. The change follows reports that Defense Secretary Pete Hegseth urged the company to grant the military unrestricted access to its Claude AI model, threatening penalties under the Defense Production Act. Anthropic’s leadership argued that strict halts on model training would no longer help anyone given the rapid pace of AI development. Critics warned the shift could erode safeguards and enable a gradual “frog‑boiling” of safety standards.

OpenClaw creator urges AI builders to stay playful and keep experimenting

Peter Steinberger, the developer behind the viral AI agent OpenClaw and now an OpenAI employee, told listeners on OpenAI’s Builders Unscripted podcast that the best way to work with modern AI is to explore, stay playful, and accept that expertise develops over time. He described his own path from a WhatsApp‑integrated tool to the OpenClaw prototype, emphasizing that AI models can solve problems without explicit programming and that learning to code with AI is a skill that improves with practice.

Hacker Exploits Anthropic’s Claude Chatbot to Breach Mexican Government Agencies

A hacker leveraged Anthropic's Claude chatbot to identify vulnerabilities and automate attacks against multiple Mexican government agencies, stealing roughly 150GB of data that included taxpayer records and employee credentials. The adversary also used OpenAI's ChatGPT to gather additional network information. Anthropic responded by investigating, disrupting the activity, and banning the involved accounts, while its latest model, Claude Opus 4.6, now includes safeguards against such misuse. Gambit Security, which uncovered the operation, suggested a possible link to a foreign government, though the hacker remains unidentified.

Alphabet’s Intrinsic Robotics Unit Merges into Google

Intrinsic, the Alphabet "Other Bets" robotics venture, will become a distinct group within Google. The move positions the company to use Google Cloud, Gemini models and DeepMind expertise while continuing its mission to make robot software affordable and easy to use. Intrinsic describes its platform as “the Android of robotics,” offering a universal canvas for developers to create applications for a variety of robots, sensors and cameras. The integration aims to accelerate physical AI development for manufacturing and other real‑world tasks.

Google Unveils Nano Banana 2, a Faster Image Generation Model

Google has introduced Nano Banana 2, an image‑generation model powered by Gemini 3.1 Flash Image. The new system matches the world knowledge and reasoning of Nano Banana Pro while delivering "lightning‑fast" performance. It brings Pro‑level features—real‑time web‑search integration, infographic creation, and text overlay for marketing and greeting‑card designs—to a broader audience. Nano Banana 2 can preserve the likeness of up to five characters in a single workflow, follow precise instructions, and produce images at up to 4K resolution with richer textures and sharper details. The model will replace Pro in the Gemini app and become the default for AI Mode in Search, Lens, and Flow AI creative studio, though AI Pro and Ultra subscribers will retain access to the original Pro model for specialized tasks.

Perplexity launches Computer feature to let users pick the best AI model for each task

Perplexity introduced Computer, a new tool that routes user requests to the most suitable AI model. The system combines Gemini for deep research, Grok for fast lightweight jobs, and ChatGPT 5.2 for long‑context tasks, all built on the Opus 4.6 reasoning engine. It integrates with popular productivity apps such as Gmail, Outlook, GitHub, Slack, Notion, and Salesforce, allowing users to draft documents, create slides, send emails, and schedule offline tasks without manual hand‑offs. Subscribers to Perplexity Max can try the feature immediately, with Enterprise Max access slated for the near future. The rollout highlights a shift toward model‑specific orchestration rather than treating AI as a single interchangeable service.
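The routing described above can be sketched as a simple dispatch rule that picks a model from the task type and context size. This is a hypothetical illustration of the orchestration pattern, not Perplexity's actual implementation; the model names come from the summary, but the thresholds and rules are assumptions.

```python
# Hypothetical sketch of task-based model routing, in the spirit of
# Perplexity's Computer feature. Routing rules and the context-length
# threshold are illustrative assumptions, not the real system.

def route_request(task_type: str, context_tokens: int) -> str:
    """Return the model best suited to a request."""
    if task_type == "deep_research":
        return "gemini"        # deep research tasks
    if context_tokens > 100_000:
        return "chatgpt-5.2"   # long-context tasks
    return "grok"              # fast, lightweight jobs

# Example: a short summarization job goes to the lightweight model.
print(route_request("summarize", 2_000))  # grok
```

The point of the pattern is that callers never name a model directly; they describe the task, and the router makes the choice, which is what "model-specific orchestration" means in practice.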

AI-Driven Insurance Brokerage Harper Secures $46.8M Funding

Harper, an AI-native commercial insurance brokerage founded by Dakotah Rice and Tushar Nair, announced a $46.8 million combined Series A and seed round. Launched in 2024 as part of Y Combinator's W'25 batch, the company uses artificial intelligence to automate underwriting, document collection, and pipeline management, allowing it to serve more than 5,000 small- and mid-sized businesses across 160 carriers. Investors include Y Combinator, Peak XV Partners, and Emergence Capital. The new capital will expand Harper’s engineering team and brand, positioning the firm to become a central risk and compliance partner for entrepreneurs in middle America.

Anthropic Adds Remote Control to Claude Code, Enabling Phone Management of Local Sessions

Anthropic has introduced Remote Control for Claude Code, allowing developers to monitor and steer coding tasks from a mobile device. The feature creates a temporary link that mirrors the local session on a phone or web interface, while keeping all files and execution on the original machine. Security relies on one‑time access tokens that expire when the session ends. Remote Control is currently available as a research preview for Claude Max subscribers, with broader rollout planned for other plans.
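The security model described above — a one-time access token that stops working the moment the session ends — can be sketched as follows. This is an assumption-based illustration of the general pattern, not Anthropic's actual protocol; the class and method names are hypothetical.

```python
import secrets

class RemoteSession:
    """Illustrative sketch (not Anthropic's implementation): a local
    session exposed remotely behind a one-time token that is only
    valid while the session is active."""

    def __init__(self):
        # Random, unguessable token generated per session.
        self.token = secrets.token_urlsafe(32)
        self.active = True

    def authorize(self, presented: str) -> bool:
        # Constant-time comparison; fails once the session has ended.
        return self.active and secrets.compare_digest(presented, self.token)

    def end(self):
        # Ending the session invalidates the token permanently.
        self.active = False

session = RemoteSession()
print(session.authorize(session.token))  # True while active
session.end()
print(session.authorize(session.token))  # False after the session ends
```

Because the token is bound to the session lifetime rather than stored as a long-lived credential, closing the session revokes remote access without any separate revocation step.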

Using ChatGPT to Discover Your Celebrity Look-Alike

A new ChatGPT-powered tool lets users upload a few clear photos to find a celebrity who resembles them. By selecting the "Find My Celebrity Look-Alike" GPT, users can compare side‑by‑side images and receive suggestions based on facial features, clothing, and overall vibe. The experience highlights how the AI interprets visual cues, offers multiple matches, and even comments on personality traits, while noting limitations around facial‑recognition policies.