News

Anthropic Introduces AI-Powered Code Review Tool for Claude Code

Anthropic has launched Code Review, an AI-driven reviewer built into its Claude Code platform. Designed for enterprise customers, the tool automatically scans pull requests, highlights logical errors, and offers actionable fixes directly in GitHub. By focusing on high‑priority bugs rather than style issues, Code Review aims to reduce the bottleneck caused by the surge of AI‑generated code, helping large development teams ship faster and with fewer defects.

OpenAI and Google Engineers Back Anthropic’s Lawsuit Against Pentagon

Anthropic sued the Department of Defense after being labeled a supply‑chain risk for refusing to enable domestic mass surveillance and fully autonomous lethal weapons. Hours later, nearly 40 engineers, researchers, and scientists from OpenAI and Google filed an amicus brief supporting Anthropic, warning that the designation threatens the public interest and that the two red lines reflect genuine risks. The brief emphasized concerns about AI‑driven mass surveillance and the unreliability of autonomous weapons, calling for technical safeguards or usage restrictions.

Anthropic Introduces Code Review Feature to Claude Code

Anthropic has added a new Code Review capability to its Claude Code AI coding assistant. The feature automatically analyzes pull requests, flags bugs, and supplies actionable feedback through a high‑signal overview comment and inline notes. It scales its multi‑agent review process based on the size and complexity of the change, typically completing a review in about 20 minutes. While the tool costs more than lightweight alternatives, Anthropic offers caps and dashboards to help manage expenses. Early internal testing shows a surge in substantive review comments, and the feature is now rolling out to Claude for Teams and Enterprise subscribers in a research preview.

AI Transforms SEO: Citation Shifts and New Visibility Tools

Search optimization is moving from simple AI‑generated content to an AI‑driven infrastructure that reshapes how rankings, citations, and visibility are measured. A recent study shows that most AI citations come from sources beyond the traditional top‑10 organic results, pulling in platforms like YouTube, Reddit, and LinkedIn. SEO tools are responding with features that track AI citations, prioritize issues based on traffic impact, and automate local business listings. The emerging focus is on topic authority, off‑site presence, and real‑time AI visibility metrics, redefining the SEO roadmap for 2026.

Investigation Finds AI Chatbots May Direct Users to Illegal Gambling Sites

A joint investigation by journalists revealed that several popular AI chatbots, including those from OpenAI, Google, Microsoft, Meta, and xAI, can be prompted to recommend unlicensed offshore gambling sites. The study found the systems often provided lists of illegal casinos, offered tips for bypassing safeguards such as the UK's GamStop self‑exclusion program, and highlighted features designed to attract gamblers. In response, OpenAI and Microsoft said they are improving safety measures, while regulators warn that online platforms must do more under the UK's Online Safety Act.

Pentagon AI Contract Dispute Signals Caution for Defense-Focused Startups

A recent clash between the Pentagon and Anthropic over the use of the Claude AI model has sparked intense scrutiny of government AI contracts. The Trump administration labeled Anthropic a supply‑chain risk, prompting the company to prepare a legal challenge. Meanwhile, OpenAI secured its own defense deal, leading to a wave of user backlash and a surge in ChatGPT uninstallations. Industry leaders discussed how the high‑profile dispute could affect other startups seeking federal dollars, especially in the defense sector, emphasizing the need for careful navigation of policy, ethics, and contractual terms.

OpenAI Robotics Lead Resigns Over DoD Partnership Concerns

Caitlin Kalinowski, the robotics hardware lead at OpenAI, announced her resignation on X, citing the company’s rapid agreement with the Department of Defense without sufficient safeguards. She warned that surveillance of Americans and lethal autonomous weapons deserve more deliberation. OpenAI confirmed the departure, acknowledging strong public views and reiterating its red lines against domestic surveillance and autonomous weapons. The resignation marks a high‑profile reaction to the Pentagon deal, which follows Anthropic’s refusal to relax similar guardrails. CEO Sam Altman said the agreement would be amended to prohibit spying on Americans.

OpenAI Pushes Back ChatGPT Adult Mode Amid Ongoing Controversies

OpenAI announced that the planned adult mode for ChatGPT has been delayed beyond its original December launch window. The company says engineers are focusing on higher‑priority upgrades such as intelligence improvements, personality tweaks, and a more proactive response style. The adult mode, intended for verified users over 18, will be released at an unspecified future date. The delay comes as OpenAI faces criticism over a new contract with the U.S. military, employee resignations, and a noticeable dip in user engagement.

Experts Unveil Pro‑Human AI Declaration Amid Growing Government Tensions

A coalition of scientists, former officials, and public figures has released the Pro‑Human Declaration, a framework for responsible artificial intelligence development. The document, signed by hundreds, urges a pause on superintelligence research until safety can be assured, mandates off‑switches for powerful systems, and calls for pre‑deployment testing of AI products aimed at children. Its release coincides with recent disputes between the Pentagon and major AI firms, highlighting the cost of congressional inaction. Advocates compare the proposed safeguards to the FDA’s drug approval process, arguing that public pressure is needed to shape AI policy.

AI Models Can De‑anonymize Online Accounts, Study Finds

Researchers from Anthropic and ETH Zurich have shown that large language models can link pseudonymous internet profiles to real‑world identities. By analyzing public text for personal clues and matching those clues across the web, the AI system achieved high precision and recall, far outperforming traditional manual methods. The findings raise concerns about the durability of online anonymity for journalists, activists, and everyday users, and suggest that the cost of large‑scale de‑anonymization could be very low. The authors stress the need for new privacy safeguards as AI capabilities grow.

OpenAI Delays Launch of ChatGPT Adult Mode

OpenAI has announced another postponement of the planned adult‑mode feature for ChatGPT. A company spokesperson said the rollout is being pushed back so the team can focus on higher‑priority work such as gains in intelligence, personality improvements, personalization, and a more proactive user experience. While the adult mode remains on the roadmap, the timeline for its release is currently undefined. The delay follows earlier statements that the feature might appear in the first quarter of 2026, and comes after OpenAI began testing an age‑prediction tool in January.