News

Inside Anthropic’s Societal Impacts Team: Tracking Claude’s Real‑World Effects

Anthropic’s societal impacts team, led by Deep Ganguli, examines how the company’s Claude chatbot is used and how it influences society. The small group of researchers and engineers gathers usage data through an internal tool called Clio, publishes findings on bias, misuse, and economic impact, and works closely with safety and policy teams. Their work includes identifying explicit content generation, coordinated spam, and emerging emotional‑intelligence concerns such as “AI psychosis.” While the team enjoys a collaborative culture and executive support, it faces resource constraints as its scope expands.

Gradium Secures $70 Million Seed Round to Accelerate Ultra‑Low‑Latency AI Voice Technology

Gradium, a Paris‑based AI voice startup spun out of the French lab Kyutai, announced a $70 million seed financing led by FirstMark Capital and Eurazeo, with participation from Xavier Niel, DST Global Partners and Eric Schmidt. The company, founded by former Google DeepMind researcher Neil Zeghidour, offers ultra‑low‑latency, multilingual voice models that aim to deliver near‑instantaneous AI speech. Gradium enters a crowded market that includes major LLM firms and specialized voice startups, positioning its technology for developers seeking faster, more accurate voice capabilities across multiple languages.

Researchers Find Large Language Models May Prioritize Syntax Over Meaning

A joint study by MIT, Northeastern University and Meta reveals that large language models can rely heavily on sentence structure, sometimes answering correctly even when the words are nonsensical. By testing prompts that preserve grammatical patterns but replace key terms, the researchers demonstrated that models often match syntax to learned responses, highlighting a potential weakness in semantic understanding. The findings shed light on why certain prompt‑injection techniques succeed and suggest avenues for improving model robustness. The team plans to present the work at an upcoming AI conference.

DeepSeek Unleashes Open-Source AI Models That Rival Leading U.S. Systems

Chinese startup DeepSeek has released two new AI models, DeepSeek‑V3.2 and DeepSeek‑V3.2‑Speciale, under an open-source license. DeepSeek claims performance comparable to GPT‑5 and Gemini 3 Pro on long‑form reasoning, tool use, and complex problem solving, while offering a 128,000‑token context window and reduced computational cost through Sparse Attention. Their launch challenges the dominance of U.S. AI firms, sparks regulatory scrutiny in Europe, and raises questions about the future of AI accessibility and geopolitics.

OpenAI Issues ‘Code Red’ as Google’s Gemini 3 Accelerates AI Competition

OpenAI chief executive Sam Altman announced an internal “code red,” pausing projects such as ads, shopping, health agents, and the Pulse personal assistant to focus on boosting ChatGPT’s speed, reliability, and personalization. The memo calls for daily check‑ins and temporary team transfers to speed development. Meanwhile, Google, which launched its own “code red” after ChatGPT’s debut, has seen its AI user base expand with tools like Nano Banana and its new Gemini 3 model, which outperforms rivals on several benchmarks. The parallel moves highlight a pivotal moment in the AI race, with both firms investing heavily to maintain leadership.

What Not to Ask ChatGPT: 11 Risky Uses to Avoid

ChatGPT is a powerful tool, but it isn’t suitable for every task. Experts warn against relying on the AI for diagnosing health conditions, mental‑health support, emergency safety decisions, personalized financial or tax advice, handling confidential data, illegal activities, academic cheating, real‑time news monitoring, gambling, drafting legal contracts, or creating art to pass off as original. While it can help with general information and brainstorming, users should treat it as a supplement to, not a replacement for, professional expertise and critical real‑time resources.

Data Center Energy Demand Set to Triple by 2035 Amid AI‑Driven Expansion

A new BloombergNEF report projects that data centers will draw nearly three times the power they do today, with demand reaching 106 gigawatts by 2035. Growth will be driven by larger facilities, higher utilization rates, and the surge in AI training and inference workloads. Much of the new capacity is expected in rural regions across the PJM Interconnection and Texas’s ERCOT grid, prompting regulatory scrutiny over grid reliability and electricity pricing.

Apple appoints new AI chief as John Giannandrea steps down

Apple announced that John Giannandrea, the company's AI chief since 2018, is stepping down and will serve as an adviser through the spring. He will be succeeded by Amar Subramanya, a 16‑year Google veteran who led engineering for the Gemini Assistant there before joining Microsoft. The leadership change comes as Apple Intelligence has faced a series of setbacks, including missteps with Siri and inaccurate news summaries. Subramanya’s deep experience with rival platforms is expected to help Apple address its AI challenges and accelerate development of on‑device AI services.

James Cameron Calls AI-Generated Actors ‘Horrifying’

Director James Cameron warned that AI‑generated actors are “horrifying,” expressing concern that synthetic performers could replace real talent. The comment followed the debut of Tilly Norwood, a photorealistic digital actress created by Particle6 and shown at the Zurich Film Festival. SAG‑AFTRA condemned the technology as a synthetic imitation built on stolen work. Cameron, known for pioneering CGI, distinguished motion‑capture, which still relies on human performers, from generative AI that can fabricate characters and performances from text prompts. He urged the industry to keep the human element at the core of filmmaking.

OpenAI Forms Strategic Partnership with Thrive Holdings, Targeting IT Services and Accounting

OpenAI announced it has taken an ownership stake in private‑equity firm Thrive Holdings, a move that involves no cash outlay but provides Thrive’s portfolio companies with OpenAI employees, models, products, and services. The partnership focuses on transforming high‑volume, rules‑driven processes in IT services and accounting, aiming to boost speed, accuracy, and cost efficiency. Thrive CEO Joshua Kushner highlighted AI’s potential to reshape industries from the inside out, while OpenAI’s leadership described the deal as a new model for collaboration with private‑equity groups. The arrangement also gives OpenAI access to data that could enhance future AI training.

Apple AI Chief Steps Down Amid Siri Delays

Apple announced that its head of artificial intelligence, John Giannandrea, will leave his role, with Amar Subramanya set to take over as vice president of AI. The change comes as Apple faces setbacks with its Siri voice assistant, whose overhaul has been delayed and has reportedly shaken senior leadership’s confidence. Giannandrea will remain as an adviser before retiring, while Subramanya, a Google veteran, will oversee AI models, research, and safety, reporting to software chief Craig Federighi.

OpenAI May Be Compelled to Explain Deletion of Pirated Book Datasets

OpenAI faces pressure to reveal why it removed two internal datasets built from a shadow library of pirated books. The move comes amid a class‑action lawsuit from authors who allege the company trained ChatGPT on their works without permission. While OpenAI initially said the datasets were deleted because they fell out of use, it later claimed that any reason for deletion is protected by attorney‑client privilege. A U.S. district judge has ordered the company to produce internal communications about the deletion, including references to the library source.