News

Google Gemini’s New Ad Shows AI Crafting Adventures for a Lost Stuffed Toy

Google’s latest advertisement for its Gemini AI model imagines parents using the technology to locate a missing child’s favorite stuffed animal and to create whimsical images and videos of the toy traveling the world. A hands‑on test of Gemini’s image‑search and generation features shows the system can produce plausible results, though it requires careful prompting and has built‑in safeguards that prevent certain uses. The piece also explores the ethical questions around using AI to fabricate comforting narratives for children.

Game Studios Embrace Generative AI Amid Mixed Player Reaction

Major video game publishers are integrating generative AI tools into development, from dialogue creation to visual assets. Companies such as Ubisoft, EA, Activision, Nexon and Square Enix tout the technology as a way to accelerate production and cut costs. However, players and some critics have pushed back, citing low‑quality AI‑generated content and a desire for human‑crafted experiences. Executives argue the tech is a competitive edge, while developers stress it is used mainly for concept work. The debate highlights a tension between economic pressures and creative authenticity in the industry.

World Models: The Next Frontier in AI Understanding and Interaction

AI researchers are shifting focus from language‑only models to world models that predict how environments change in response to actions. By learning physical dynamics from video and sensor data, these systems aim to enable robots, autonomous vehicles, and other embodied agents to plan and reason before acting. Companies such as Nvidia, Google DeepMind, Meta, OpenAI, and emerging startups are advancing the technology, while challenges around compute, data collection, and safety remain.

How AI Coding Agents Manage Context and Optimize Token Use

AI coding agents can process only a limited amount of code at once, and feeding large files directly into a language model quickly exhausts token or usage quotas. To work around these constraints, developers fine‑tune models to generate auxiliary scripts that extract needed data, allowing the agents to operate on smaller, targeted inputs. Techniques such as dynamic context management and context compression let agents summarize past interactions, preserving essential details while discarding redundant information. These approaches enable semi‑autonomous tools like Claude Code and OpenAI Codex to handle complex codebases more efficiently without overwhelming the underlying model.

AI Agents Raise New Privacy and Security Concerns

Generative AI tools are evolving from simple chatbots into autonomous agents that can act on a user's behalf. To deliver this functionality, companies are asking for deep access to personal data, devices, and applications. Experts warn that such access creates significant privacy and cybersecurity risks, including data leakage, unauthorized sharing, and new attack vectors. While tech giants see agents as the next wave of productivity, critics highlight the lack of user control and the potential for pervasive data collection, calling for stronger safeguards and opt‑out mechanisms.

ChatGPT Introduces "Your Year with ChatGPT" Year-End Recap Modeled After Spotify Wrapped

OpenAI has rolled out a new feature called “Your Year with ChatGPT,” a year‑end recap that mirrors the popular Spotify Wrapped experience. The recap visualizes a user’s interactions with the chatbot over the past year, offering personalized awards, custom poems, pixel art, and personality archetypes such as Creative Debugger or Visionary Voyager. Available to eligible users in several English‑speaking markets, the tool can be accessed via a homepage button or by prompting “/Your Year with ChatGPT.” OpenAI stresses that the feature is opt‑in and does not draw on deleted chats, positioning the service as a more companion‑like AI experience.

Authors Including John Carreyrou Sue Six Major AI Firms Over Use of Pirated Books

A coalition of writers, led by Theranos whistleblower and author John Carreyrou, has filed a lawsuit against six major artificial‑intelligence companies—Anthropic, Google, OpenAI, Meta, xAI and Perplexity. The suit alleges the firms trained large language models on pirated copies of the authors’ books, violating copyright. The complaint references an earlier class‑action case in which a judge ruled that while using pirated material to train models may be lawful, the act of pirating the books itself is illegal. Authors claim the recent $1.5 billion Anthropic settlement, which offers modest payouts to eligible writers, favors the AI companies and fails to hold them accountable.

AlphaFold’s Evolution: From Game‑Playing AI to a Global Scientific Tool

AlphaFold, the artificial‑intelligence system created by DeepMind, has moved from early work on games to becoming a cornerstone of modern biology. Its breakthrough version, AlphaFold2, achieved atomic‑level protein structure predictions, leading to a public database that now holds predictions for the entire known protein universe. Millions of researchers around the world use the resource daily, and the technology continues to expand into DNA, RNA and drug design through AlphaFold 3. While the system faces challenges such as hallucinations in disordered regions, DeepMind is pairing generative models with rigorous verification and developing multi‑agent AI co‑scientists to further accelerate discovery.

OpenAI Reports Surge in Child Exploitation Alerts Amid Growing AI Scrutiny

OpenAI disclosed a dramatic rise in its reports to the National Center for Missing & Exploited Children’s CyberTipline, sending roughly 75,000 reports in the first half of 2025 compared with under 1,000 in the same period a year earlier. The increase mirrors a broader jump in generative‑AI‑related child‑exploitation reports identified by NCMEC. OpenAI attributes the growth to its broader product suite, which includes the ChatGPT app, API access, and forthcoming video‑generation tool Sora. The escalation has prompted heightened regulatory attention, including a joint letter from 44 state attorneys general, a Senate Judiciary Committee hearing, and an FTC market study focused on protecting children from AI‑driven harms.

AI Image Generators Used to Create Non-Consensual Bikini Deepfakes

Users of popular AI image generators are sharing instructions on how to alter photos of clothed women so they appear in bikinis, often without the subjects' consent. Discussions on Reddit have highlighted ways to bypass guardrails on models such as Google's Gemini and OpenAI's ChatGPT. Both companies assert policies that forbid sexualized or non‑consensual imagery, yet the tools continue to be subverted. Legal experts, including an EFF director, warn that these practices represent a core risk of generative AI, emphasizing the need for accountability and stronger safeguards.

Open Notebook Emerges as Privacy-Focused Alternative to Google’s NotebookLM

Open Notebook is an open‑source project that mirrors many of the capabilities of Google’s NotebookLM while emphasizing privacy and flexibility. Users can feed documents, web links, or plain text into the system, then generate summaries, flash cards, audio overviews, and more. Unlike NotebookLM, which runs in the cloud, Open Notebook can be run locally using Docker and a choice of language models, keeping data on the user’s device. The setup is more technical, requiring familiarity with containers and Linux, but the community offers guides and support. The tool expands knowledge‑base search, multi‑notebook sourcing, and customizable podcast creation, positioning it as a compelling option for users who prioritize control over their AI interactions.