News

Locai Labs Bans Under‑18 Access and Image Generation, Calls for Industry Honesty Amid UK Probe of Elon Musk’s Grok Images

Locai Labs CEO James Drayson announced that the company will block users under 18 and suspend image‑generation features until safety can be assured. He warned that no AI model can guarantee protection against harmful or sexualized content, urging the industry to be transparent about the risks. In the United Kingdom, regulator Ofcom has opened an investigation into Elon Musk’s Grok platform, which allows image editing that can produce non‑consensual and sexualized depictions, including of children. The controversy has already led to bans in several countries and heightened calls for stricter AI regulation.

UK regulator probes X over Grok AI chatbot misuse as Malaysia and Indonesia block service

Britain's media regulator Ofcom has opened a formal investigation into X under the Online Safety Act after reports that the Grok AI chatbot was used to create and share non‑consensual intimate images and child sexual abuse material. The probe will assess X's compliance with its legal duties, risk‑assessment procedures, and age‑verification safeguards. Meanwhile, Malaysia and Indonesia have become the first countries to block access to Grok, citing insufficient safeguards against explicit AI‑generated deepfakes of women and children. Regulators in both countries say the blocks will remain until stronger protections are implemented.

Google’s Play Store Policies Ban AI Apps Like Grok, Yet It Remains Available

Google’s Play Store policy explicitly prohibits apps that host or promote non-consensual sexual content, including deepfake‑generated imagery. The AI‑driven Grok app, which can create such content, falls under this ban, yet it continues to be listed in the Play Store with a teen rating. Apple also carries the app, though its guidelines are less detailed. The disparity highlights differing enforcement approaches between the two major platforms and raises questions about policy effectiveness and enforcement consistency.

Anthropic Launches Claude Cowork, Bringing AI Coding Assistant to General Users

Anthropic has introduced Claude Cowork, a preview feature that extends its Claude Code AI capabilities beyond developers to everyday users. By granting the system access to a folder, users can have Claude read, edit, or create files, organize downloads, convert receipts into spreadsheets, and navigate websites via a Chrome plugin. The tool runs on the Claude Max subscription and requires a Mac with the Claude macOS app. A waitlist is open for broader access. Anthropic emphasizes explicit user permission and clear instructions to avoid unintended actions.

Meta Unveils “Meta Compute” Initiative as Dina Powell McCormick Joins as President and Vice Chairman

Meta announced a new strategic program called Meta Compute to guide its massive infrastructure investments for data centers and artificial intelligence. The rollout coincides with the appointment of former board member Dina Powell McCormick as president and vice chairman, a role that will focus on government partnerships and financing. Santosh Janardhan, head of global engineering, will oversee the top‑level initiative, while Daniel Gross will lead a new group handling long‑term capacity strategy and supplier relationships. The company also disclosed three nuclear power agreements to support its data‑center energy needs and reaffirmed its plan to spend $600 billion on AI infrastructure by 2028.

Anthropic Launches Claude Cowork AI Agent Feature

Anthropic introduced Claude Cowork, a new AI‑agent capability for its Claude chatbot, as a research preview available in the macOS app for Claude Max subscribers. The feature lets users grant Claude access to local folders so it can read, edit, or create files, handling tasks such as reorganizing downloads, generating spreadsheets, or drafting reports. Claude Cowork also integrates with services like Asana, Notion, PayPal, and Chrome, offering continuous updates and parallel task execution. Anthropic highlighted safety concerns, noting the model’s ability to delete files and the risk of prompt‑injection attacks, and urged users to join a waitlist if they are not yet subscribers.

Apple Picks Google Gemini to Power Next-Generation Siri

Apple announced that its upcoming, more intelligent version of Siri will be powered by Google’s Gemini large‑language models. The partnership, described as multi‑year, lets Apple run Gemini on its Private Cloud Compute infrastructure, keeping user data isolated from Google’s servers. Apple said the decision followed an extensive evaluation, noting that Gemini offered the most capable basis for its future foundation models. Bloomberg reported that Apple may pay roughly $1 billion a year for the access and that Apple still aims to eventually replace third‑party models with its own in‑house technology.

Apple Partners with Google to Use Gemini for AI Features Including Siri

Apple has announced a partnership with Google to power its upcoming AI features, including an upgraded Siri, using Google’s Gemini models and cloud infrastructure. The multi‑year deal follows Apple’s evaluation of several AI providers and aligns with its focus on privacy and on‑device processing. While the agreement is not exclusive, it marks a shift for Apple, which has traditionally built its own hardware‑software stack. The collaboration is expected to enable new experiences across Apple’s ecosystem while maintaining the company’s privacy standards.

Study Suggests Overreliance on AI May Reduce Cognitive Engagement

A recent study compared students writing essays with and without the assistance of a generative AI tool. Participants who used the AI showed lower levels of brain activity and reduced mental connectivity, while those who wrote without assistance exhibited higher engagement. The findings raise concerns about the potential for AI tools to encourage mental shortcuts, diminish critical thinking, and amplify bias if not used responsibly. Researchers emphasize the need for further investigation and for users to remain critical of both AI outputs and media coverage of such studies.

AI, Data Sovereignty and Metro-Edge Data Centers Reshape Europe’s Digital Landscape

Artificial intelligence is fueling Europe’s digital ambitions, but organizations face a critical need for massive, low‑latency storage that complies with strict data‑sovereignty rules. Regulations such as the GDPR, the Data Governance Act and the AI Act push firms to keep data within specific jurisdictions, while modern AI workloads demand petabyte‑scale capacity and ultra‑fast access. To meet these twin pressures, Europe is seeing rapid growth in metro‑edge data centers, localized facilities near major population and industrial hubs that combine high‑density storage, compliance, and proximity to compute resources. This shift toward local‑first, hybrid architectures promises to boost AI performance while satisfying regulatory requirements.

AI Agents Enter Business Core, but Oversight Lags Behind

Enterprises are rapidly integrating AI agents into core functions, with more than half of companies already deploying them. Despite this swift adoption, systematic verification and oversight remain largely absent. The agents are being trusted with critical tasks in sectors such as banking and healthcare, raising concerns about safety, accuracy, and potential manipulation. Industry experts argue that without multi‑layered testing frameworks and clear exit strategies, organizations risk exposing themselves to costly errors and systemic failures. The need for structured guardrails is growing as AI agents take on increasingly high‑stakes roles.

AI Won’t Replace Developers; It Will Evolve Their Role

A new series featuring tech leaders argues that artificial intelligence is not a threat to software developers but a catalyst for their evolution. While no‑code and “vibe coding” tools can speed up simple projects, complex products still require human expertise in architecture, security, and user experience. Developers who learn to collaborate with AI will become more efficient and valuable, using the technology to handle repetitive tasks and focus on higher‑level problems. The piece emphasizes that AI is a tool, not a replacement, and that the most successful developers will be those who become AI‑savvy.