News

Anthropic Launches Claude Cowork Feature for macOS Users

Anthropic introduced Cowork, a new capability for its Claude AI that lets subscribers grant the chatbot access to a macOS folder. Users can chat with Claude to organize files, rename items, and generate spreadsheets or documents from the folder's contents. The feature, currently limited to Claude Max subscribers at $100 per month, also links to connectors for app integration and works with the Claude Chrome extension. Anthropic cautions that Cowork is in a research preview, recommending it be used only on non‑sensitive data and noting that it includes defenses against prompt‑injection attacks.

OpenAI Acquires Health Records Startup Torch

OpenAI announced the acquisition of Torch, a small health‑tech startup, for equity valued at $100 million. Torch’s four‑person team, which built a platform described as a "medical memory for AI," will join OpenAI as it expands its new ChatGPT Health service. The technology aims to unify scattered medical data—from doctor visits to wearable devices—into a single context engine for artificial‑intelligence analysis, positioning OpenAI to offer more comprehensive health‑focused AI tools.

Eleven Situations Where ChatGPT Should Not Be Fully Trusted

ChatGPT offers convenience for many everyday tasks, but it should not be relied on for health diagnoses, mental‑health support, emergency safety decisions, personalized financial or tax advice, handling confidential data, illegal activities, academic cheating, real‑time news monitoring, gambling, legal document drafting, or artistic creation. While it can provide general information and brainstorming assistance, depending on it for these high‑stakes matters can lead to serious consequences. Users are urged to treat the AI as a supplemental tool and seek professional expertise where accuracy, legality, or personal safety is at stake.

Anthropic Launches Claude for Healthcare Amid OpenAI’s ChatGPT Health Rollout

Anthropic announced Claude for Healthcare, a suite of AI tools aimed at providers, payers, and patients. Like OpenAI’s ChatGPT Health, the platform can sync health data from phones and wearables without using that data for model training. Claude adds advanced "connectors" to major medical databases such as the CMS Coverage Database, ICD-10, the National Provider Identifier Standard, and PubMed, enabling faster prior‑authorization reviews and research. While industry observers note the risk of hallucination‑prone large language models offering medical advice, both Anthropic and OpenAI caution users to consult qualified healthcare professionals.

Anthropic Launches Cowork, a User-Friendly Version of Claude Code

Anthropic introduced Cowork, a new tool that brings the capabilities of Claude Code to a broader audience through a simple folder‑based interface. Integrated into the Claude Desktop app, Cowork lets users designate a folder for the AI to read and modify files, with instructions given via the regular chat window. The feature is currently in a research preview and is limited to Max subscribers, though a waitlist exists for other plans. Anthropic highlighted use cases such as assembling expense reports from receipt photos and warned users about potential risks like prompt injection and ambiguous commands.

Locai Labs Bans Under‑18 Access and Image Generation, Calls for Industry Honesty Amid UK Probe of Elon Musk’s Grok Images

Locai Labs CEO James Drayson announced that the company will block users under 18 and suspend image‑generation features until safety can be assured. He warned that no AI model can guarantee protection against harmful or sexualized content, urging the industry to be transparent about the risks. In the United Kingdom, regulator Ofcom has opened an investigation into Elon Musk’s Grok platform, which allows image editing that can produce non‑consensual and sexualized depictions, including of children. The controversy has already led to bans in several countries and heightened calls for stricter AI regulation.

UK Regulator Probes X Over Grok AI Chatbot Misuse as Malaysia and Indonesia Block Service

Britain's media regulator Ofcom has opened a formal investigation into X under the Online Safety Act after reports that the Grok AI chatbot was used to create and share non‑consensual intimate images and child sexual abuse material. The probe will assess X's compliance with its legal duties, risk‑assessment procedures, and age‑verification safeguards. Meanwhile, Malaysia and Indonesia have become the first countries to block access to Grok, citing insufficient safeguards against explicit AI‑generated deepfakes of women and children. Both countries say the blocks will remain until stronger protections are implemented.

Google’s Play Store Policies Ban AI Apps Like Grok, Yet It Remains Available

Google’s Play Store policy explicitly prohibits apps that host or promote non-consensual sexual content, including deepfake‑generated imagery. The AI‑driven Grok app, which can create such content, falls under this ban, yet it continues to be listed in the Play Store with a teen rating. Apple also carries the app, though its guidelines are less detailed. The disparity highlights differing enforcement approaches between the two major platforms and raises questions about policy effectiveness and enforcement consistency.

Anthropic Launches Claude Cowork, Bringing AI Coding Assistant to General Users

Anthropic has introduced Claude Cowork, a preview feature that extends its Claude Code AI capabilities beyond developers to everyday users. By granting the system access to a folder, users can have Claude read, edit, or create files, organize downloads, convert receipts into spreadsheets, and navigate websites via a Chrome plugin. The tool runs on the Claude Max subscription and requires a Mac with the Claude macOS app. A waitlist is open for broader access. Anthropic emphasizes explicit user permission and clear instructions to avoid unintended actions.

Meta Unveils “Meta Compute” Initiative as Dina Powell McCormick Joins as President and Vice Chairman

Meta announced a new strategic program called Meta Compute to guide its massive infrastructure investments for data centers and artificial intelligence. The rollout coincides with the appointment of former board member Dina Powell McCormick as president and vice chairman, a role that will focus on government partnerships and financing. Santosh Janardhan, head of global engineering, will oversee the top‑level initiative, while Daniel Gross will lead a new group handling long‑term capacity strategy and supplier relationships. The company also disclosed three nuclear power agreements to support its data‑center energy needs and reaffirmed its plan to spend $600 billion on AI infrastructure by 2028.

Anthropic Launches Claude Cowork AI Agent Feature

Anthropic introduced Claude Cowork, a new AI‑agent capability for its Claude chatbot, as a research preview available in the macOS app for Claude Max subscribers. The feature lets users grant Claude access to local folders so it can read, edit, or create files, handling tasks such as reorganizing downloads, generating spreadsheets, or drafting reports. Claude Cowork also integrates with services like Asana, Notion, PayPal, and Chrome, offering continuous updates and parallel task execution. Anthropic highlighted safety concerns, noting the model’s ability to delete files and the risk of prompt‑injection attacks, and urged users to join a waitlist if they are not yet subscribers.

Apple Picks Google Gemini to Power Next-Generation Siri

Apple announced that its upcoming, more intelligent version of Siri will be powered by Google’s Gemini large‑language models. The partnership, described as multi‑year, lets Apple run Gemini on its Private Cloud Compute infrastructure, keeping user data isolated from Google’s servers. Apple said it chose Gemini after an extensive evaluation, calling it the most capable foundation for the new Siri. Bloomberg reported that Apple may pay roughly $1 billion a year for the access and that Apple still aims to eventually replace third‑party models with its own in‑house technology.