News

Perplexity Launches Hands‑Free Voice Control for Comet Browser

Perplexity has rolled out an upgraded voice mode for its Comet browser, allowing desktop users to navigate the web entirely by speech. The feature, powered by OpenAI’s gpt-realtime-1.5 model, lets users open sites, scroll pages, and follow links without touching a keyboard or mouse. A simple keyboard shortcut activates the mode, and a similar experience is slated for iOS later this month. Perplexity emphasizes privacy by processing voice locally when possible and avoiding cloud storage of click histories. Future updates promise a learning assistant, password manager, and cross‑device sync.

Lovable Launches SheBuilds Campaign for Women Builders on International Women’s Day

Lovable’s SheBuilds campaign, timed with International Women’s Day, invites women builders worldwide to a 24‑hour global event powered by Anthropic. Participants receive $100 in Anthropic API credits and $250 in Stripe fee credits, enabling them to design, prototype, and launch working products without traditional engineering barriers. Building on previous virtual buildathons, the initiative emphasizes real output over discussion, fostering agency and community among participants. By aligning the event with a cultural moment, Lovable aims to shift the tech industry’s focus from rhetoric to tangible creation, highlighting the importance of inclusion, rapid iteration, and visible impact in software development.

OpenAI Explores $100‑A‑Month ChatGPT Pro Lite Tier

OpenAI is testing a new subscription tier called ChatGPT Pro Lite, priced at $100 per month. The plan sits between the existing $20‑a‑month ChatGPT Plus and the $200‑a‑month ChatGPT Pro, aiming to serve users who need more capacity than Plus provides but cannot justify the full Pro price. The potential tier could offer higher usage limits, faster inference speeds, and access to advanced features while helping OpenAI manage rising compute costs.

Pete Hegseth Tells Anthropic to Align With DoD AI Demands or Face Exclusion

Defense Secretary Pete Hegseth warned AI firm Anthropic that it must cooperate with the Department of Defense’s AI strategy or risk being removed from the defense supply chain. The department’s recent AI strategy emphasizes open‑ended use of artificial intelligence to reshape warfare, while Anthropic has raised concerns about the reliability of its models for lethal missions without a human in the loop and has advocated for stricter rules on domestic surveillance uses. A cut would jeopardize Anthropic’s $200 million contract and affect partners such as Palantir.

Anthropic Explores the Question of Claude’s Consciousness

Anthropic officials have repeatedly expressed uncertainty about whether their chatbot Claude possesses consciousness. While denying that the model is alive in a biological sense, company leaders say they are open to the possibility and are investigating moral status and welfare. The firm has introduced a set of guidelines called Claude’s Constitution and created a model‑welfare team to study internal experiences, safety and ethical implications. Anthropic’s cautious approach aims to balance transparency with the risk of fueling misconceptions about AI sentience.

Amazon AI Lab Head David Luan Departs to Pursue New AI Endeavors

David Luan, who led Amazon's San Francisco artificial intelligence laboratory and oversaw the development of the Nova Act AI browser agent, announced his departure after less than two years with the company. In a LinkedIn post, Luan said he would leave at the end of the week to focus on new projects, emphasizing the proximity of artificial general intelligence and his desire to devote his time to teaching AI new capabilities. His exit occurs as Amazon faces internal criticism of its AI products and rolls out the Alexa Plus assistant to U.S. users.

ChatGPT Has Multiple Personalities: How to Choose the Best One for Your Questions

ChatGPT now offers several selectable personalities that change its tone and style without altering its core capabilities. Users can switch among options such as professional, friendly, candid, quirky, efficient, nerdy, and cynical, all available on the free plan. The settings are accessed through the Personalization menu, where users can also add custom instructions, preferred nicknames, occupations, and formatting preferences. These tweaks influence how answers are framed, shaping the user’s perception of the information. Tips from OpenAI insiders suggest matching the personality to the query’s intent, such as using a professional tone for work‑related topics and a more direct style for sensitive subjects.

AI Firms Shift From Free Promotions to Paid Models in India

Tech giants are ending free AI promotions in India as the country emerges as the world’s largest market for generative AI app downloads. While companies like OpenAI, Google and Perplexity have driven rapid user growth with extended free offers, recent data shows a sharp decline in in‑app purchase revenue after those promotions ended. Despite accounting for roughly one‑fifth of global AI app downloads, India contributes about one percent of AI app revenue, highlighting a monetization challenge. Industry leaders are now focusing on lower‑cost tiers, telecom bundles and micro‑transaction models to retain users and convert them into paying subscribers.

Anthropic Faces Pentagon Ultimatum Over AI Model Access

The Pentagon has given Anthropic a deadline to provide unrestricted access to its AI model for military use, threatening to label the company a supply‑chain risk or invoke the Defense Production Act. Anthropic, led by CEO Dario Amodei, refuses to loosen its safety safeguards that prohibit mass surveillance and fully autonomous weapons. The dispute highlights a clash between government pressure to secure AI capabilities and the company’s commitment to ethical usage, raising concerns about reliance on a single AI vendor and the broader stability of the U.S. tech environment.

Google AI Push Alert Contains Racial Slur, Prompting Apology and Industry Concern

Google issued an AI‑generated push notification that included the N‑word, linking to a Hollywood Reporter story about a recent BAFTA awards incident. The offensive alert was identified by Instagram user Danny Price, leading Google to remove the notification and apologize. The BAFTA incident involved an audience member with Tourette syndrome who involuntarily shouted the slur during a presentation by Michael B. Jordan and Delroy Lindo, sparking outrage and renewed discussion about vocal tics. The episode adds to a series of high‑profile AI errors, including earlier missteps by Apple.

OpenAI and Google Bolster Safeguards After Grok Abuse Scandal

In early 2026 the xAI tool Grok was used to create millions of non‑consensual sexual images, including thousands involving children. The fallout prompted major AI firms to tighten their defenses. OpenAI patched a vulnerability that let adversarial prompts generate intimate imagery, while Google simplified its process for removing explicit images from Search and reiterated its prohibited‑use policy. Both companies emphasized ongoing collaboration with security researchers and a commitment to stronger content‑moderation controls to prevent future abuse.

Microsoft Warns OpenClaw Is Unsafe for Standard Workstations

Microsoft’s security team has cautioned that OpenClaw, a self‑hosted AI agent runtime, should not be run on ordinary personal or enterprise computers. The platform can silently execute risky actions while holding persistent credentials, exposing devices to data leakage, credential exposure, and hidden configuration changes. Microsoft recommends isolating OpenClaw in a dedicated virtual machine or separate device, using limited, purpose‑built credentials, and employing continuous monitoring to detect unusual activity.