News

FTC Removes Lina Khan-Era Blog Posts on AI Risks and Open Source

The Federal Trade Commission has taken down three blog posts from the Lina Khan era that addressed the risks of artificial intelligence to consumers and the role of open‑source models. The posts, originally published under Khan’s leadership, highlighted concerns such as fraud, impersonation, surveillance, and discrimination. The removals come under the direction of new FTC Chair Andrew Ferguson and align with a broader pattern of content deletions observed in the current administration, which has also altered other agency publications. Critics question whether the deletions comply with federal record‑keeping laws.

Meta AI App Usage Surges After Launch of Vibes Video Feed

Meta AI's mobile app saw a sharp rise in daily active users and downloads following the introduction of its Vibes AI video feed. Daily active users on iOS and Android jumped to 2.7 million by mid‑October, up from roughly 775,000 four weeks earlier, while daily downloads climbed to about 300,000. The surge coincides with Vibes' debut on September 25 and may also reflect heightened interest in AI video generators after OpenAI's Sora launch. Meanwhile, competing AI chat apps reported declines in user activity during the same period.

Anthropic Launches Claude Code Web App for Developers

Anthropic has introduced a web‑based version of its AI coding assistant, Claude Code, extending the tool beyond its terminal interface. The new web app is rolling out to subscribers of Anthropic’s Pro and Max plans, letting users access Claude Code via the Claude website or iOS app. The move aims to let developers spin up AI coding agents wherever they work, positioning Anthropic against rivals such as GitHub Copilot, Cursor, Google and OpenAI. The product, which has seen rapid user growth and now generates significant revenue, continues to rely heavily on Anthropic’s own AI models, with most of its code written by the models themselves.

Google TV Rolls Out Gemini AI Upgrade to Early Adopters

Google has begun rolling out its Gemini AI upgrade to select Google TV devices, with early availability on TCL's QM9K model and reports of the feature appearing on Sony Bravia TVs running Android TV 14. The update introduces a new conversational voice interface, a suite of botanical-themed voice options, and expanded capabilities such as answering content queries, summarizing headlines, generating screensavers, locating YouTube clips, and creating images. Google plans to expand Gemini to additional TCL models, upcoming Hisense TVs, the Google TV Streamer, and Walmart's onn 4K Pro streaming box over the coming months.

Friend AI Pendant Sparks NYC Subway Protest After Aggressive Ad Campaign

The wearable chatbot device Friend, launched by founder Avi Schiffmann, rolled out a high‑cost subway advertising campaign in New York City that quickly drew public ire. Commuters defaced the ads, shouted anti‑AI slogans, and organized a spontaneous protest that included tearing up cardboard cut‑outs of the device. Schiffmann later clarified he did not plan the event, traveled to New York after seeing the photos, and engaged with participants, emphasizing a desire for dialogue rather than sales to big‑tech firms.

AI Video Generators Surge: Overview of Sora, Veo 3, and Emerging Tools

The generative‑AI market is expanding into video, with major tech firms releasing text‑to‑video models such as OpenAI's Sora and Google's Veo 3. These tools create short clips from prompts or images, often lasting only a few seconds, and some include synchronized audio. Other companies—including Adobe, Midjourney, Runway, Luma, Pika, and Ideogram—offer comparable services, typically on a paid basis. While the technology promises new creative possibilities, it also brings challenges like hallucinations and unresolved legal and ethical questions. The ecosystem remains fast‑moving, with many products still evolving.

Amazon Web Services Outage Disrupts Major Apps and Services

A widespread outage at Amazon Web Services caused significant disruptions across a variety of popular applications and platforms that rely on the cloud provider. The incident affected services ranging from Amazon's own Alexa to third‑party apps such as Venmo, Snapchat, and Fortnite. AWS identified a DNS resolution issue affecting its DynamoDB API, which led to increased error rates and latency in the US‑East‑1 region. The company announced that the underlying problem had been mitigated, but some services continued to experience elevated errors, particularly with new EC2 instance launches. The outage highlighted the reliance of many internet services on a single cloud infrastructure and sparked concerns about resilience and redundancy.
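When a DNS resolution failure like this one takes an API endpoint offline, client-side retries with exponential backoff are a common mitigation while the provider recovers. As a minimal illustrative sketch (not AWS's official guidance, and the endpoint hostname is simply the public regional DynamoDB endpoint named in the outage reports), a client might probe resolution like this:

```python
import socket
import time

# Public regional endpoint cited in the outage reports.
DYNAMODB_ENDPOINT = "dynamodb.us-east-1.amazonaws.com"

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Exponential backoff delay in seconds for the given retry attempt."""
    return min(cap, base * (2 ** attempt))

def resolve_with_retry(host, attempts=5):
    """Try to resolve `host`, sleeping with backoff between failures.

    Returns the first resolved IP address, or None if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
            return infos[0][4][0]  # first resolved address
        except socket.gaierror:
            time.sleep(backoff_delay(attempt))
    return None
```

Production SDKs layer jitter on top of the backoff so that many clients do not retry in lockstep, which matters when error rates are already elevated region-wide.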

Amazon Web Services Outage Disrupts Major Apps and Websites Across US-East-1

A severe outage at Amazon Web Services (AWS) disrupted a broad swath of internet services in October. The incident stemmed from a DNS resolution problem affecting the DynamoDB API in the US‑East‑1 region, leading to increased error rates and latency across multiple AWS services. Popular platforms such as Venmo, Snapchat, Canva, Fortnite, Alexa, Lyft, Reddit, Disney+, and many others experienced partial or complete outages. AWS identified the issue, applied mitigations, and eventually restored most services, though new EC2 instance launches remained rate‑limited for some time. The outage highlighted the extensive reliance on AWS infrastructure across the digital ecosystem.

OpenAI Serves Subpoenas on AI‑Policy Nonprofits Amid Musk Lawsuit

In the ongoing legal battle with Elon Musk, OpenAI has issued subpoenas to a series of nonprofit organizations that have been critical of its shift to a for‑profit structure. Recipients include groups such as the San Francisco Foundation, Encode, the Future of Life Institute, and others. The subpoenas request extensive information about funding sources, communications, and any involvement with OpenAI’s governance, prompting concerns about legal costs, chilling effects on advocacy, and the broader implications for nonprofit independence in AI policy debates.

AI Drives Shift from Coding to Data Literacy in High Schools

High school educators are adapting curricula as artificial intelligence reshapes the tech job market. While computer‑science classes remain required, schools are emphasizing statistics, data analysis, and real‑world applications to prepare students for roles that complement AI rather than compete with it. Teachers report growing interest in applied math projects, interdisciplinary courses, and AI‑assisted learning tools. This transition reflects broader industry signals that value data literacy alongside coding, prompting a re‑balancing of STEM education toward interpreting and collaborating with machine intelligence.

Anthropic Teams with U.S. Agencies to Build Nuclear‑Risk Filter for Claude

Anthropic has partnered with the U.S. Department of Energy and the National Nuclear Security Administration to create a specialized classifier that blocks its Claude chatbot from providing information that could aid nuclear weapon development. The collaboration involved testing Claude in a Top‑Secret cloud environment, red‑team exercises by the NNSA, and the development of a filter based on a list of nuclear‑risk indicators. While the effort is praised as a proactive safety measure, experts express mixed views, questioning the classifier’s effectiveness and the broader implications of private AI firms accessing sensitive national‑security data.
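The actual NNSA-informed classifier is not public, but the general shape of an indicator-list filter can be sketched. In the toy example below, the indicator terms, the scoring, and the threshold are all invented purely for illustration; a real system would use a trained model rather than string matching:

```python
# Toy sketch of an indicator-list content filter. The term list and
# threshold here are hypothetical stand-ins, not Anthropic's or the
# NNSA's actual indicators.
RISK_INDICATORS = {"enrichment cascade", "weapons-grade", "implosion lens"}

def flag_prompt(text, threshold=1):
    """Return True if the prompt matches at least `threshold` indicators."""
    lowered = text.lower()
    hits = sum(1 for term in RISK_INDICATORS if term in lowered)
    return hits >= threshold
```

A flagged prompt would then be refused or routed for further review, while ordinary queries pass through unaffected.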